The Engineering of Biology at MIT

One of the big reunion activities at MIT is Technology Day, a series of faculty lectures held shortly after commencement.   This year the theme was Synthetic Life, and the talks were just as creepy and interesting as you might expect.

Technology Day panelists, June 10, 2017, Angela Belcher on right

Seven professors spoke on work ranging from designing viruses, to doing self-assembly of nano-structures, to using massive gene editing to find drugs to kill the malaria parasite.

Three new techniques have now made it possible to design lifeforms:

  • Cheap sequencing – the ability to quickly understand what’s in a biological mixture.
  • CRISPR/Cas9 gene editing – a protein borrowed from the cheese-making bacterium Streptococcus thermophilus that lets one replace particular DNA sequences with others.
  • Fast machine pattern recognition – a neural net technique that can find patterns in vast piles of random data.   Recently made feasible by GPU hardware, large datasets, and efficient training algorithms.

CRISPR lets one make a change, the sequencing tells if it takes, and the pattern recognition says if it does something.

Let me summarize the first talks and then concentrate on the last, which was the wackiest:

  • Prof. Linda Griffith wondered why a Jun kinase inhibitor worked so well on endometriosis in mice, but failed its human trial.  She came up with a way to grow “organoids”, bits of your own tissue grown in a dish, and found that the drug worked great on the 25% of the population with a particular genetic sequence, and failed on the others.  We can now do human trials on bits of our own flesh.
  • Prof. Ron Weiss has found a way to build Boolean logic gates with DNA – structures that, when two proteins come in, can express another protein.   Yes, it can make a NOR gate, which is functionally complete, so any Boolean circuit can be composed from it.  He wants to set up Boolean equations that will let a virus detect the proteins unique to a cancer cell and then turn on and kill it.
  • Prof. Feng Zhang described retro-viral therapy using CRISPR: inject viruses that will edit in naturally occurring sequences that protect against HIV and cardiovascular problems.   To be exact, they can block the production of the protein PCSK9, which interferes with the body’s clearance of LDL cholesterol.   Robert Metcalfe, the inventor of Ethernet, is a backer.
  • Prof. Darrell Irvine notes that the long-term survival rate of cancer used to be really bad, but with T-cell immunotherapy it got up to 25%.  That’s still pretty awful, and the reason is that the cancer can use the body’s own defenses against invaders to attack the introduced T-cells.  He has found a way to attach particles of drugs to the outside of the T-cells, and release them in the environment of the tumor to slow down the attackers.
  • Prof. Jacquin Niles says that the first widely used synthetic anti-malarial drug, chloroquine, has become useless as the parasite adapted to it, and the second, artemisinin, is losing ground too. The disease kills 700,000 people a year, mostly children in Africa.  He’s now going after the parasite’s genome one gene at a time, clipping genes out individually with CRISPR to see which ones the drugs are actually acting on.  Sponsored by the Gates Foundation.
  • Prof. Eric Alm is studying the microbiome, the set of bacterial species that live in our guts and have mysterious influences over us.  We can now sequence them all and so see if key species are missing.  Problems here could be associated with multiple sclerosis and even autism.  It’s another job for neural nets!   It’s possible to transplant the species if we could just figure out what they do.
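Prof. Weiss’s NOR gate really is all you need: NOR is functionally complete, so NOT, OR, and AND can all be composed from it.  A quick sketch in ordinary Python (illustrating the logic, not the biochemistry):

```python
# NOR is functionally complete: every other Boolean gate
# can be built from it alone.
def nor(a, b):
    return not (a or b)

def not_(a):          # NOT from a single NOR
    return nor(a, a)

def or_(a, b):        # OR = NOT(NOR)
    return not_(nor(a, b))

def and_(a, b):       # AND via De Morgan: AND(a,b) = NOR(NOT a, NOT b)
    return nor(not_(a), not_(b))

# Check the truth tables against Python's built-in operators.
for a in (False, True):
    for b in (False, True):
        assert not_(a) == (not a)
        assert or_(a, b) == (a or b)
        assert and_(a, b) == (a and b)
```

String enough of these together and you can express conditions like “protein X present AND protein Y present AND protein Z absent” – which is exactly the cancer-detection equation he’s after.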

And finally, Prof. Angela Belcher talked about doing actual machine fabrication with biological methods.   She notes that abalones manage to produce incredibly tough shells using just the elements they can extract from seawater.  Shouldn’t we be able to make finely-layered materials using similar techniques?

One big accomplishment of her lab was to fabricate the electrodes for a lithium-ion battery using a mat of viruses coated with iron phosphate.   Paper here – Fabricating Genetically Engineered High-Power Lithium Ion Batteries Using Multiple Virus Genes. Actual battery here:


The silver disk is the virus-built battery, and it’s powering an LED. Photo Donna Coveney. Click for story.

They adjusted the genes of the virus so that it would create a protein that could hold onto a carbon nanotube.   The nanotubes are only 1 nm wide and 500 nm long, so nothing else can handle them except a scanning tunneling microscope.   Here they make a billion viruses to pick up a billion  tubes at a time.  Then they dunk them in iron phosphate to actually form the electrode to grab the lithium ions, and the nanotubes make the overall material conductive.  The whole process can be done with common materials at room temperature.   They used the M13 bacteriophage virus, which is otherwise harmless.

What’s even wilder is that they can evolve these capabilities through un-natural selection.   Give each virus a different genome and see if they attach to something on a substrate.   If they don’t, wash them away, then repeat the process with the survivors.   You can try billions of variations after a few cycles.   They’re now trying this on particles of lead.  The surface of a lead crystal has a certain pattern of charges that a protein could lock onto.   Evolve those viruses in this hyper-accelerated way, and you have a way to remove lead from drinking water.   Build a filter out of a mass of these viruses, and pour stuff through it.   Make sure that the viruses themselves can be destroyed in another step!
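The wash-and-amplify cycle is easy to caricature in code.  Here’s a toy simulation (all numbers invented) where random “genomes” are scored by how well they match a hidden surface pattern, the best quarter survive each wash, and the survivors replicate with a little mutation:

```python
import random

random.seed(1)
TARGET = [random.randint(0, 1) for _ in range(20)]    # charge pattern on the surface

def affinity(genome):                                  # bits matching the target pattern
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.05):                         # occasional bit flips on replication
    return [1 - g if random.random() < rate else g for g in genome]

# Start with a random library of "viruses".
library = [[random.randint(0, 1) for _ in range(20)] for _ in range(500)]

for cycle in range(5):
    # Wash: keep only the best-binding quarter of the library.
    library.sort(key=affinity, reverse=True)
    survivors = library[:len(library) // 4]
    # Amplify: survivors replicate, with a little mutation.
    library = [mutate(random.choice(survivors)) for _ in range(500)]

best = max(affinity(g) for g in library)
# After a few cycles the best binders are close to a perfect match (20/20).
```

Five cycles of a toy model; the real process explores billions of protein variants against a real crystal surface, but the selection logic is the same.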

Overall, what MIT is doing is creating a new field of engineering.   To the usual set of electrical, chemical, mechanical, and civil engineering, they’re adding a new one: biological engineering.  They call it Course 20.  It’s now moving beyond medicine and into actual machine manufacture.    People have used the bio-robots called yeast to make beer and soy sauce for millennia, but now we’ll build all sorts of other things with them.    It’s just the sort of breakthrough that alumni are proud to see their alma mater working on.

Posted in Uncategorized | Tagged , , | Leave a comment

“The Earth After Us” – Humanity in Deep Time

So I was once sitting in an introductory geology class, and the instructor was talking about limestone.  “It’s sometimes formed by calcium carbonate precipitating directly out of seawater,” he said, “until life evolved 600 million years ago, and then it became largely made of skeletons.”  Six hundred million years – yesterday, really, to a geologist.  You look at this layer and there was no life, and in this layer there is.   You can see the difference in the rock’s fine structure.  The field has an effortless sense of the vast scale of time.

2008, Click for publisher link

So when the geologist Jan Zalasiewicz started thinking about what humanity would leave behind after its inevitable disappearance, he wasn’t thinking about the next century or next millennium.   That got covered nicely in The World Without Us by Alan Weisman (2007), who found that Nature comes back fast and hard in places where people can’t go, like the Korean DMZ and Chernobyl.    Zalasiewicz  wondered about a bigger picture.  Suppose that aliens landed in a hundred million years.   What could they learn about us?

It’s a good excuse to talk about geological processes in general, particularly his own specialty, stratigraphy.  He noted that most of the crust of the earth is made of rotted rocks, ones that have been broken to bits, laid down in sediments, and then cycled up and down by tectonics, sometimes multiple times.  This means that nothing of ours that’s inland will survive.   All inland cities will be eroded down to nothing by wind and water.   Higher latitude ones will be ground into sand by glaciers.   Boston and New York certainly won’t make it, since their locations have already been planed down by several rounds of Ice Ages.

The only remnants likely to survive will be ones that are built on river deltas, since they’ll get buried in sediment as the sea rises.  These are cities like New Orleans, Shanghai, and Venice.    New Orleans is below the level of the Mississippi already, and is likely to be all that ever remains of America.  It’s important that things get buried quickly before wave action turns them into beach sand and spreads them over the sea floor. It’s these former swamps where paleontologists find skeletons today, not the uplands where the dinosaur herds used to graze.

Cycad trunk and me in 1998 when I was thinner and had worse taste in hats

I visited one of these sites myself at the Joggins Fossil Cliffs in Nova Scotia.   This was a swamp that got buried in a sudden mud flow 310 million years ago, in the late Carboniferous.  It happened so suddenly that a complete skeleton of an early reptile, Hylonomus lyelli, was found in a hollowed-out trunk.  Oxygen levels were much higher then, and arthropods grew to enormous size; there’s a millipede track there that’s a foot wide.   Those trees made the area a coal-mining center until the 1950s, when a decline in demand and several disasters closed the mines.

Cities that do get buried in this way will form what he calls an Urban Stratum, a smashed mix of concrete and rusted-away rebar.   For each person in the developed world, about 500 tons of  sand, gravel, limestone, brick clay, and asphalt is used in the course of that person’s lifetime.   We each leave a lot more behind than a ten-ton dinosaur!  Other organic material like plastics and wood in this stratum will get cooked into oil.   Things like pure aluminum are never found in nature, but will occur here.

There will be other, larger traces of our presence.   There is an ongoing die-off of coral reefs, which themselves can form undersea mountains called carbonate platforms.  There has been a sudden disappearance of large mammals over most of the earth.  More distinctively, species that used to be found on only one continent, like rats, are now everywhere, causing a large loss in species diversity.   There’s a nice phrase for this, “Planet of Weeds”, where the species that are the most adaptable and fastest-growing are taking over.

Our own bones won’t leave as much of a record.   The dinosaur age lasted a hundred million years and we still don’t find that many.  He does estimate that the total living mass of humanity is now similar to that of the dinosaurs in the Jurassic.  Bodies in wooden coffins won’t last, but stone ones buried without oxygen might.   He notes that gangsters in concrete overcoats dropped into the Hudson have a good chance of survival, complete with impressions of spats and fedoras.

Nothing of human culture would remain – all art and books and electronic media will be lost.  About the most that the alien paleontologists could infer is that humans took care of their own, since there would be elderly skeletons with healed fractures.  On the other hand, many of the bodies we do find are victims of violence, like the 5300-year-old Ötzi, who had an arrow in his shoulder and a skull fracture, or the 9000-year-old La Brea Woman from the tar pits, who had part of her skull missing.  Bodies found in obscure places generally did not die peacefully.

Spiral view of earth history before the Anthropocene, (c) Astrobio Magazine, click for larger view

Zalasiewicz is actually the chair of the Anthropocene Working Group, part of the International Commission on Stratigraphy.   In August 2016 they voted to officially recommend it as the name of a new geological age.  They think that its most distinctive marker is the radioactive elements dispersed by atmospheric nuclear testing.   This only lasted from 1945 to 1980, but spread carbon-14 and plutonium over the whole world, a phenomenon now known as the Bomb Pulse.   It’s actually a useful marker for determining the age of trees and bones.   The half-life of these elements is short in geological terms, but the decay products are distinctive.
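“Short in geological terms” is an understatement.  A quick calculation with the standard decay formula N/N₀ = (1/2)^(t/t½), using the familiar half-lives of carbon-14 and plutonium-239:

```python
def fraction_remaining(t_years, half_life_years):
    """Radioactive decay: fraction of the original isotope left after t years."""
    return 0.5 ** (t_years / half_life_years)

# Approximate half-lives: carbon-14 ~5,730 years, plutonium-239 ~24,100 years.
c14_left = fraction_remaining(50_000, 5_730)        # ~0.2% left after a mere 50,000 years
pu_left = fraction_remaining(100_000_000, 24_100)   # effectively zero after 100 million years
```

So the alien geologists wouldn’t find the bomb isotopes themselves – only the distinctive daughter products frozen into that thin 20th-century layer.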

One thing that Zalasiewicz does not talk about is ruins left in space, since that’s outside the purview of geology.   Satellites below about 2000 km will eventually decay into the atmosphere, but ones at geosynchronous orbits will be there forever, although they’ll drift into stable longitudinal locations above 75E and 105W.   There is also a fair amount of debris on the Moon – 46 missions have impacted there.   The rate has been low in recent decades because The Moon is Dull.   There actually is erosion on the Moon due to a steady rain of micrometeorites, but I don’t think anyone knows if that will wear away stuff like the large Apollo artifacts over geological spans of time.

The other thing he doesn’t talk about is what actually happens to humanity.  It’s clear that we can’t go on living as we have been – the earth won’t support it.  Yet there’s no telling what a stable, sustainable human culture would look like, since there’s never been one.   Even the nature-loving Indians changed the whole Midwest from forest to prairie by means of fire.  We could go extinct, of course, but that would be a boring outcome, and pretty unlikely given how tough we now are.  We could now survive even the dinosaur-killer asteroid by stockpiling canned food in caverns.   One stable outcome would be to split into multiple species that become a predator-prey ecosystem like that of the Morlocks and Eloi in Wells’ The Time Machine.   Another would be to cycle endlessly between pastoral and industrial modes like Pournelle and Niven’s aliens in The Mote in God’s Eye.  They’ve been technological for so long that they’ve evolved an instinctual understanding of machinery, and go from hunter-gathering to nuclear war every five hundred years.

Science fiction is full of other ideas, but the bottom line is that we aren’t going to be here over the spans of time that geologists work on.  We’ll be just another unusual episode in the long and bizarre history of the earth.   Here the limestone came from seawater, here it came from diatom skeletons, and in this narrow layer it came from concrete.    It’ll be just one more oddity that the geologists deal with, like so many others.





Boston, City of Other People’s Bad Memories

My home city has had a remarkably long run of peace.  The last time war touched Boston was when it was besieged for almost a year by the colonials during the American Revolution, some 240 years ago.   Few places in the world have been at peace for so long – not the US South or West, nor Asia or India, nor most of Europe.  Even New York was damaged by riots in the Civil War, and by 9/11.

Perhaps that’s why Boston is filled with memorials to other people’s tragedies.  The events have nothing to do with Boston itself, but refugees from them have washed up on its calm shore.  Even generations later, they still want to tell their stories.  The city obliges.   It tracks all public art at this site, which I’ve used for the links below.

Here, for instance, is the New England Holocaust Memorial (sculpture 1995, event 1939-1945):

It’s the largest memorial in the city, and is squarely in its oldest section.  It looks deliberately jarring amongst all the old brick.   There’s one tower for each million victims, and they’re covered in numbers, meant to evoke the camp tattoos.

Not far away is the Boston Irish Famine Memorial (sculpture 1998, event 1845):

The healthy figures are leaving the starving ones behind, a disturbing image.

Rather more obscure are the Hungarian Revolution Freedom Fighters (sculpture 1986, event 1956):

This was from when the Soviets crushed an uprising in Hungary in the early Cold War.

Then there’s the Armenian Heritage Monument (sculpture 2012, event 1915-1923), commemorating the “immigrant experience”, but mainly about the Armenian Genocide:

The Turks and Armenians have been at it for a very long time, and are still in conflict.

Here’s the Polish Partisans Memorial (sculpture 1979, event 1941): This remembers the heroic and doomed resistance to the Nazi and Soviet invasions of WW II.  The sculpture had been on the Boston Common, but perhaps was too obscure and grim for the city’s main public space, and so is now near the World Trade Center T stop.

The city has plenty of memorials to its own past figures, of course, but they’re generally uplifting in some way.   These are all reminders of tragedy.  Note that they’re relatively new, all from the last couple of decades.   Remembering a past loss is a recent trend.   Is it a memento mori for groups that have done quite well in the new world?  A reminder to other Bostonians that this could happen to them?

More recent immigrant waves have their own tragedies: the Vietnamese in the early 70s, the Cambodians in the early 80s, the Russians fleeing the death of their dream, the Chechens not long after (and the city has anti-memorials to them from the Marathon bombing), Haitians, Somalians, and even students from Tiananmen Square.   They don’t yet have the political clout to present their own stories, but they will at some point.

The city and its universities have more volumes in their libraries (about 40 million) than anywhere else in the world, except New York City (50 million).  It already acts as a global memory.  It’s acting as a sculptural memory too.


Plug For “Let’s See This Work – An Engineer At the Movies”

Alec Guinness in one of the best STEM films, “The Man in the White Suit”

I’ve started a new blog that’s specific to movies, looking at them with a technical eye:  Let’s See This Work.  It’s not so much about the technical construction of movies, but rather notes from a technical person on what movies are about.    There’s a main post, STEM Movie List, that gives notes and links to films with Science / Technology / Engineering / Mathematics characters, but most of the posts are reviews of films and their themes.   Others are analysis: The Funniest Screenwriter of All Time, Movie Inventors – Oddballs, or a summation of IMDB keywords in What Are Movies About?  The older posts are carried over from this blog.  Hope you enjoy them!


How Did Wind Get So Cheap?

Rather surprisingly, wind power is now the cheapest form of electricity in the US:

Lazard LCOE Dec 2016. Click to enlarge.

This comes from the tree-hugging socialists at  Lazard Asset Management.  Unsubsidized wind comes in at $32 to $62 per MWh, depending on the site.   Natural gas is substantially more at $48 to $76, and coal and nuclear are right out at $60 to $143 and $97 to $136 respectively.  Wind is now generating 5% of all power in the US, and 45% in Denmark.  Its cost has fallen by 1/3 in the last five years.  Solar gets all the publicity, perhaps because those black panels look magical compared to the simple mechanics of a wind turbine, but wind is actually ahead, especially in the Midwest and Northeast.

So how did windmills change from being a stereotype of communes into major industrial machinery?  It looks like a combination of standard engineering improvements and quite non-standard citizen involvement in Denmark.  The story gets told in the comprehensive new book “Wind Energy For the Rest of Us” by Paul Gipe:

Click for author site


It discusses the historical development of turbines since the 19th century, the reasons for their key features, how to evaluate a site, how to choose a machine, and how to finance it.  It ends by noting that it would take about 750,000 standard 2 MW turbines to power all of the US.  They’re about as hard to build as a heavy truck, and much easier than aircraft, and so could not only save money and cut pollution but revitalize manufacturing.

A key reason why they’re getting cheaper is geometrical scaling – a wind turbine that’s twice as big sweeps out four times the area and so gets four times the power.  If it costs less than four times as much to build, you’re ahead.  Every tweak that makes the blades longer or the towers taller is a win.  They’re now up to 195 m tall, with 80 m blades and a peak power of 8 MW, although those are only used off-shore.
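The scaling follows from the standard rotor power formula P = Cp · ½ρAv³, where the swept area A grows as the square of the rotor radius.  A quick check (the power coefficient Cp = 0.45 here is just an assumed typical value, not from the book):

```python
import math

def swept_power(radius_m, wind_mps, cp=0.45, rho=1.225):
    """Rotor power: P = Cp * 1/2 * rho * A * v^3, with A the swept area."""
    area = math.pi * radius_m ** 2
    return cp * 0.5 * rho * area * wind_mps ** 3

small = swept_power(40, 8)    # 80 m rotor in an 8 m/s wind
big = swept_power(80, 8)      # double the radius, same wind
ratio = big / small           # -> 4.0: four times the area, four times the power
```

Meanwhile the v³ term is why taller towers pay off too – winds are faster and steadier higher up, and a modest gain in wind speed is cubed.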

Overall it looks like their development followed the typical pattern for a new machine: try lots of different approaches initially, then narrow down to an optimum one and drive for high volume and low cost.   The standard configuration today is an open rotor with three fiberglass blades on a horizontal axle facing upwind.  They have overspeed protection by twisting the blades and having brakes on the axle.   They feed a variable-speed under-sized generator, and it’s all on a tall cylindrical steel tower.   Every variation on these features has been tried:

  • Put a duct around the rotor to increase wind speed (too heavy) or have an open rotor.
  • The blade number: one (too heavy), two (too much vibration), four (too heavy), or three.
  • The blade material: wood (rots), steel (too heavy), aluminum (too weak), or fiberglass (a hollow shell around a structural spar, though it has to be made as one piece, which makes it hard to transport).
  • The axle orientation: vertical (can’t furl in high winds) or horizontal (turn sideways to high winds to lessen the force on the rotor).
  • The blade direction: facing downwind (makes a whomp noise every time a blade goes through the wind shadow of the tower) or upwind (have to worry about the blades hitting the tower, which is highly bad).
  • For overspeed protection: little parachutes on the tips (really?), slats on the blades (too prone to jam), or twisting the whole blade to make the airfoil stall, with a brake on the axle as a last resort.
  • Use a fixed-speed generator to match the 60 Hz of the grid (needs variable gearing or loses energy by turning rotor too slowly) or a variable-speed generator that’s converted to AC (possible with modern high-power switching transistors).
  • Size the generator to handle high winds (adds cost and weight) or size for only medium winds (improves the average power output since it runs full out more of the time, thus needing less backup.   The average power is now up to 40-50% of the rated limit versus 30% with the old over-sized generators).
  • The tower style: a lattice mast like a radio tower (rusts, gets covered in bird shit, makes wind noise, unsafe to climb), use guy wires (noise, needs land), or have a closed cylinder (which can be climbed in any weather).
  • The tower material: wood (can’t get tall enough), concrete (ugly and slow to assemble), steel.

So it took a long time to figure it all out!  But now that the major parameters are set, you can really go to town on optimization.  If you’re building $10B of wind turbines a year (as happened in 2016 in the US), a 0.1% improvement is worth $10M, which sure pays for your salary.  For example, the Betz Limit says the maximum fraction of the kinetic energy in an air stream that a rotor can extract is 16/27, about 59%.   Modern turbines get within 75% of that, so a tweak of the airfoil shape (perhaps depending on the wind speed distribution at a site) could get another percent or two.
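The Betz result comes from the power coefficient Cp(a) = 4a(1−a)², where a is the axial induction factor – how much the rotor slows the incoming air.  A brute-force scan confirms the maximum:

```python
# Betz analysis: with axial induction factor a (how much the rotor slows
# the air), the power coefficient is Cp(a) = 4 * a * (1 - a)**2.
def cp(a):
    return 4 * a * (1 - a) ** 2

# Scan a from 0 to 1 and find where Cp peaks.
best_a = max((i / 10_000 for i in range(10_001)), key=cp)
best_cp = cp(best_a)   # peak at a = 1/3, giving Cp = 16/27, about 0.593
```

Intuitively, slow the air too little and you extract nothing; stop it completely and no air flows through at all.  The optimum is slowing it by a third.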

And now that people have a lot of experience with the designs, they can estimate how long they’ll really last.   Early machines failed after only a couple of years, but modern ones last for twenty.  That makes a big difference in the financing, since you no longer need a risk premium for early failure.

What’s more interesting, though, is that it got started in a quite unconventional way – at a folk craft / technical school in Denmark, the Tvind School.   They built the first modern wind turbine, the Tvindkraft, with secondhand parts and student labor:

900 kW turbine and the Tvind School in Jutland. Click for link


It’s still working after almost 40 years!  It pioneered the use of cantilevered fiberglass blades and a particular kind of flange for the blades that uses glass fibers wrapped around the bolt holes.  It’s big, 900 kW, and generated so much power that the grid couldn’t handle it, and so they dumped it into heating the school.

It was built during the oil crises of the 1970s.  The Danes had to buy oil from the Mideast and coal from Germany, and liked neither option.   Nor did they like buying nuclear-generated electricity from Sweden, who put a reactor immediately across the strait from them.  It had to be wind, but as a small country they didn’t have the resources to do a big research program.

The US had revived wind research at the same time, but gave it to NASA, GE, and Boeing.  They applied aerospace ideas to it, thinking that it was just another airfoil.  But it’s not – weight doesn’t matter much.   What really matters is reliability, and the GE and Boeing machines failed after less than a year.

The Danes came at it from a cost and reliability standpoint instead of performance.  They concentrated on making the blades strong and durable – each of the Tvindkraft blades weighs about five tons.   Even more importantly, they published their designs. A lot of small firms sprang up making similar-looking mills.   Some even specialized in making just pieces like the blades.  Danish windmill owners also joined up in cooperatives and forced the builders to have standard features like brakes on the generator.  Some people didn’t like the look of the towers, but once they got a share of the revenue they minded much less.

Eventually a farm-machinery company, Vestas, got involved and took it over.   They’re now about the largest wind company in the world, with 2016 sales of about 10 billion euros.  They’re closely followed by another Danish company, Bonus Energy, now a part of Siemens.  GE is still a large player, and is covering the Midwest in towers.  The Chinese bought Danish and German tech and now dominate the field, since they’re desperate to get away from coal.  At the end of 2015 there were more than 300,000 turbines operating around the world, with a capacity of more than 430 GW, and that grew by 63 GW in just that year.

So this huge industry started at an obscure school in Denmark, grew to a cluster of small companies, got taken up by some larger ones, was incentivized by feed-in tariffs in Europe and investment tax credits in the US, and over 40 years worked out all the kinks in the technology.  It benefited from natural geometric scaling, from airfoil simulations, from power electronics, and most of all from operational experience.  Another factor of two in cost reduction is in sight, and off-shore tech is coming on strong.   It’s the newest class of large machinery in the world (rockets and nuclear reactors come from the 50s, and jets and container ships from the 60s), and it’s taking over.


The Auto Industry Does Its Bit

I recently leased a 2017 Chevy Volt.   It’s a nice mid-range car with good interior space, a lot of zip, and is really quiet.   It’s even rather stylish:


And it gets 75 miles per gallon in terms of CO2 emissions, 3X the US new vehicle average.  That is, it gets 40 mpg when running on gas, and the equivalent of  90 mpg when running on electricity at 2.7 miles per kilowatt-hour.   A kilowatt-hour is about what a small air conditioner consumes every two hours.   I drive on electricity about 85% of the time, so that averages out to 75.
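The blended figure has to be computed on fuel per mile, not mpg, since mpg doesn’t average linearly.  A quick check of the arithmetic:

```python
# Blend the two driving modes by fuel-per-mile (harmonic weighting), not by mpg.
gas_mpg = 40          # running on gasoline
electric_mpge = 90    # running on electricity, in mpg-equivalent
electric_share = 0.85 # fraction of miles driven on electricity

gallons_per_mile = electric_share / electric_mpge + (1 - electric_share) / gas_mpg
blended_mpg = 1 / gallons_per_mile   # about 76, close to the ~75 quoted
```

Averaging the mpg numbers directly would give 82.5, which is wrong – the gas miles drag the average down more than their share suggests.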

The number is so high because Massachusetts has pretty clean power.  The EPA tracks this here: EPA Power Profiler.    It says that MA burns about 50% natural gas with the rest as nuclear (30%), hydro (6%), wind (4%), coal (3%), landfill gas (2%), and some solar photovoltaic and biomass.  The US as a whole averages about 55 mpg because they burn a lot more coal.   This data is all from 2012, though, and coal is way down since then.  MA has dropped its emissions from electricity by almost a factor of 2 between a peak in 2007 and 2013, according to the state tracking site here: MA GHG Emission Trends.

Electric car sales are growing fast.   About 160,000 battery-electric and plug-in hybrids were sold in the US in 2016, a 37% increase over 2015.  That’s still only 2% of overall US car sales of ~7M, and only 1% of car and truck sales of ~17M, but it’s a lot.  At $50K per car, about $8B of electric cars were sold last year, which is only a little smaller than the movie industry.   The breakdown by model is:


It’s nice that the top 4 models are American, and that the Volt and Fusion Energi are  union-made.   These were record years for the Teslas and Volts.

The common rule is that greenhouse gas emissions have to drop by 80% by 2050 compared to 2005 levels in order to keep global warming under 2 degrees C.  The average new car and truck in 2005 got about 20 mpg, or 5 gallons per 100 miles, and this car does about 1.3 gallons per 100 miles, a 75% drop.   It’s almost there already!  The EU currently has a limit for new cars of 130 gm CO2 per km (42 mpg), and is going to 95 gm/km in 2021.  This car does about 62 gm/km, and will get cleaner still as the power system de-carbonizes.
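The mpg-to-g/km conversion is easy to verify, using the EPA’s standard figure of about 8,887 grams of CO2 per gallon of gasoline:

```python
GRAMS_CO2_PER_GALLON = 8887   # EPA figure for burning a gallon of gasoline
KM_PER_MILE = 1.60934

def mpg_to_g_per_km(mpg):
    """Convert fuel economy in mpg to tailpipe CO2 in grams per kilometer."""
    return GRAMS_CO2_PER_GALLON / (mpg * KM_PER_MILE)

eu_limit = mpg_to_g_per_km(42)   # about 131 g/km, matching the EU's 130 g/km limit
```

(The car’s own 62 gm/km figure isn’t a straight gasoline conversion, since most of its miles come from the grid, whose CO2 per mile depends on the local fuel mix.)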

So don’t blame the auto industry for climate change going forward.  They’re offering mid-priced, comfortably appointed, well-driving cars that are way ahead of government regulations and about up to future requirements.  This car even feels much better when it’s running on the battery.   It’s smoother, quieter, and has more acceleration.   When the gas engine comes on, I’m reminded of what 20th century cars felt like.   It feels like phones that had to be wired to the wall, TVs that only played shows when they wanted to, and information that you had to go to the library to find.  This feels like a 21st century car.



When Modeling Goes Bad – “Weapons of Math Destruction”

The political modeling that I talked about in the last post now affects most decisions that institutions make with respect to individuals.   This is nicely described in the recent book Weapons of Math Destruction by Cathy O’Neil.   She has worked on these systems herself while at the hedge fund D. E. Shaw and at various e-commerce startups.


Click for link to author blog

This modeling attempts to classify millions of people based on anything that can be gleaned about them online.  She has chapters on each of these categories of decisions:

  • College Admissions – are driven by metrics related to the U.S. News & World Report rankings, which conveniently don’t include tuition.
  • Sentencing and Parole – Who is likely to commit more crimes before and after jail time?
  • Hiring – Study people’s social media, credit scores, and judicial records to see if they’re a good match for a firm.
  • Firing – Teachers are especially closely judged these days because of right-wing opposition to the whole concept of public schools.   In particular, the No Child Left Behind Act almost forces teachers to be ranked and fired.   This has had the predictable consequences of teachers leaving low-performing school districts, skewing lessons towards the tests, and cheating.
  • Borrowing – How are credit scores actually arrived upon?  FICO is actually a clear and straightforward metric, but lots of banks use mysterious e-scores these days.
  • Insurance – Who gets covered and for how much?  The ACA forced consistent standards on the medical insurance industry, but that’s about to disappear.
  • Voting – How can the news and advertising that people see be tuned to persuade them to vote one way or the other?  She actually discusses the work of Cambridge Analytica, which had a role in Trump’s victory, even though the book came out long before the election.

These decisions are largely made by computer these days because it’s cheap.   Interviewing students or borrowers or applicants takes real people with real skills, and that’s more expensive than just screening them by algorithm.   That means it gets done for the upper classes, who otherwise get annoyed by impersonal rejection, but not for the middle class and below.

But cheap methods are usually crummy, and that’s true here too.  They don’t have nearly enough statistical power to do a good job.   They use way too few data points (e.g. teacher evaluations are based on only 20 or 30 scores from wildly different children), rely on proxies that have no real connection to what’s being decided, and have poor feedback paths to adjust the models.
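It’s easy to see how little 25 noisy scores tell you.  Here’s a toy simulation (all the numbers are invented for illustration) of two teachers with identical true skill, each rated by the average of 25 student scores:

```python
import random

random.seed(0)
TRUE_SKILL = 70   # both teachers are exactly this good
NOISE_SD = 15     # spread among wildly different students
N_SCORES = 25     # scores per teacher per evaluation

def observed_rating():
    """A teacher's rating: the mean of just 25 noisy student scores."""
    scores = [random.gauss(TRUE_SKILL, NOISE_SD) for _ in range(N_SCORES)]
    return sum(scores) / N_SCORES

# How often do two *identical* teachers end up 5+ rating points apart?
trials = 10_000
big_gaps = sum(abs(observed_rating() - observed_rating()) >= 5
               for _ in range(trials))
gap_rate = big_gaps / trials   # roughly a quarter of the time, from noise alone
```

With a standard error of 15/√25 = 3 points per rating, two equally good teachers routinely look 5 or more points apart – yet rankings built on such numbers end careers.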

Worse still, the methods are completely opaque to the people they are affecting, and often to the people using them.    An answer spits out, and there’s no recourse.  No one knows why they got turned down.   If they’re using a neural net, even the coders don’t know why it gives the answers it does.

Even worse still, the goal of the algorithm is entirely for the benefit of the organization running it.   No larger social goal can be applied, nor can any larger sense of fairness.  Thus the algorithm can easily cause death spirals.  For example, by denying mortgages to certain neighborhoods, the area declines, making it less attractive for investment, causing further declines.  By denying people bail or parole, whole classes of people can be put in decline.  The algorithm may optimize the short-term profit of the people running it, but is too mysterious to serve long-term goals even for them, much less society as a whole.

She contrasts this with player evaluations in major league sports.  The statistics about a player’s performance are all publicly known, and are plentiful if they’ve been in the game for any time.  They are directly related to the main question – how much will this player help the team win?   The model can be constantly run to verify its predictions, and adjusted when wrong.   But if you have a model for who makes a good hire, you really only get to see a little about who gets picked, and then only at infrequent reviews.  You don’t learn anything about the people rejected.

So what is to be done?  Her suggestions don’t seem that helpful to me.  She focuses on an ethics code for programmers of such algorithms.    That’s been a valuable approach in civil engineering, where people really are conscious that their work can kill when it fails.  Few other engineering disciplines insist on this, though.  The connection between one’s work and its consequences is much more remote in programming than it is in construction.

It’s better to have public and independent inspections.   That’s how bridges get certified.   It’s coming to be how components get certified in cars and airplanes.  An outside party reviews the design process and the safety behavior of a device and gives it a rating.

That’s hard for big software systems like these, especially since they’re considered to be a business advantage.  What people can do is test the system with simulated applications.  She describes how researchers can create fake online personas to see how their social media gets steered, or their search results, or their college and loan applications.

The Big Data companies like Facebook and Google hate this, though, and do everything they can to prevent it.  They don’t want people to know how they’re being judged, for fear that users will game the results.   There’ll be an arms race between the parties trying to understand what Big Data is doing to society and the increasingly malevolent firms themselves.

Anyway, the book as a whole is clearly written and thorough.  Her blog is excellent too!   It’s a good overview of a problem that will only get worse.
