The Auto Industry Does Its Bit

I recently leased a 2017 Chevy Volt.  It’s a nice mid-range car with good interior space, a lot of zip, and a really quiet ride.  It’s even rather stylish:


And it gets 75 miles per gallon in terms of CO2 emissions, 3X the US new-vehicle average.  That is, it gets 40 mpg when running on gas, and the equivalent of 90 mpg when running on electricity at 2.7 miles per kilowatt-hour.  A kilowatt-hour is about what a small air conditioner consumes every two hours.  I drive on electricity about 85% of the time, so that averages out to 75.
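The blend is a harmonic average (you average fuel per mile, not mpg).  A quick sketch of the arithmetic, using the numbers above (the 85% electric share is just my own driving mix):

```python
# Blended fuel economy is a harmonic mean: average fuel *per mile*, not mpg.
GAS_MPG = 40          # engine-only economy
ELECTRIC_MPGE = 90    # CO2-equivalent economy on MA grid power
ELECTRIC_SHARE = 0.85 # fraction of miles driven on the battery

gallons_per_mile = ELECTRIC_SHARE / ELECTRIC_MPGE + (1 - ELECTRIC_SHARE) / GAS_MPG
blended_mpg = 1 / gallons_per_mile
print(round(blended_mpg))  # ~76, in line with the ~75 mpg quoted above
```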

The number is so high because Massachusetts has pretty clean power.  The EPA tracks this here: EPA Power Profiler.  It says that MA burns about 50% natural gas, with the rest as nuclear (30%), hydro (6%), wind (4%), coal (3%), landfill gas (2%), and some solar photovoltaic and biomass.  The US as a whole averages about 55 mpg because it burns a lot more coal.  This data is all from 2012, though, and coal is way down since then.  MA dropped its emissions from electricity by almost a factor of 2 between a peak in 2007 and 2013, according to the state tracking site here: MA GHG Emission Trends.

Electric car sales are growing fast.   About 160,000 battery-electric and plug-in hybrids were sold in the US in 2016, a 37% increase over 2015.  That’s still only 2% of overall US car sales of ~7M, and only 1% of car and truck sales of ~17M, but it’s a lot.  At $50K per car, about $8B of electric cars were sold last year, which is only a little smaller than the movie industry.   The breakdown by model is:


It’s nice that the top 4 models are American, and that the Volt and Fusion Energi are union-made.  These were record years for the Teslas and Volts.

The common rule is that greenhouse gas emissions have to drop by 80% by 2050 compared to 2005 levels in order to keep global warming under 2 degrees C.  The average new car and truck in 2005 got about 20 mpg, or 5 gallons per 100 miles, and this car does about 1.3 gallons/100 miles, a roughly 75% drop.  It’s almost there already!  The EU currently has a limit for new cars of 130 gm CO2 per km (42 mpg), and is going to 95 gm/km in 2021.  This car does about 62 gm/km, and will get cleaner still as the power system de-carbonizes.
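Converting those EU gm/km limits into a US-style mpg figure uses the EPA's standard ~8,887 grams of CO2 released per gallon of gasoline burned.  A quick conversion sketch:

```python
# Convert an EU-style gm CO2/km limit into the equivalent gasoline mpg.
G_CO2_PER_GALLON = 8887   # EPA figure for burning one gallon of gasoline
KM_PER_MILE = 1.609344

def g_per_km_to_mpg(g_per_km):
    return G_CO2_PER_GALLON / (g_per_km * KM_PER_MILE)

print(round(g_per_km_to_mpg(130)))  # current EU limit -> ~42 mpg
print(round(g_per_km_to_mpg(95)))   # 2021 limit -> ~58 mpg
```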

So don’t blame the auto industry for climate change going forward.  They’re offering mid-priced, comfortably appointed, well-driving cars that are way ahead of government regulations and already close to future requirements.  This car even feels much better when it’s running on the battery.  It’s smoother, quieter, and has more acceleration.  When the gas engine comes on, I’m reminded of what 20th-century cars felt like.  It feels like phones that had to be wired to the wall, TVs that only played shows when they wanted to, and information that you had to go to the library to find.  This feels like a 21st-century car.


Posted in Uncategorized | Tagged | Leave a comment

When Modeling Goes Bad – “Weapons of Math Destruction”

The political modeling that I talked about in the last post now affects most decisions that institutions make with respect to individuals.  This is nicely described in the recent book Weapons of Math Destruction by Cathy O’Neil.  She has worked on these systems herself, at the hedge fund D. E. Shaw and at various e-commerce startups.


Click for link to author blog

This modeling attempts to classify millions of people based on anything that can be gleaned about them online.  She has chapters on each of these categories of decisions:

  • College Admissions – driven by metrics related to the U.S. News & World Report rankings, which conveniently don’t include tuition.
  • Sentencing and Parole – Who is likely to commit more crimes before and after jail time?
  • Hiring – Study people’s social media, credit scores, and judicial records to see if they’re a good match for a firm.
  • Firing – Teachers are especially closely judged these days because of right-wing opposition to the whole concept of public schools.   In particular, the No Child Left Behind Act almost forces teachers to be ranked and fired.   This has had the predictable consequences of teachers leaving low-performing school districts, skewing lessons towards the tests, and cheating.
  • Borrowing – How are credit scores actually arrived at?  FICO is a clear and straightforward metric, but lots of banks use mysterious e-scores these days.
  • Insurance – Who gets covered and for how much?  The ACA forced consistent standards on the medical insurance industry, but that’s about to disappear.
  • Voting – How can the news and advertising that people see be tuned to persuade them to vote one way or the other?  She actually discusses the work of Cambridge Analytica, which had a role in Trump’s victory, even though the book came out long before the election.

These decisions are largely made by computer these days because it’s cheap.   Interviewing students or borrowers or applicants takes real people with real skills, and that’s more expensive than just screening them by algorithm.   That means it gets done for the upper classes, who otherwise get annoyed by impersonal rejection, but not for the middle class and below.

But cheap methods are usually crummy, and that’s true here too.  They don’t have nearly enough statistical power to do a good job.  They use way too few data points (e.g. teacher evaluations are based on only 20 or 30 scores from wildly different children), rely on proxies that have no real connection to what’s being decided, and have poor feedback paths to adjust the models.
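The small-sample problem is easy to show with a toy simulation (the score spread here is invented, just to illustrate the scale of the noise): a perfectly average teacher, rated on ~25 noisy student scores a year, will look several points "better" or "worse" from one year to the next for no reason at all.

```python
import random

random.seed(1)

# One exactly-average teacher, judged each year on test-score gains that vary
# wildly from child to child.  With only 25 scores, the yearly estimate swings
# around purely from sampling noise (std error ~ 15/sqrt(25) = 3 points).
true_effect = 0.0    # this teacher adds nothing and subtracts nothing
child_noise = 15.0   # spread of individual score gains (assumed)

yearly_estimates = []
for year in range(10):
    scores = [random.gauss(true_effect, child_noise) for _ in range(25)]
    yearly_estimates.append(sum(scores) / len(scores))

print([round(e, 1) for e in yearly_estimates])
```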

Worse still, the methods are completely opaque to the people they are affecting, and often to the people using them.    An answer spits out, and there’s no recourse.  No one knows why they got turned down.   If they’re using a neural net, even the coders don’t know why it gives the answers it does.

Even worse, the goal of the algorithm is entirely for the benefit of the organization running it.  No larger social goal can be applied, nor can any larger sense of fairness.  Thus the algorithm can easily cause death spirals.  For example, by denying mortgages to certain neighborhoods, the area declines, making it less attractive for investment, causing further declines.  By denying people bail or parole, whole classes of people can be put into decline.  The algorithm may optimize the short-term profit of the people running it, but is too mysterious to serve long-term goals even for them, much less society as a whole.
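The feedback shape of such a death spiral fits in a few lines (all the numbers here are invented, purely to show the loop): denials depress values, and depressed values drive more denials.

```python
# Death-spiral sketch: a lending model denies more mortgages in a "declining"
# area, which depresses property values, which makes the model deny still more.
value = 100.0        # neighborhood property-value index
denial_rate = 0.2    # initial mortgage denial rate

for year in range(10):
    value *= 1 - 0.5 * denial_rate                         # less lending -> prices sag
    denial_rate = min(0.9, denial_rate + 0.1 * (100 - value) / 100)  # sagging prices -> more denials

print(round(value, 1), round(denial_rate, 2))  # both monotonically worse
```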

She contrasts this with player evaluations in major league sports.  The statistics about a player’s performance are all publicly known, and are plentiful if they’ve been in the game for any time.  They are directly related to the main question – how much will this player help the team win?   The model can be constantly run to verify its predictions, and adjusted when wrong.   But if you have a model for who makes a good hire, you really only get to see a little about who gets picked, and then only at infrequent reviews.  You don’t learn anything about the people rejected.
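That censored-feedback problem is simple to simulate (a toy model, not anything from the book's case studies): a noisy proxy score gates who gets hired, and since nobody ever observes the rejected candidates, the model's mistakes on them are invisible and uncorrectable.

```python
import random

random.seed(0)

# Hiring-model sketch: a noisy proxy score decides who gets hired, and we only
# ever observe outcomes for the hired group.  Strong candidates wrongly screened
# out generate no feedback at all.
candidates = [random.gauss(0, 1) for _ in range(10_000)]  # true ability
proxy = [a + random.gauss(0, 1) for a in candidates]      # what the model sees

hired = [a for a, p in zip(candidates, proxy) if p > 1.0]
rejected_but_good = sum(1 for a, p in zip(candidates, proxy)
                        if p <= 1.0 and a > 1.0)

print(f"hired: {len(hired)}")
print(f"strong candidates wrongly rejected (never observed): {rejected_but_good}")
```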

So what is to be done?  Her suggestions don’t seem that helpful to me.  She focuses on an ethics code for programmers of such algorithms.    That’s been a valuable approach in civil engineering, where people really are conscious that their work can kill when it fails.  Few other engineering disciplines insist on this, though.  The connection between one’s work and its consequences is much more remote in programming than it is in construction.

It’s better to have public and independent inspections.   That’s how bridges get certified.   It’s coming to be how components get certified in cars and airplanes.  An outside party reviews the design process and the safety behavior of a device and gives it a rating.

That’s hard for big software systems like these, especially since they’re considered to be a business advantage.  What people can do is test the system with simulated applications.  She describes how researchers can create fake online personas to see how their social media gets steered, or their search results, or their college and loan applications.

The Big Data companies like Facebook and Google hate this, though, and do everything they can to prevent it.  They don’t want people to know how they’re being judged, for fear that users will game the results.   There’ll be an arms race between the parties trying to understand what Big Data is doing to society and the increasingly malevolent firms themselves.

Anyway, the book as a whole is clearly written and thorough.  Her blog is excellent too!   It’s a good overview of a problem that will only get worse.


Weaponized Psychology Helped Elect Trump

The US has just elected a president who is an outright criminal – a man who cheats contractors, steals from investors, and assaults women. What on earth happened?  Everyone has a theory, but let me add one more – his campaign made use of weaponized psychology.   I noted in my review of the SF novel Affinities that large data sets and serious mathematical analysis were getting traction even in the most difficult of subjects like psychology.  We’re now seeing real-world consequences of these advances.

Analytica CEO Alexander Nix, which is a good supervillain name


To be specific, Trump used the services of a company called Cambridge Analytica to do his voter analysis and message management.  They are a US subsidiary of a UK firm with the blandly sinister name Strategic Communications Laboratories.  They’ve been conducting propaganda and disinformation campaigns all over the world for the last twenty years, often for the US DoD and UK MoD.  They worked on the Leave side of Brexit.

Cambridge Analytica was backed in the US by one Robert Mercer, a right-wing hedge fund billionaire with a PhD in computer science from the University of Illinois.   He was a major backer of Ted Cruz, but even CA couldn’t save that campaign.   The standard joke was “Why do people take an instant dislike to Ted Cruz?  It saves time.”  When Cruz folded up, Mercer shifted his funding to Trump.   His daughter Rebekah is now a member of Trump’s transition team.  Trump’s former campaign advisor Paul Manafort was apparently against hiring CA, but was overruled by Jared Kushner, Trump’s son-in-law. Mercer himself spends money on teenage-boy projects like huge model railroad sets, a collection of machine guns including that of the robot in “Terminator”, and enormous yachts.  He believes the US should return to the gold standard.   He was sued by his mansion staff for stiffing them on pay, and so should fit right in at the new Administration.

CA’s job was to find the narrow path to electoral college victory for Trump.  Clinton had wrapped up the Northeast and Far West, and most of the South and the Mountain West was solidly Trump, but the upper Midwest could be exploited.  CA signed up with Facebook and got access to the profiles of hundreds of millions of voters.  They built models of each voter based on the OCEAN personality profile system.  This stands for Openness to experience, Conscientiousness, Extraversion, Agreeableness, and Neuroticism.  It differs from systems such as Myers-Briggs in that it arose from factor analysis (grouping people by statistically co-occurring traits) rather than from an underlying theory.  A nice feature is that the traits can be inferred from writings and links instead of requiring questionnaires.
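Factor analysis in miniature, to show what "no underlying theory" means (a synthetic sketch with invented loadings, not CA's actual pipeline): generate questionnaire items driven by two hidden traits, and the eigenvalues of the item correlation matrix reveal that exactly two factors are present.

```python
import numpy as np

rng = np.random.default_rng(0)

# Six questionnaire items secretly driven by two latent traits.  Factor
# analysis recovers the number of traits from the data alone: the correlation
# matrix has two large eigenvalues and four small ones.
n = 2000
traits = rng.standard_normal((n, 2))             # two hidden traits per person
loadings = np.array([[0.9, 0.8, 0.7, 0.0, 0.0, 0.0],   # items 1-3 load on trait 1
                     [0.0, 0.0, 0.0, 0.9, 0.8, 0.7]])  # items 4-6 load on trait 2
items = traits @ loadings + 0.4 * rng.standard_normal((n, 6))

corr = np.corrcoef(items, rowvar=False)
eigvals = np.sort(np.linalg.eigvalsh(corr))[::-1]
print(np.round(eigvals, 2))  # two eigenvalues well above 1, four small ones
```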

Once you know the personality types you’re dealing with, you can judge the effect of political messages on them.   Again, you use feedback from social media to estimate how well an approach is working.  In the weeks before the election, CA saw that early voter turnout was higher among older, rural white voters and lower among blacks than expected.  They reset their poll weightings and saw their opportunity.  They did big ad buys in the northern Midwest and advised Trump to focus there.  He changed his campaign schedule to include stops in Michigan.   Pundits thought that was crazy, since Michigan was solid blue, but he actually took the state.

In the end it took only a hundred thousand votes to swing Wisconsin, Michigan, and Pennsylvania.  That’s less than 0.1 % of the votes cast, but it will change the entire direction of the country for the next four years, and likely well beyond.   That’s the point of leverage that this kind of analysis can find.

Now, the Clinton campaign had their own analytics firm, Timshel, founded by a veteran of Obama’s 2012 campaign, Michael Slaby.  Obama’s campaign was also notable for its use of Big Data to try to capture the intent of every single voter.  I remember some discussion early in the campaign about who Trump would use, and the consensus was that no serious technical firm would ruin its reputation by associating with Trump.  The only exception was Peter Thiel and his mysterious Palantir Technologies, but they don’t seem to have gotten involved.  What Trump’s victory showed is that this kind of technology can be used by either side.  It’s not something that only sophisticated progressives can handle.  Thinking that was snobbish.

Was it an important factor in his win?   Maybe not compared to anti-Clinton misogyny, xenophobia, interference by Russia and the FBI, weariness with eight years of Dem rule or any of the other swirl of explanations.   Any or all of them could have contributed.   What’s likely, though, is that this level of manipulation will only increase.  You may think you’re voting based on a rational analysis of the issues, but what you see and hear will be adjusted for you personally by vast systems driven by models of your psyche.   It may sound like wild conspiracy theory, but people are making businesses out of it.

Update 2/27/17 – The Guardian reports that Robert Mercer loaned the services of Cambridge Analytica to the pro-Brexit organization headed by Nigel Farage.  The expense was not reported, which is illegal.  He is also an investor in Breitbart News and recommended Steve Bannon to Trump.  The Right now has another deep-pocketed backer besides Adelson and the Kochs, one with dangerous technology.



The Winningest SF Authors Are Women


Le Guin in the 1970s, click for bio

The New Yorker recently published a charming profile of Ursula K. Le Guin by Julie Phillips.  It described her upbringing in a house full of myth and story headed by her father, the great anthropologist Alfred Kroeber; her difficult relationship with Radcliffe, where she got a degree in French; and her brilliant spurt of work starting in 1966 at age 37 with “A Wizard of Earthsea”, extending over the next 8 years to the seminal works “The Left Hand of Darkness”, “The Lathe of Heaven”, “The Farthest Shore”, and “The Dispossessed”.  She’s now 87 and as sharp as ever, tangling with Amazon over monopoly and Google over the digitization of literature.

“A Wizard of Earthsea” made a particular impression on me when I read it at age 13.  It starts with the standard SF trope of the Big Zoom.  That’s where a young person from a humdrum background comes to realize over the course of the story just how big and wonderful the world actually is.   That’s the plot of Heinlein juveniles like “Have Space Suit – Will Travel”, whose title alone tells you what’s going to happen.  Usually those stories have the protagonist going from misunderstood loner to reaching their proper place in the world, which makes them highly satisfying to young fans.  In Wizard, the young hero Ged does in fact become Archmage of Earthsea, but also realizes how circumscribed his vast power must be.  Even at age 13, I realized that Le Guin was working at a whole different level than most SF authors.

In reading about her elsewhere, I discovered a remarkable thing – she has won more of the top awards in SF, the Hugo and Nebula for best novel, than any other author except Lois McMaster Bujold.  They’re both tied at 6.  Le Guin won the Hugo and Nebula for “The Left Hand of Darkness” in 1970, both again for “The Dispossessed” in 1974, and Nebulas for “Tehanu: The Last Book of Earthsea” (1990) and “Powers” (2008).  Bujold won the Hugo for “The Vor Game” (1991), “Barrayar” (1992), “Mirror Dance” (1995) and “Paladin of Souls” (2004), which also won the Nebula.  She also won a Nebula for “Falling Free” in 1988.

Connie Willis then comes in at 5 best novel awards with 3 Hugos and 2 Nebulas, but she has won more total awards, 18, than anyone else.   Le Guin is tied for second with Harlan Ellison at 11.  Joe Haldeman and Robert Heinlein also have 5 Best Novel awards, although Heinlein’s were mainly before the Nebulas existed.

By this most basic measure, then, Le Guin, Bujold, and Willis are the best living SF writers.  Over the last 20 years, 8 of the Hugo Best Novel winners have been women, and 11 of the Nebula winners.   For a field mainly known for rockets, rayguns, and boldly going where no man has gone before, it has become quite egalitarian.


Assange is Winning

A colleague pointed me to a good New York Times article last week, Why Samsung Abandoned Its Galaxy Note 7 Flagship Phone, on the epic disaster of its exploding phones:

Replacement phone for Abby Zuis of Farmington MN, click for story


After the initial reports that the lithium-ion batteries were catching fire, they replaced the battery only to find that the new design burned too.  They then said:

It did not help that the hundreds of Samsung testers trying to pinpoint the problem could not easily communicate with one another: Fearing lawsuits and subpoenas, Samsung told employees involved in the testing to keep communications about the tests offline — meaning no emails were allowed, according to the person briefed on the process.

This is just what Julian Assange was hoping for in his 2006 Wikileaks Manifesto:

The more secretive or unjust an organization is, the more leaks induce fear and paranoia in its leadership and planning coterie. This must result in minimization of efficient internal communications mechanisms (an increase in cognitive “secrecy tax”) and consequent system-wide cognitive decline resulting in decreased ability to hold onto power as the environment demands adaption.

Once people know that their words will be used against them, they’ll stop saying useful things.  He wanted to drive people to the Lomasney Rule – “Never write if you can speak; never speak if you can nod; never nod if you can wink.”  He tried to damage systems that he considered conspiracies, such as the US occupation of Iraq and now the Hillary Clinton campaign.

The managers at Samsung may be remembering what stolen emails did to climate researcher Michael Mann.   A single line in an email from his colleague Phil Jones was enough to get him called out on the floor of the Senate and nearly fired.   The line from Jones was:

I’ve just completed Mike’s Nature trick of adding in the real temps to each series for the last 20 years (ie from 1981 onwards) and from 1961 for Keith [Briffa]’s, to hide the decline.

It was interpreted to mean that climate scientists were hiding the actual decline in global temperatures and therefore engaged in a worldwide conspiracy.   The scientific community fought back hard, but the controversy helped derail the Copenhagen Summit on climate change in 2009.   It wasn’t until the Paris Summit in 2015 that world action on climate change really began.  In those six years CO2 rose 20 ppm, about 5%.   Thanks, anonymous hacker.

I see this in my own work with respect to patents.  We are told never to discuss patent claims, validity, or prior art in email, for fear that it’ll be subpoenaed in an infringement lawsuit.   This obviously makes it difficult to get them right.

Farhad Manjoo thinks that the answer is to avoid email.  It’s so distributed that it’s inherently insecure.  Go with a central repository with encryption, like Signal from Open Whisper.  But email’s ease and accessibility are what make fast, casual communication useful, and repositories can be hacked or subpoenaed like anything else.  You could, of course, abandon electronics altogether, as Vladimir Putin apparently does.  Yet few (but some!) would hold up his regime as a model of good management.

The real answer for companies like Samsung is to adopt the engineering strategies of those who really do care about safety, like the designers of chemical plants, cars, and airplanes.  There’s a whole discipline now called Functional Safety which has standards for the design process itself and the certification of each component in a design.  The automotive version is an international standard called ISO 26262, and is influencing a steadily wider range of products.   It stresses that safety checking must be planned for from the beginning, and kept separate from the usual chain of command to avoid trade-offs between schedule and safety.   More and more tools are coming out to support it, like the DOORS and Jama specification systems.   These have a formal review process for all specs, can track all changes, and let one put links from requirements to design features.  When requirements change, as they constantly do, one can then see which features need to be fixed and which tests updated.
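The requirement-to-design-to-test linking that tools like DOORS and Jama provide is basically a dependency graph.  A toy version (all requirement and test names here are invented) might look like this: change a requirement, walk the links, and out come the features and tests that need re-checking.

```python
# Toy requirements-traceability graph: when a requirement changes, walk its
# links to find every downstream design feature and test that needs re-checking.
links = {
    "REQ-1 battery shall not exceed 45C": ["FEAT-thermal-cutoff", "FEAT-charge-limit"],
    "FEAT-thermal-cutoff": ["TEST-overheat-shutdown"],
    "FEAT-charge-limit": ["TEST-fast-charge", "TEST-overheat-shutdown"],
}

def impacted(item, graph):
    """Everything downstream of a changed item (depth-first walk)."""
    seen = set()
    stack = [item]
    while stack:
        node = stack.pop()
        for child in graph.get(node, []):
            if child not in seen:
                seen.add(child)
                stack.append(child)
    return sorted(seen)

print(impacted("REQ-1 battery shall not exceed 45C", links))
```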

It’s a lot of work, and that’s why consumer product companies don’t do it.  Silicon Valley in general dislikes it, since it’s slow and rewards doggedness instead of ingenuity.  Yet now that their products are having major safety issues (looking at you, self-driving cars), it’s going to be critical.


Best Demos Ever

Just this week a chip that I’ve been working on for the last couple of years finally came alive.   My colleagues put it on a board, loaded up the software, and it ran!   This was what the scene looked like:

[From “Young Frankenstein”.  It was so sad to lose Gene Wilder this week.]

Well, something like that.  It did flash some LEDs and print out some status messages as we huddled around it in a cubicle.  That meant that tens of millions of transistors had to work correctly, as did tens of thousands of lines of code.  It’s amazing that this stuff works at all, much less in the first days after power-up, but I have to admit that it wasn’t that spectacular.

So that got me thinking: what were some demos that really did blow people’s socks off?  Here are a few, from oldest to newest:

First Display of the Telephone

Bell model of 1876


Alexander Graham Bell achieved the first transmission of speech in March 1876, but couldn’t interest anyone in it.  He offered it to Western Union, who thought it was a toy.  But his fiancée, Mabel Hubbard, knew what to do.  In June 1876 she secretly bought him a train ticket from Boston to the Centennial Exhibition in Philadelphia.  This was a massive world’s fair, showcasing the latest and greatest in technology.  The typewriter!  The mechanical calculator!  A slice of cable from the Brooklyn Bridge and the arm of the Statue of Liberty!

Mabel packed his bag and took him to the train station without telling him where he was going.  When she handed him the ticket, he protested, but she had already turned away.  Since she was completely deaf (they had met when he was teaching at a Boston school for the deaf), his complaints were of no avail.

He went to Philadelphia and stretched a wire from one end of the Education Hall to the other.  He again got no interest until the last day of judging.  The emperor of Brazil, Dom Pedro, was on the awards committee and recognized Bell from a tour of schools for the deaf.  He asked Bell what he was doing at this machinery exhibition.  Bell handed him a receiver and then went to the other end of the hall.  He recited Hamlet into the transmitter.  “My God, it speaks!” exclaimed the emperor.  That’s the reaction you want from demo visitors!

The telephone became the hit of the Exhibition, and Bell was on his way.  Thirteen years later he retired to his own supervillain lair, Beinn Bhreagh in Cape Breton, Nova Scotia to continue his astounding (but sadly not fiendish) experiments.

First Serious Light Bulb

Recreation of Edison's Lab in Menlo Park


On New Year’s Eve 1879, Thomas Edison invited the public and press to Menlo Park, New Jersey to see his newly developed light bulb.   He and his crew had gotten it working three months earlier, but had only shown it to his investors and select journalists.   This was the big reveal.  He had 25 bulbs strung around the lab buildings and the street outside.  A huge crowd came out in the cold and snow.  Inside the lab itself, he showed a bulb burning while dunked in a glass jar of water.  You can’t do that with a candle, or a kerosene lantern, or a gas mantle, or a carbon arc lamp, or any other form of lighting in the previous history of the world.

The story was, well, electrifying.  It was on front pages around the world.  It was such an iconic moment, such a dividing line in history, that it has provoked a prolonged backlash against Edison.  People say that he wasn’t actually first, or that he stole other people’s ideas, or that he succeeded by brute-force trial and error.  Yet his bulbs were obviously better than others: they lasted longer because of better vacuums, were cheaper to install because they ran at a high voltage and so needed less copper, and were safer because he also invented fuses, switches, and circuit breakers.   He didn’t invent just a glowing wire; he created the entire system that made incandescent lighting practical.

His pride in it ultimately led to his downfall.  He got so wrapped up in defending his patents, particularly against George Westinghouse, that he got kicked out of his own company.  J. P. Morgan had invested in him to create the Edison General Electric company, but was disgusted by his distraction during the patent wars.  He replaced Edison as CEO with one Charles Coffin, who immediately renamed it General Electric, as it’s known to this day.

That story is nicely told in a new roman à clef, “The Last Days of Night” by Graham Moore:

Click for website


It follows Westinghouse’s young lawyer, Paul Cravath, in his attempt to beat off the wily and sinister Edison in the late 1880s.  It’s a ripping yarn, but kind of hard on the guy who genuinely was the Wizard of Menlo Park.

The Mother of All Demos

In 1968 Douglas Engelbart presented a demo at an ACM meeting that prefigured most of computing for the next fifty years.  At that time computing was done with punch cards fed into big central machines.  Engelbart showed:

  • A graphical user interface
  • Controlled by a hand pointer that he called a mouse
  • That connected one computer to another
  • With links that you could click on
  • Displaying text that you could edit
  • With files that recorded changes for undoing mistakes

This video is long, but shows more innovation per minute than anything I’ve ever seen.   I’m typing on the consequence of this work right now.   The ultimate value of this is literally in the trillions of dollars.

Steve Jobs Sees Smalltalk on the Xerox Alto

Xerox Alto workstation, 1973

In 1979 Apple was already working on a big upgrade from the Apple II – the Lisa.  This would have a mouse, windows, and a graphics display.  Jobs knew that Xerox was way ahead of them in this style of interface.  They had built a machine like this, the Alto, years earlier.   He wanted to get Xerox’s tech, and get them to invest.   The execs at Xerox PARC knew that they could never build a machine as cheaply as the small, fast PC companies could, and hoped to collaborate.  They ordered the engineers to show the Apple crew everything.

Adele Goldberg, one of the creators of the Smalltalk language and environment, was furious about this.  Xerox was giving away the store before it even knew what it had.  She had to be directly ordered to give the demo.  She and a co-author, Daniel Ingalls, showed off the huge range of things Smalltalk could do: overlapping windows, education applications, an animation editor.  Michael Hiltzik describes what happened next in his book “Dealers of Lightning: Xerox PARC and the Dawn of the Computer Age” (1999):

At one point Jobs, watching some text scroll up the screen line by line in its normal fashion, remarked, “It would be nice if it moved smoothly, pixel by pixel, like paper.”

With Ingalls at the keyboard, that was like asking a New Orleans Jazz band to play “Limehouse Blues”. He clicked the mouse on a window displaying several lines of Smalltalk code, made a minor edit, and returned to the text.  Presto!  The scrolling was now continuous.

The Apple engineers’ eyes bulged in astonishment.  In any other system the programmer would have had to rewrite the code and recompile a huge block of the program, maybe even all of it.  The process might take hours.  Thanks to its object-oriented modularity, in Smalltalk such a modest change never required the recompiling of more than ten or twenty lines, which could be done in a second or two.

Jobs returned to Apple in a fury.   He had finally seen a system that would let his visions become instant reality.  The plans for Lisa were torn up, and the Macintosh was born.  Xerox got nothing out of it, except an eternal reputation as suckers.  Oddly, Jobs missed an even more important aspect of the Alto – that hundreds of them were wired together with a novel system called Ethernet.  Maybe that was because he cared mainly for the look of things, and not how they worked under the hood.

But it’s that under the hood stuff that my current project is demonstrating.  It won’t make emperors exclaim, or be in headlines around the world, or set the course of IT, or be the founding innovation of the world’s most valuable company, but we’re pretty proud of it.


Airborne LIDAR: Cool or Creepy?

The Guardian recently published a fascinating story, Revealed: Cambodia’s vast medieval cities hidden beneath the jungle, about the discovery of huge city complexes near Angkor Wat.  The temples of Angkor are built of stone, but these newly found structures were roadways and wooden buildings, and so were far more perishable.  The Khmer Empire collapsed around 1300 CE, so they’ve been lost under vegetation for some 700 years.  It’s impossible to see them while traipsing through the jungle, but a remarkable new technology, Airborne Laser Detection And Ranging (lidar), can see them from the air:


Archaeologically-significant topography around the central brick towers of Sambor Prei Kuk. Courtesy McElhanney. Click for site

The paper describing the work is here: “Airborne laser scanning as a method for exploring long-term socio-ecological dynamics in Cambodia”.  It says this is the biggest lidar survey ever for archaeology.

Airborne lidar was developed in the late 90s.  It works by mounting a laser and a highly sensitive infrared photodetector on a helicopter.  The laser shoots out 120,000 pulses per second, and the sensor times how long they take to echo off the ground.  The helicopter flies at about 800 m, so the pulses take about 5 microseconds to travel down and back, which is what limits the pulse rate.  The laser is scanned across the landscape by a mirror cycling at 200 Hz, and you end up getting at least 16 pulses per m².  The timing is accurate enough to distinguish delays down to about 15 cm of height resolution.  The laser’s position is tracked by GPS supplemented with inertial sensors.  The result is a 3D map of the landscape.
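The numbers above hang together on a back-of-envelope check: the 800 m round trip sets the ~5 µs pulse spacing (and hence the pulse-rate ceiling), and 15 cm of height resolution needs roughly nanosecond timing.

```python
# Back-of-envelope check on the lidar numbers in the article.
C = 3.0e8            # speed of light, m/s
altitude = 800.0     # helicopter height, m

round_trip_s = 2 * altitude / C
print(f"round trip: {round_trip_s * 1e6:.1f} us")          # ~5.3 us per pulse

max_pulse_rate = 1 / round_trip_s                          # one pulse in flight at a time
print(f"pulse-rate ceiling: {max_pulse_rate / 1e3:.0f} kHz")  # vs the 120 kHz used

height_res = 0.15                                          # 15 cm
timing_res_s = 2 * height_res / C                          # extra round-trip delay for 15 cm
print(f"timing needed: {timing_res_s * 1e9:.1f} ns")       # ~1 ns
```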

Most of the pulses will bounce off the tops of trees, which aren’t that interesting.   Some fraction, though, will get through the leaves and bounce off the ground.  This lets one see the actual ground level through all the shrouding jungle.  They did the scan in March 2015, when the leaf cover was at a minimum. Here’s a cross-section of one strip of their data:

Pre-Angkorian towers of Sambor Prei Kuk among the trees. Courtesy McElhanney


By filtering the pulses by height, one can find the ground elevation over square kilometers at a time.   This actually takes a lot of processing, since huge amounts of data get collected, so the systems come with GPUs.  It’s carefully compared to ground surveys, and visible light images taken at the same time as the scan.   Angkor Wat itself is the most important archaeological site in Southeast Asia, and so has been heavily surveyed, so it makes a good reference.
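A crude stand-in for that filtering (real pipelines are far more elaborate, and these coordinates are invented) is simply "keep the lowest return in each grid cell": canopy hits get discarded, and the surviving minima approximate the ground surface.

```python
# Crude ground filter: bin lidar returns into 1 m cells and keep the lowest
# elevation in each.  Canopy hits (high z) are discarded; ground hits survive.
returns = [
    (0.2, 0.3, 24.1), (0.4, 0.7, 1.2),   # (x, y, z): a canopy hit and a ground hit
    (1.5, 0.2, 22.8), (1.1, 0.9, 23.5),  # this cell got canopy hits only
    (0.6, 1.4, 0.9),  (0.8, 1.8, 19.7),
]

ground = {}
for x, y, z in returns:
    cell = (int(x), int(y))              # 1 m x 1 m grid cell
    if cell not in ground or z < ground[cell]:
        ground[cell] = z

print(ground)  # per-cell ground-elevation estimates
```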

The project is called the Cambodian Archaeological Lidar Initiative, and is a joint project between institutes in Cambodia and France.  It’s led by Prof. Damien Evans of the University of Sydney.   They did an earlier, smaller run in 2012. The survey itself was done by a Canadian company, McElhanney, which has done this kind of geographic information work all over the world since 1910, largely for forestry and mining interests.  The lidar is a Leica ALS70 (now replaced by the ALS80, which has a faster peak pulse rate).   It’s generally used for city surveys.

Well, finding lost cities is pretty cool.  How can this be creepy?  Hmmm, what else is hidden under forest canopies?  Drug operations.  Whether it’s marijuana farms in the Pacific Northwest or submarine construction in Colombia:

30-m sub seized by the Colombian army in 2011

It won’t be possible to hide things on the land any more.  Not only can lidar see through trees, it can detect a camouflaged tank from its shape.  Operations like the above will have to be done indoors or underground.

And airborne lidar is getting easier to do all the time.  A Lincoln Labs spinoff in Massachusetts, 3DEO, has developed a much more sophisticated sensor than this Leica one using DARPA money.   It can detect a single photon of infrared light using a circuit called an avalanche photo-diode, and measure the return time down to a nanosecond.  It can do this for 4K pixels at once, and so can scan at a much higher rate than the Leica.

The size and power of the systems are also constantly improving.  The Leica weighs 90 kg and takes 900W, and so needs to be on a real plane or helicopter.   Cut these by a factor of 20 and they can go on a drone.    The size reduction will be driven by use for automotive sensing, where lidar is a good complement to radar.

With cheap, fast scanning there are scads of other interesting things one could do.   You could scan all the farms in a state every week to learn how crops are growing.  The depth of the snow pack could be constantly measured over the entire Rockies, and the height of every lake.  You could scan all the streets of a city every morning to find out where obstructions were in order to tell self-driving cars to avoid them.   Count all the cars on a highway or in parking lots!  Count all the sheep on a moor, or elephants on the veldt, or Taliban in Afghanistan!

Everything everywhere can now be tracked.   Sound creepy?  Sure, but it can be cool too.
