Google and Harvard Made AI to Predict Earthquake Aftershocks
They used a database of more than 131,000 earthquakes.
Earthquake researchers have been trying to model aftershocks for years. They can now predict fairly accurately when aftershocks will happen and how large they'll be, but they've been far less reliable at pinpointing where they'll strike.
The answer, of course, appears to be artificial intelligence. Researchers from Google and Harvard report that after training a neural network, the same kind of AI that powers Facebook photo tagging and Alexa's voice transcription, on a database of more than 131,000 earthquakes and the locations of their subsequent aftershocks, they've come up with the best method yet for predicting where future aftershocks will occur.
At its core, artificial intelligence is fancy pattern matching: show it data, whether pictures of someone's face or the locations of earthquake aftershocks, and the algorithm will try to find the underlying pattern. For facial recognition, that pattern is the arrangement of pixels that represents a person's face; for aftershock prediction, it's the relationship that explains why aftershocks occur where they do.
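To make that idea concrete, here is a minimal sketch of this kind of pattern matching: a small neural network trained to map a few stress-related numbers at a location to a yes/no "aftershock occurred here" label. The features, network size, and synthetic data below are placeholders for illustration, not the setup used in the paper.

```python
# Minimal sketch (not the authors' model): a small neural network that maps
# stress-change features at a grid cell to a yes/no "aftershock here" label.
# Feature values, labels, and network size here are all illustrative.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Pretend each row describes one grid cell near a mainshock with a few
# stress-change numbers; the label marks whether an aftershock occurred there.
X = rng.normal(size=(1000, 6))                      # placeholder features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)       # placeholder labels

model = MLPClassifier(hidden_layer_sizes=(50, 50), max_iter=1000, random_state=0)
model.fit(X, y)

# Probability that a new cell hosts an aftershock, given its stress features.
print(model.predict_proba(rng.normal(size=(1, 6))))
```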
Researchers wrote in the paper, published in Nature on Aug. 29, that one reason for the algorithm's accuracy is its use of two physical quantities that had not previously been thought to be correlated with aftershocks: the maximum shear stress change and the von Mises yield criterion. These metrics are commonly applied in the study of ductile metals like copper and aluminum, not in aftershock prediction, though that might now change.
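For readers curious what those two quantities look like, here is a rough sketch of how they are commonly computed from a 3x3 stress (or stress-change) tensor. The example tensor is invented for illustration and is not taken from the paper.

```python
# Sketch: maximum shear stress and von Mises stress from a symmetric 3x3
# stress tensor. The numbers below are arbitrary illustrative values.
import numpy as np

sigma = np.array([[ 2.0, 0.5, 0.0],
                  [ 0.5, 1.0, 0.3],
                  [ 0.0, 0.3, -1.0]])          # symmetric stress tensor

principal = np.linalg.eigvalsh(sigma)          # principal stresses, ascending

# Maximum shear stress: half the spread between the largest and smallest
# principal stresses.
max_shear = 0.5 * (principal[-1] - principal[0])

# Von Mises (equivalent) stress, built from differences of principal stresses.
von_mises = np.sqrt(0.5 * ((principal[0] - principal[1])**2 +
                           (principal[1] - principal[2])**2 +
                           (principal[2] - principal[0])**2))

print(max_shear, von_mises)
```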
But don't expect this to start helping people tomorrow. As Harvard researcher Phoebe DeVries, a coauthor of the paper, candidly told the BBC, "We're quite far away from having this be useful in any operational sense at all. We view this as a very motivating first step."