Detecting earthquakes is an important step toward warning surrounding communities that damaging seismic waves may be headed their way. A new model detects tremors and provides clues to their epicenters.

What’s new: S. Mostafa Mousavi and colleagues at Stanford and the Georgia Institute of Technology built EQTransformer to both spot quakes and measure characteristics that help seismologists determine where they originated.

Key insight: Language models based on transformer networks use self-attention to track the most important associations among tokens, such as words, in a sentence. The authors applied self-attention globally across seismic waveforms to track the most important associations among their features. Since clues to a quake’s epicenter appear in particular portions of the waveform, they also applied self-attention locally to find patterns over shorter periods of time.
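
The paper’s implementation isn’t reproduced here, but a minimal PyTorch sketch can illustrate the global-versus-local distinction. The window size, feature dimensions, and single attention head below are illustrative assumptions, not the authors’ settings:

```python
import torch
import torch.nn as nn

class LocalSelfAttention(nn.Module):
    """Self-attention applied independently within short, fixed-size windows."""
    def __init__(self, dim, window):
        super().__init__()
        self.window = window
        self.attn = nn.MultiheadAttention(embed_dim=dim, num_heads=1, batch_first=True)

    def forward(self, x):                       # x: (batch, time, dim)
        b, t, d = x.shape
        w = self.window                         # assumes t is divisible by w
        x = x.reshape(b * (t // w), w, d)       # split the sequence into windows
        out, _ = self.attn(x, x, x)             # attend only within each window
        return out.reshape(b, t, d)

features = torch.randn(8, 512, 16)              # (batch, time steps, feature dim)

# Global self-attention: every time step can attend to every other step.
global_attn = nn.MultiheadAttention(embed_dim=16, num_heads=1, batch_first=True)
global_out, _ = global_attn(features, features, features)

# Local self-attention: each step attends only within its 32-step window.
local_out = LocalSelfAttention(dim=16, window=32)(features)
```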

How it works: The authors passed seismic waves through an encoder that fed three decoders designed to detect earthquakes and spot two types of location signals. The authors trained and tested the system using the Stanford Earthquake Dataset (STEAD), which contains over one million earthquake and non-earthquake seismograms. They augmented the data by adding noise, adding earthquake signals to non-quake waveforms, and shifting quake start times.
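
The paper’s augmentation pipeline isn’t detailed here, but a rough sketch of those three operations might look like the following; the noise scale and shift range are arbitrary assumptions:

```python
import numpy as np

def augment(quake, noise_wave, rng=None):
    """Illustrative versions of the three augmentations; parameters are made up."""
    if rng is None:
        rng = np.random.default_rng()
    # 1. Add random noise to an earthquake waveform.
    noisy = quake + rng.normal(0.0, 0.1 * quake.std(), size=quake.shape)
    # 2. Superimpose an earthquake signal onto a non-quake waveform.
    mixed = noise_wave + quake
    # 3. Shift the quake's start time by rolling the trace along the time axis.
    shifted = np.roll(quake, shift=rng.integers(-500, 500))
    return noisy, mixed, shifted

quake = np.random.randn(6000)       # stand-in for a 60-second trace at 100 Hz
noise = np.random.randn(6000)       # stand-in for a non-quake recording
noisy, mixed, shifted = augment(quake, noise)
```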

  • Self-attention’s computational cost grows rapidly (quadratically) with the input’s length, so the encoder, which comprised convolutional and LSTM layers, compressed the input into a high-level representation. A pair of transformer layers was included to focus on earthquake signals.
  • In the detection decoder, convolutional layers determined whether an earthquake was occurring.
  • The other two decoders tracked the arrival of p-waves (primary waves that push and pull the ground) and s-waves (secondary waves that move the ground up and down or side to side); the difference between these arrival times indicates distance from a quake’s epicenter. These decoders used LSTM and local self-attention layers to examine small windows of time, and their output fed convolutional layers that picked the arrivals, as sketched below.
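
Putting the pieces together, here is a hedged sketch of the encoder-plus-three-decoders layout in PyTorch. Layer counts, channel sizes, and the stripped-down decoders are assumptions for illustration; the paper’s actual architecture is more elaborate:

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Convolution + LSTM compress the waveform; transformer layers attend over it."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(3, 16, kernel_size=7, stride=2, padding=3), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=7, stride=2, padding=3), nn.ReLU(),
        )
        self.lstm = nn.LSTM(32, 32, batch_first=True)
        self.attn = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=32, nhead=4, batch_first=True),
            num_layers=2,                   # "a pair of transformer layers"
        )

    def forward(self, x):                   # x: (batch, 3 channels, time)
        h = self.conv(x).transpose(1, 2)    # -> (batch, time/4, 32)
        h, _ = self.lstm(h)
        return self.attn(h)

class Decoder(nn.Module):
    """One per output: detection, p-wave arrival, s-wave arrival."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv1d(32, 1, kernel_size=7, padding=3)

    def forward(self, h):                   # h: (batch, time/4, 32)
        return torch.sigmoid(self.conv(h.transpose(1, 2)))  # per-step probability

encoder = Encoder()
detect, p_pick, s_pick = Decoder(), Decoder(), Decoder()

wave = torch.randn(4, 3, 6000)              # 3-component seismogram, e.g. 60 s at 100 Hz
h = encoder(wave)                           # shared high-level representation
outputs = detect(h), p_pick(h), s_pick(h)
```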

Results: EQTransformer outperformed state-of-the-art models in both detecting earthquakes and tracking p- and s-waves. In detection, EQTransformer achieved an F1 score of 1.0, a 2 percent improvement over the previous state of the art. In tracking p-waves, it improved mean absolute error over the earlier state of the art in that task from 0.07 to 0.01. With s-waves, it improved mean absolute error from 0.09 to 0.01. The training dataset didn’t include seismograms from Japan, so the authors tested their model’s ability to generalize on aftershocks from a Japanese quake that occurred in 2000. In this test, EQTransformer’s p-wave arrival picks differed from human analysts’ by an average of 0.06 seconds, and its s-wave picks by an average of 0.05 seconds.
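
For reference, a minimal sketch of how pick-time mean absolute error is computed; the arrival times below are made-up example values, not the paper’s data:

```python
import numpy as np

# Hypothetical predicted vs. human-labeled p-wave arrival times (seconds).
predicted = np.array([12.31, 45.07, 8.92])
labeled = np.array([12.30, 45.01, 8.95])

mae = np.mean(np.abs(predicted - labeled))   # mean absolute error in seconds
print(f"MAE: {mae:.3f} s")
```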

Why it matters: Applied at both global and local scales, self-attention could be useful in tasks as diverse as forecasting weather, product demand, and power consumption.

We’re thinking: We applaud this earth-shattering research!
