
Content provided by USGS, Menlo Park (Scott Haefner) and U.S. Geological Survey.

Applying AI foundation models to continuous seismic waveforms

Duration: 1:00:00
 

Chris Johnson, Los Alamos National Lab

Significant progress has been made in probing the state of an earthquake fault by applying machine learning to continuous seismic waveforms. The breakthroughs were first obtained in laboratory shear experiments and numerical simulations of fault shear, then successfully extended to slow-slipping faults. Applying these machine learning models typically requires task-specific labeled data for training, along with tuning to a particular experiment or region of interest, which limits their generalization and robustness when broadly applied. Foundation models diverge from this labeled-data training procedure and are widely used in natural language processing and computer vision. The primary difference is that these models learn a generalized representation of the data, allowing several downstream tasks to be performed in a unified framework. Here we apply Wav2Vec 2.0, a self-supervised framework for automatic speech recognition, to continuous seismic signals emanating from a sequence of moderate-magnitude earthquakes during the 2018 caldera collapse at Kilauea volcano on the island of Hawai'i. We pre-train the Wav2Vec 2.0 model on caldera seismic waveforms and augment the model architecture to predict contemporaneous surface displacement during the caldera collapse sequence, a proxy for fault displacement. We find the model's displacement predictions to be excellent. We also adapt the model for near-future prediction and find hints of predictive capability, but the results are not robust. These results demonstrate that earthquake faults emit seismic signatures in a manner similar to laboratory and numerically simulated faults, and that artificial intelligence models developed for encoding speech audio may have important applications in studying active fault zones.
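The downstream adaptation described in the abstract — a self-supervised Wav2Vec 2.0 encoder augmented with a head that regresses surface displacement from the latent waveform representation — can be sketched roughly as follows. This is a minimal illustration using the Hugging Face transformers implementation of Wav2Vec 2.0, not the authors' code: the head architecture, mean-over-time pooling, model sizes, and the SeismicDisplacementModel name are assumptions for demonstration, and the encoder would first be pre-trained self-supervised on the caldera waveforms (e.g., via Wav2Vec2ForPreTraining) before this head is attached.

# Sketch: Wav2Vec 2.0 encoder + regression head for surface displacement.
# Hypothetical setup; names and head design are illustrative assumptions.
import torch
import torch.nn as nn
from transformers import Wav2Vec2Config, Wav2Vec2Model

class SeismicDisplacementModel(nn.Module):
    """Wav2Vec 2.0 encoder with a lightweight displacement-regression head."""

    def __init__(self, config: Wav2Vec2Config):
        super().__init__()
        # Encoder weights would be loaded from self-supervised pre-training
        # on continuous caldera seismic waveforms.
        self.encoder = Wav2Vec2Model(config)
        # Pool the latent sequence and map it to a single displacement value.
        self.head = nn.Sequential(
            nn.Linear(config.hidden_size, 128),
            nn.GELU(),
            nn.Linear(128, 1),
        )

    def forward(self, waveform: torch.Tensor) -> torch.Tensor:
        # waveform: (batch, samples) of normalized single-channel seismic data
        latents = self.encoder(waveform).last_hidden_state  # (batch, frames, hidden)
        pooled = latents.mean(dim=1)                        # average over time frames
        return self.head(pooled).squeeze(-1)                # (batch,) displacement

# Shape check with a small untrained config; real use would restore a
# checkpoint pre-trained on the Kilauea data.
config = Wav2Vec2Config(hidden_size=256, num_hidden_layers=4,
                        num_attention_heads=4, intermediate_size=512)
model = SeismicDisplacementModel(config)
windows = torch.randn(2, 16000)  # two waveform windows at an assumed sample rate
print(model(windows).shape)      # torch.Size([2])

Mean pooling over the latent frames is one simple choice for collapsing a variable-length waveform window into a single contemporaneous displacement estimate; an attention-based or last-frame pooling would be equally plausible for the near-future prediction variant the abstract mentions.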

