
SCEC Workshop: Collaboratory for the Study of Earthquake Predictability (CSEP) Annual Workshop

Conveners: Max Werner, Toño Bayona, and Bill Savran
Date: September 10, 2022, 09:00 - 17:00
Location: Palm Springs Hilton
SCEC Award and Report: 22159

SUMMARY: For the first time since the pandemic, CSEP hosted an in-person workshop at the SCEC Annual Meeting. The 2022 CSEP workshop focused on four themes, each addressed in its own session: reviewing recent and ongoing CSEP activities around the globe; earthquake forecasting with machine learning; Operational Earthquake Forecasting (OEF) around the globe; and developing plans for the coming year. The first session featured a primer on CSEP with an overview of current capabilities, such as the community-based, open-source Python toolkit pyCSEP and the new design of floating experiments and open testing centers, as well as talks that highlighted recent CSEP evaluations around the globe. The discussion gathered feedback on current CSEP activities and developed priorities for the future. The second session focused on machine-learning techniques for earthquake forecasting and on benchmark exercises to compare these methods against traditional models. The third session comprised updates on OEF from the US and New Zealand, while the fourth session involved break-out group work that developed CSEP plans around the workshop’s themes.

Session 1: Overview of Recent and Ongoing CSEP activities

Bill Savran: Primer on CSEP and Current Capabilities (video)

Bill provided a primer on CSEP: who, what, how, and why. He described the history of CSEP, the testing centers, and the achievements of the first ten-year phase. Drawing on lessons from that phase, he then described the benefits of the current, modernized version of CSEP, including community-based software development and the open earthquake experiment format. The developer community actively contributes from more than ten institutions, and so-called reproducibility packages provide all code, data, forecasts, and anything else needed to fully reproduce the results of an experiment. He closed with highlights of publications that used pyCSEP.
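As a concrete illustration of the kind of consistency evaluation pyCSEP automates, the sketch below implements a Poisson number test (N-test) for a gridded rate forecast using only NumPy and SciPy; the function and variable names are ours, and pyCSEP's actual API should be taken from its documentation.

```python
# Minimal sketch of a CSEP-style Poisson number test (N-test) for a gridded
# rate forecast; names and structure are illustrative, not pyCSEP's API.
import numpy as np
from scipy.stats import poisson

def poisson_n_test(forecast_rates, n_observed):
    """Consistency of the observed event count with a gridded rate forecast.

    forecast_rates : array of expected event counts per space-magnitude bin
    n_observed     : number of observed target earthquakes in the test region
    Returns the two one-sided quantile scores (delta_1, delta_2); a very small
    value of either indicates the forecast under- or over-predicts the count.
    """
    n_expected = forecast_rates.sum()                          # total forecast rate
    delta_1 = 1.0 - poisson.cdf(n_observed - 1, n_expected)    # P(N >= n_observed)
    delta_2 = poisson.cdf(n_observed, n_expected)              # P(N <= n_observed)
    return delta_1, delta_2

# Hypothetical example: a forecast expecting ~12 events, with 7 events observed.
rates = np.full((20, 30), 0.02)
print(poisson_n_test(rates, n_observed=7))
```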

Questions concerned the role of the controlled environment and whether hypotheses could be fully specified in a floating experiment.

Toño Bayona: Are regional earthquake forecasts more informative than global models? Insights from California, Italy and New Zealand (video)

Regional models can use local or regional datasets to develop regionally calibrated forecasts, whereas global models offer greater testability against a larger number of large earthquakes, motivating a quantitative comparison of global and regional models within CSEP testing regions. Toño described a prospective test of multiple regional and global time-independent forecasts in California, Italy, and New Zealand. GEAR1 is competitive in all regions but is outperformed by adaptive smoothing models in California and Italy, while it outperforms all models in New Zealand, including the New Zealand National Seismic Hazard Model (NZSHM).
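Rankings like these are usually summarized with the information gain per earthquake of one gridded rate forecast over another, the statistic behind CSEP's paired comparison tests. A standard form, written in our own notation rather than quoted from the talk, is:

```latex
% Information gain per earthquake of model A over model B (paired-test statistic),
% assuming gridded rate forecasts; notation is illustrative.
I_N(A,B) \;=\; \frac{1}{N} \sum_{i=1}^{N}
          \left[ \log \lambda_A(k_i) - \log \lambda_B(k_i) \right]
          \;-\; \frac{\hat{N}_A - \hat{N}_B}{N}
```

where λ_A(k_i) and λ_B(k_i) are the rates forecast by models A and B in the space-magnitude bin containing the i-th of the N observed earthquakes, and N̂_A and N̂_B are the total numbers of events each model forecasts.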

Questions concerned the small sample size of target earthquakes and the potential of Bayesian methods; the performance of the updated NZSHM; and the potential for nested models.

Pablo Iturrieta: CSEP Floating experiments in Italy and New Zealand (video)

Pablo presented more details on the concept and design of floating experiments. He then presented results from the Italian forecasting experiments, using a range of different scores and categorizing forecasts according to their implicit hypotheses, visualized as a spiral. Next, he described the new Italian time-dependent experiments, as well as tests of the 2013 European Seismic Hazard Model (ESHM13), which comprises three source model components: area sources, fault and background sources, and smoothed seismicity. His results suggest that the 2020 model, tested pseudo-prospectively, performed better than the 2013 model, implying improved forecast skill for the updated version. He also presented preliminary results of tests of the New Zealand hazard source model.

Questions were raised about how to reconcile the apparently different results from California (where smoothing short-term, high-quality catalogs led to better performance) and Italy (where smoothing long-term catalogs performed better).

Session 2: Earthquake Forecasting with Machine Learning

Sam Stockman: Neural point processes and the Amatrice sequence (video)

Sam began with an introduction to neural point processes as a flexible alternative to traditional seismicity point-process models such as ETAS. He presented synthetic tests showing that the neural network can learn the ETAS intensity function and fit unseen ETAS simulations. One advantage is that neural point processes calibrate much faster than ETAS. He then presented experiments on the Central Apennines sequence, in which the neural network outperformed ETAS. He attributed much of the improved performance to the neural network's ability to model incomplete (non-Gutenberg-Richter) data. He ended with three discussion points: (i) machine-learning (ML) models fit within the same framework as current models; (ii) accessible benchmark models are needed; and (iii) forecasting goals need to be communicated.
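Both ETAS and the neural model are fit by maximizing the same temporal point-process log-likelihood; only the parameterization of the conditional intensity differs. In standard notation (ours, not necessarily the speaker's):

```latex
% Log-likelihood shared by ETAS and neural temporal point processes over [0, T],
% and the standard temporal ETAS conditional intensity:
\log L \;=\; \sum_{i} \log \lambda\!\left(t_i \mid \mathcal{H}_{t_i}\right)
       \;-\; \int_{0}^{T} \lambda\!\left(t \mid \mathcal{H}_{t}\right) dt ,
\qquad
\lambda_{\mathrm{ETAS}}\!\left(t \mid \mathcal{H}_{t}\right)
  \;=\; \mu \;+\; \sum_{t_j < t} \frac{K \, e^{\alpha (M_j - M_c)}}{\left(t - t_j + c\right)^{p}}
```

where H_t is the event history up to time t; a neural point process replaces the fixed ETAS form with a learned function of that history.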

Questions concerned particular characteristics of benchmark datasets; the role of the spatial extent of the region; and the ability to simulate from the ML model.

Kelian Dascher-Cousineau: Neural network-based Temporal Earthquake Forecasts (video)

Kelian presented a similar suite of neural models for forecasting. One conclusion is that the proposed model, RECAST, outperforms ETAS once the catalog history becomes very large. He demonstrated this performance on multiple catalogs (San Jacinto, SCEDC, and QTM), indicating some robustness. He also presented results for 14-day forecast intervals, which rely on sampling from a Weibull mixture distribution. RECAST performs better than ETAS more often than not in these tests, and it still generates forecasts that capture the behavior of the Ridgecrest sequence.
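As an illustration of how such interval forecasts can be produced, the sketch below simulates 14-day event counts by drawing inter-event times from a Weibull mixture until the window is exhausted; the mixture weights, shapes, and scales are hypothetical placeholders rather than parameters learned by RECAST.

```python
# Minimal sketch: counting events in a 14-day window by sampling inter-event
# times from a Weibull mixture. The mixture parameters below are hypothetical
# placeholders, not outputs of RECAST.
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical 3-component mixture over inter-event times (days).
weights = np.array([0.6, 0.3, 0.1])
shapes  = np.array([0.8, 1.2, 2.0])   # Weibull shape parameters k
scales  = np.array([0.5, 2.0, 10.0])  # Weibull scale parameters (days)

def sample_interevent_time():
    """Draw one waiting time from the Weibull mixture."""
    j = rng.choice(len(weights), p=weights)      # pick a mixture component
    return scales[j] * rng.weibull(shapes[j])    # scale * standard Weibull draw

def simulate_count(horizon_days=14.0):
    """Number of events whose cumulative waiting times fall inside the window."""
    t, n = 0.0, 0
    while True:
        t += sample_interevent_time()
        if t > horizon_days:
            return n
        n += 1

# Monte Carlo forecast distribution of the 14-day event count.
counts = np.array([simulate_count() for _ in range(10000)])
print("mean =", counts.mean(), " 95% interval =", np.percentile(counts, [2.5, 97.5]))
```

Repeating the simulation many times yields a full forecast distribution of counts, which is what the testing methods discussed above evaluate against the observed number.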

Questions concerned using RECAST as a benchmark; creating Kaggle competitions; the potential for prospective testing; and testing on catalogs from past periods that now contain additional earthquakes because ML methods detect more events. It was proposed to introduce a seismicity dataset into the standard ML benchmark datasets on which new models are evaluated.

Session 3: Operational Earthquake Forecasting around the Globe

Nicholas van der Elst: Prospective testing of the USGS public aftershock forecasts following the M6.4 2020 SW Puerto Rico Earthquake 

Nicholas described the USGS operational forecasts issued during the Puerto Rico earthquake sequence, starting from Reasenberg-Jones (RJ) forecasts that were updated and later superseded by ETAS forecasts. The tweaks to ETAS included a Bayesian prior derived from past global sequences; distinct productivities for primary and secondary aftershocks; an Omori c-value that scales with magnitude; and the exclusion of supercritical parameters. He described some challenges, notably magnitude irregularities. The RJ forecast numbers were too low, whereas the ETAS forecasts captured the observed numbers. Retrospective ETAS tests showed interesting deviations early on. Another testing challenge involved the evolving catalog, which was used to generate the forecasts. ETAS came closer to the target rejection rate of 2.5% than RJ, but a full forecast distribution is needed to perform better tests. Nicholas also considered the power of these tests when event counts are small.
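For reference, the RJ model expresses the rate of aftershocks with magnitude at least M at time t after a mainshock of magnitude M_m as a magnitude-dependent Omori decay (standard textbook form, not quoted from the talk):

```latex
% Reasenberg-Jones aftershock rate above magnitude M at time t after a
% mainshock of magnitude M_m (standard form):
\lambda(t, M) \;=\; \frac{10^{\, a + b (M_m - M)}}{(t + c)^{\, p}}
```

with parameters a, b, c, and p estimated from the ongoing sequence or taken from regional or global priors.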

Gabe Paris: An Interactive Viewer to Improve Operational Aftershock Forecasts (video)

Gabe first described the USGS's current Reasenberg-Jones-based Operational Aftershock Forecast (OAF) method. They then demonstrated the capabilities of an interactive OAF viewer, which enables visualization of forecasts as well as comparison of forecast (number) distributions with observed numbers. Their work was recently published in SRL.

Kenny Graham: Current state of New Zealand's OEF tool (video)

Kenny began by describing the OEF system, a hybrid forecast tool (HFT) that comprises three models: a short-term STEP and ETAS model; a medium-term EEPAS model; and a long-term smoothed-seismicity and faults/strain model (decades to centuries). Until recently, the OEF tool required human input; now a Docker container runs all individual models in a single computing environment. He then described the GUI of the HFT. Forecast horizons are 30, 90, and 365 days, and users are mostly interested in the medium- and long-term time scales. Kenny next presented some example forecasts, including ground shaking. In a “What’s Next” section, he outlined next steps, including engagement with stakeholders, updating and parameterizing the models, and revitalizing the testing center.

Questions involved the testing of ground-motion models; whether the model source code is open source; and the stakeholders of the OEF system.

Session 4: Developing Future Plans

The break-out groups were organized around three themes:

  1. Next steps for pyCSEP: Dissemination, Training, Impact
    • A need was identified for a clear development road map
    • pyCSEP training workshops should be offered to OEF developers to understand capabilities and opportunities
    • New features should be driven by the user community
  2. Next steps for testing USGS and other OEF products
    • Operational doesn’t need to mean automated, although automation helps users understand the products and can avoid issues during an active sequence
    • FastETAS might be a candidate ready for CSEP testing
    • Catalog issues and incompleteness were identified as issues that might affect testing
    • Need for a common testing platform or environment in which OEF models from different regions can be compared
    • A useful feature for pyCSEP would be a reference/benchmark ETAS model
  3. Developing benchmark problems, datasets, models for machine-learning models
    • Participants identified a need for a hierarchy of benchmarks that would educate ML practitioners about the particularities of earthquake data
    • A Kaggle competition was proposed
    • Benchmarks should be very simple to lower the barrier for entry
    • A tutorial notebook should be published that shows what benchmark models in seismology and ML look like and how to compare them
    • Participants agreed to further meetings to follow up on developing ML benchmarking models

Final thoughts

The workshop brought together members of the SCEC, USGS, machine-learning, and global CSEP communities to discuss a number of important topics in earthquake forecasting and model evaluation, including (1) the many recent and ongoing global CSEP activities, including the pyCSEP toolkit, new open-science practices, and new testing methods, (2) the potential of machine-learning methods for earthquake forecasting and of benchmark problems to facilitate cross-comparisons with traditional methods, (3) the current status of OEF around the globe, and (4) priorities and plans for the next year. The CSEP workshop at SCEC forms the focal point of the global CSEP collaboration and is a crucial venue for exchanging ideas and making progress towards the CSEP2 goals.

Presentation videos may be viewed by clicking the links below. PLEASE NOTE: Files are the author’s property. They may contain unpublished or preliminary information and should only be used while viewing the talk. Only the presentations for which SCEC has received permission to post publicly are included below.

 

SATURDAY, SEPTEMBER 10, 2022

09:00 - 09:15 Workshop Check-In  
09:15 - 09:30 Welcome and Overview of Workshop Objectives, Introductions (video) Max Werner
  Session 1: Overview of Recent and Ongoing CSEP activities
Moderator: Bill Savran
09:30 - 10:00 Primer on CSEP and Current Capabilities (video) Bill Savran and Max Werner
10:00 - 10:20 Are Regionally Calibrated Seismicity Models more Informative than Global Models? Insights from California, New Zealand, and Italy (video) Toño Bayona (remote)
10:20 - 10:40 Earthquake Forecast Testing Experiments in Italy: Results, Lessons and Prospects (video) Pablo Iturrieta
10:40 - 11:00 Discussion (All)
  • Community feedback on CSEP activities and plans
  • Priorities for the “USGS bridge” period and a possible SCEC6
11:00 - 11:30 Break  
  Session 2: Earthquake Forecasting with Machine Learning
Moderator: Morgan Page
11:30 - 11:50 Neural Point Processes and the 2016 Central Apennines Sequence (video) Sam Stockman
11:50 - 12:10 Neural network-based Temporal Earthquake Forecasts (video) Kelian Dascher-Cousineau
12:10 - 13:00 Discussion (All)
  • Community benchmark problems, datasets and forecasts
  • A new ML-oriented community competition with CSEP benchmarks?
13:00 - 14:00 Lunch  
  Session 3: Operational Earthquake Forecasting around the Globe
Moderator: Philip Maechling
14:00 - 14:20 Prospective testing of the USGS public aftershock forecasts following the M6.4 2020 SW Puerto Rico Earthquake Nicholas van der Elst
14:20 - 14:40 An Interactive Viewer to Improve Operational Aftershock Forecasts (video) Gabrielle Paris
14:40 - 15:00 Current State of New Zealand’s Operational Earthquake Forecasting (video) Kenny Graham
15:00 - 15:30 Discussion (All)
  • How can CSEP best support OEF efforts in the US and globally?
  • Identifying OEF models ready for model evaluation
15:30 - 16:00 Break  
16:00 - 17:00 Session 4: Developing Future Plans
Moderator: Max Werner
  • Next steps for pyCSEP: development and training needs
  • Community tools for ML-based forecast development and testing: benchmark datasets and problems; new experiments
  • Testing OEF models (e.g. UCERF)
  • Priorities and next steps for the SCEC bridge period and a possible SCEC6
 

It is SCEC policy to foster harassment-free environments wherever our science is conducted. By accepting an invitation to participate in a SCEC-supported event, by email or online registration, participants agree to abide by the SCEC Activities Code of Conduct.

PARTICIPANTS
