Insights into Earthquake Processes and Hazards Using Statistical Methods

To better prepare for earthquakes, we need to know how large they will be, how strong the shaking will be, and how often they will occur. To answer these questions, seismologists look to past earthquakes to better understand future hazards. Earthquakes, however, are complex physical phenomena that occur on timescales much longer than our instrumental records, so past observations may provide only a partial picture of earthquake behavior. It is therefore critical to know the limitations of past observations, to assess the uncertainty inherent in our earthquake models, and to ensure that those models reflect our understanding of the processes that generate earthquakes. In this thesis, I present novel implementations of statistical methods to help answer these fundamental earthquake questions.

Using probabilistic simulations, I show that the largest earthquakes in the eastern North American record may not be the largest possible earthquakes that can occur. In this region, the records are simply too short relative to the frequency of large earthquakes to exclude the possibility of larger events. However, using a similar probabilistic approach, I demonstrate that observed global variations in earthquake magnitude by fault geometry likely reflect real differences and are not an artifact of short catalog lengths. I show that continental normal-fault earthquakes have smaller maximum magnitudes than earthquakes on other fault geometries, and I propose that these smaller maximum magnitudes reflect the weakness of continental lithosphere in extension.

In assessing potential earthquake hazards, we need to know not only how big earthquakes might be but also how strong the resulting shaking will be. Earthquake stress drop, the change in stress along a fault due to an earthquake, is a commonly estimated parameter thought to control the amplitude of the high-frequency shaking that damages buildings and other structures. I show that two of the most commonly used estimation methods can produce drastically different stress drops for the same earthquakes, indicating significant, unaccounted-for uncertainty in these estimates. As a result, stress-drop trends that appear using one method are unobservable using the other.

Lastly, seismic hazard analysis requires knowing how often large earthquakes will occur. Current earthquake recurrence models make many simplifying assumptions that ignore the complexities of the processes that drive earthquakes. Here I present analytical equations for the Long-Term Fault Memory (LTFM) model, based on Salditch et al.'s (2020) numerical formulation, which produces earthquake probability estimates that depend on the specific sequence of past earthquakes. I derive the equations for two different versions of this model and apply them to real earthquake records. My analysis shows that the specific earthquake sequence can significantly raise the estimated likelihood of an earthquake and may provide more accurate assessments of earthquake hazard.

Together, these analyses show how statistical models can be applied in a variety of ways to provide insight into fundamental earthquake questions.
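
To make the catalog-length argument concrete, the sketch below draws synthetic catalogs from a truncated Gutenberg-Richter distribution and asks how often a catalog of a given length would contain no event larger than the observed maximum even if larger earthquakes were possible. All parameter values (b-value, event rate, catalog length, magnitudes) are illustrative placeholders, not the values used in the thesis.

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative parameters (placeholders, not the thesis's values):
b = 1.0            # Gutenberg-Richter b-value
m_min = 5.0        # minimum magnitude considered
rate = 0.2         # events per year with M >= m_min
years = 300        # length of the historical catalog
m_max_true = 7.8   # hypothesized true maximum magnitude
observed_max = 7.0 # largest event actually observed
n_cat = 10_000     # number of synthetic catalogs

beta = b * np.log(10)

def sample_trunc_gr(n, m_min, m_max, beta, rng):
    """Draw n magnitudes from a truncated Gutenberg-Richter
    (exponential) distribution via inverse-transform sampling."""
    u = rng.random(n)
    c = 1.0 - np.exp(-beta * (m_max - m_min))
    return m_min - np.log(1.0 - u * c) / beta

count_below = 0
for _ in range(n_cat):
    n_events = rng.poisson(rate * years)   # Poisson number of events
    if n_events == 0:
        count_below += 1                   # empty catalog: vacuously below
        continue
    mags = sample_trunc_gr(n_events, m_min, m_max_true, beta, rng)
    if mags.max() <= observed_max:
        count_below += 1

print(f"P(catalog max <= {observed_max} | true Mmax = {m_max_true}) "
      f"= {count_below / n_cat:.2f}")
```

If the printed probability is large, a catalog of this length cannot exclude the larger hypothesized maximum magnitude, which is the essence of the eastern North America result described above.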
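
The abstract does not name the two stress-drop estimation methods it compares, but the sensitivity it describes can be illustrated with the standard Brune (1970) source model, in which stress drop scales with the cube of the corner frequency. A modest disagreement between methods in the measured corner frequency therefore produces a drastically different stress drop. The magnitude and corner frequencies below are hypothetical.

```python
def brune_stress_drop(m0, fc, beta=3500.0, k=0.372):
    """Stress drop (Pa) from seismic moment m0 (N·m) and corner
    frequency fc (Hz) using the Brune (1970) source model:
        r = k * beta / fc,   delta_sigma = 7 * m0 / (16 * r**3)
    where beta is shear-wave speed (m/s)."""
    r = k * beta / fc                     # source radius, m
    return 7.0 * m0 / (16.0 * r**3)

# Hypothetical Mw 5 earthquake; moment from Hanks & Kanamori (1979)
mw = 5.0
m0 = 10 ** (1.5 * mw + 9.1)               # N·m

# Because stress drop scales as fc**3, a factor-of-two disagreement
# in corner frequency yields a factor-of-eight disagreement in
# stress drop:
for fc in (1.0, 2.0):
    print(f"fc = {fc:.1f} Hz -> {brune_stress_drop(m0, fc) / 1e6:.1f} MPa")
```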
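
The analytical LTFM equations derived in the thesis are not reproduced in this abstract. As a loose illustration of the idea behind the numerical model (strain accumulates steadily, earthquake probability grows with stored strain, and each earthquake releases only part of that strain, so the specific sequence of past events matters), here is a minimal simulation sketch; the functional forms and parameters are invented for illustration and are not the thesis's equations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (placeholders, not from the thesis):
loading_rate = 1.0          # strain accumulated per year (arbitrary units)
threshold = 100.0           # strain at which rupture probability starts rising
gain = 5e-4                 # probability increase per unit of excess strain
release_frac = (0.5, 1.0)   # each event releases a random fraction of strain

def simulate(years=10_000):
    """Simulate a fault-memory-style earthquake sequence: because each
    event releases only part of the stored strain, the probability of
    the next event depends on the specific history of past events."""
    s, quakes = 0.0, []
    for t in range(years):
        s += loading_rate
        p = max(0.0, gain * (s - threshold))       # illustrative hazard form
        if rng.random() < min(p, 1.0):
            quakes.append(t)
            s *= 1.0 - rng.uniform(*release_frac)  # partial strain release
    return np.diff(quakes)                         # inter-event times

intervals = simulate()
print(f"{intervals.size} events; mean recurrence = {intervals.mean():.0f} yr, "
      f"CV = {intervals.std() / intervals.mean():.2f}")
```

In such a model, a cluster of incomplete releases leaves extra strain on the fault, raising the probability of the next event relative to a memoryless recurrence model, which is the qualitative behavior the abstract describes.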
