Crustal strain rates are fundamentally important quantities for assessing seismic hazard: knowing where and how quickly strain is accumulating gives insight into where we can expect stored elastic energy to be released seismically. Secular crustal strain rates, based on geologic and GNSS data, have been used to constrain seismic hazard models such as UCERF3 \citep{Field2014}. Dense networks of continuous GNSS stations, such as the Plate Boundary Observatory (PBO), have sufficient spatial and temporal coverage to make estimates of transient crustal strain rates feasible, where a transient is considered to be any deviation from the secular strain rate. Transient crustal strain rates can illuminate geophysical signals that may not be immediately apparent from inspecting the GNSS displacement data. This too is relevant for assessing seismic hazard, because there are several instances of major earthquakes being triggered by transient geophysical phenomena, such as slow slip events \citep{Roeloffs2006} and postseismic deformation \citep{Freed2001}. For these reasons, it is important to develop and improve upon methods for estimating crustal strain rates from GNSS data.
Most methods for estimating crustal strain rates from GNSS data, both secular and time dependent, assume some parametric form for the deformation signal; strain rates are then the spatial derivatives of the best-fitting model. The simplest and most traditional method for estimating secular crustal strain rates assumes that GNSS-derived velocities can be described with a first-order polynomial (i.e., having a constant spatial gradient) over some subnetwork of the GNSS stations \citep[e.g.][]{Frank1966,Prescott1976,Savage1986,Feigl1990,Murray2000}. Linear least squares is then used to find the components of the strain rate tensor for each subnetwork. The uniform spatial gradient assumption is clearly not appropriate when subnetworks span a large area, although it is not clear when a subnetwork is too large. To help overcome this deficiency, \citet{Shen1996} and \citet{Shen2015} used an inverse distance weighting scheme in which the estimated strain rate at a point is primarily controlled by observations at nearby stations. However, the algorithms of \citet{Shen1996} and \citet{Shen2015} still assume that the underlying displacement gradient is uniform over the entire network. The errors in this assumption manifest themselves as implausibly low formal uncertainties for the estimated strain rates. Other data-based methods for estimating secular strain rates have parameterized velocities with bicubic splines \citep{Beavan2001}, spherical wavelets \citep{Tape2009}, and elastostatic Green's functions \citep{Sandwell2016}. The choice of parameterization can be subjective. If there are too few degrees of freedom in the parameterization, the estimated strain rates will be biased and the uncertainties will be underestimated. If there are too many degrees of freedom, there will not be any coherent features in the estimated strain rates.
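To make the uniform-gradient procedure concrete, the following sketch (in Python with NumPy; the station positions and velocities are synthetic, generated from a known gradient purely for illustration) fits a translation plus a constant velocity gradient to a subnetwork by linear least squares and takes the symmetric part of the gradient as the strain rate tensor:

```python
import numpy as np

# Synthetic subnetwork: station positions (km) and velocities (mm/yr)
# generated from a known uniform velocity gradient, for illustration only.
pos = np.array([[0.0, 0.0], [50.0, 10.0], [20.0, 60.0], [70.0, 45.0]])
G_true = np.array([[0.02, 0.01], [0.00, -0.01]])   # velocity gradient tensor
t_true = np.array([1.0, 0.5])                      # rigid translation
vel = pos @ G_true.T + t_true

# Uniform-gradient model: v_i = t + G x_i. The unknowns are
# [t_x, t_y, G_xx, G_xy, G_yx, G_yy].
A = np.zeros((2 * len(pos), 6))
for i, (x, y) in enumerate(pos):
    A[2 * i]     = [1, 0, x, y, 0, 0]   # east component
    A[2 * i + 1] = [0, 1, 0, 0, x, y]   # north component
m, *_ = np.linalg.lstsq(A, vel.ravel(), rcond=None)

G = m[2:].reshape(2, 2)
strain_rate = 0.5 * (G + G.T)   # symmetric part: strain rate tensor
rotation    = 0.5 * (G - G.T)   # antisymmetric part: rigid rotation
```

With real data, each velocity would also be weighted by its uncertainty, and the fit would be repeated for every subnetwork.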
One could parameterize deformation with a physically motivated model of interseismic deformation \citep[e.g.,][]{Meade2005,McCaffrey2007}. In such models the lithospheric rheology and plate interfaces are assumed to be known. Any errors in the assumed physical model could similarly result in biased strain estimates and underestimated formal uncertainties.
The aforementioned studies have all been concerned with estimating secular strain rates. In recent years the Southern California Earthquake Center (SCEC) community has shown interest in developing methods for detecting transient strain. SCEC supported a transient detection exercise, in which several research groups tested their methods for detecting transient geophysical signals with a synthetic GNSS dataset. Among the methods tested were the Network Strain Filter (NSF) \citep{Ohtani2010} and the Network Inversion Filter (NIF) \citep{Segall1997}. The NSF parameterizes the geophysical signal spatially with wavelets, and the NIF, which is intended for detecting slow fault slip, uses the elastic dislocation Green's functions from \citet{Okada1992}. In both the NSF and the NIF, the time dependence of the geophysical signal is modeled as integrated Brownian motion. The method described in \citet{Holt2013} was also tested in the transient detection exercise; it calculates strain rates using a bicubic spatial parameterization of displacements between time epochs. \citet{Holt2013} defined a detection threshold based on the strain rate magnitude, and we demonstrate that this is indeed an effective criterion for identifying geophysical signal. One main challenge with using these methods for detecting transient geophysical signals is that it is difficult to distinguish signal from noise, since the noise on the inferred deformation can be significantly influenced by the choice of parameterization.
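As an illustration of the temporal model shared by the NSF and NIF, the following sketch (a generic Kalman filter in Python; the noise variances and synthetic observations are arbitrary values for illustration, not taken from either paper) tracks a single basis-function coefficient whose time dependence is integrated Brownian motion:

```python
import numpy as np

# State for one coefficient c(t): [c, c_dot], where c_dot performs
# Brownian motion, so c is integrated Brownian motion.
dt, q, r = 1.0, 0.01, 0.5                # time step, process and obs noise
F = np.array([[1.0, dt], [0.0, 1.0]])    # state transition
Q = q * np.array([[dt**3 / 3, dt**2 / 2],
                  [dt**2 / 2, dt]])      # integrated-Brownian process cov.
H = np.array([[1.0, 0.0]])               # only c itself is observed

x = np.zeros(2)
P = np.eye(2)
rng = np.random.default_rng(0)
obs = np.cumsum(rng.normal(0.1, 0.1, 50))    # synthetic drifting signal
for z in obs:
    # prediction step
    x = F @ x
    P = F @ P @ F.T + Q
    # update step
    S = H @ P @ H.T + r
    K = P @ H.T / S
    x = x + (K * (z - H @ x)).ravel()
    P = (np.eye(2) - K @ H) @ P
```

In the NSF and NIF, one such filter state exists for every wavelet coefficient or fault-slip parameter, and the smoothed states are mapped back to surface deformation through the spatial basis.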
We begin this paper by summarizing the RBF-FD scheme and explaining how we construct differentiation matrices for scattered data. We then introduce the RBF-FD filter, which is used to smooth the observed geodetic data prior to differentiation. We provide two real-world demonstrations of our method for calculating strain rates. First, we calculate the secular strain rates in Southern California from the CMM3 velocity dataset \citep{Shen2011} and verify that our results are consistent with other studies. We then calculate time dependent strain rates in Cascadia from the GPS data provided by UNAVCO. In Cascadia, we analyze strain resulting from slow slip events and compare it to the long term tectonic strain accumulation. Slow slip events are found to produce compression in the Olympic Peninsula, in addition to the compression resulting from tectonic loading. Farther south in Oregon, the slow slip events tend to release the compressional strain that is accumulated tectonically. While similar conclusions have been drawn from fault slip inversions for slow slip events, slip inversions are the product of inverting an ill-conditioned matrix, which makes it difficult to determine whether slip inferences are real or just an artifact of the inversion. The strain rates presented in this paper are more direct observations and can be interpreted with a higher degree of confidence.
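To sketch the idea behind RBF-FD differentiation matrices, the following Python example computes weights that approximate $d/dx$ at a point from scattered neighbors; each row of a differentiation matrix is assembled from such a local stencil. This is a one-dimensional toy using a cubic polyharmonic spline with linear polynomial augmentation; the basis, stencil size, and node locations are illustrative choices, not necessarily the ones used in this paper.

```python
import numpy as np

def rbf_fd_weights(x_center, x_stencil):
    """RBF-FD weights for d/dx at x_center, using phi(r) = r**3
    augmented with a linear polynomial (1-D toy problem)."""
    k = len(x_stencil)
    r = np.abs(x_stencil[:, None] - x_stencil[None, :])
    A = r**3                                       # RBF interpolation matrix
    P = np.vander(x_stencil, 2, increasing=True)   # columns: 1, x
    # Saddle-point system enforcing exact differentiation of the RBFs
    # and of constants and linear functions.
    M = np.zeros((k + 2, k + 2))
    M[:k, :k] = A
    M[:k, k:] = P
    M[k:, :k] = P.T
    rhs = np.zeros(k + 2)
    rc = np.abs(x_center - x_stencil)
    rhs[:k] = 3.0 * rc**2 * np.sign(x_center - x_stencil)  # d/dx of r**3
    rhs[k + 1] = 1.0            # d/dx of x is 1; d/dx of 1 stays 0
    return np.linalg.solve(M, rhs)[:k]

stencil = np.array([-1.0, -0.3, 0.2, 0.8])
w = rbf_fd_weights(0.2, stencil)
# By construction, w differentiates any linear function exactly,
# e.g. w @ (2 + 3*stencil) evaluates to 3.
```

In practice, the stencil for each node is taken from its nearest neighbors, and stacking the resulting weight vectors as sparse rows yields the differentiation matrix for the whole scattered network.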
Gaussian process regression (GPR) is a Bayesian approach that does not rely on restrictive parametric assumptions.
Realizations have annual periodicity and the roughness is controlled by $\theta$. Decreasing $\theta$ has the same effect as including higher frequency sinusoids in the seasonal model. The optimal value for $\theta$ can be found with the REML method as described in Section \ref{sec:NoiseModel}.
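As a hedged illustration of this behavior, the following Python sketch uses a textbook periodic kernel (not necessarily the exact covariance employed here) to draw a realization with annual periodicity; shrinking $\theta$ makes the realizations rougher, mimicking higher-frequency seasonal terms:

```python
import numpy as np

def periodic_cov(t1, t2, sigma=1.0, theta=0.5, period=1.0):
    """Standard periodic covariance with period in years; theta sets the
    roughness of realizations (smaller theta -> rougher)."""
    d = t1[:, None] - t2[None, :]
    return sigma**2 * np.exp(-2.0 * np.sin(np.pi * d / period)**2 / theta**2)

t = np.linspace(0.0, 3.0, 200)                       # three years
C = periodic_cov(t, t) + 1e-10 * np.eye(len(t))      # jitter for stability
rng = np.random.default_rng(1)
sample = rng.multivariate_normal(np.zeros(len(t)), C)  # one realization
```

Because the covariance between times separated by exactly one period equals the prior variance, every realization repeats with an annual cycle.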
\section{Conclusion}\label{sec:Conclusion}
In this paper we describe how Gaussian processes can be used to identify transient strain from GNSS data. Our approach is Bayesian: we use a spatio-temporal stochastic model to describe the prior for transient deformation and the noise in GNSS data. Our estimated transient strain rates come with meaningful estimates of uncertainty, which allows us to objectively discern geophysical signal from noise. Other commonly used methods for estimating secular and transient strain assume a parametric form for the underlying signal, and errors in this assumption will result in a biased solution. In contrast, our stochastic prior model is chosen with maximum likelihood methods, which is an objective and data-driven approach. The lack of subjectivity in our method gives us confidence that the results are unbiased.