|
|
Complexity & Robustness |
|
Pathogen Induced Tolerance pdf Using a dynamic model we study the adaptive immune response to a sequence of two infections. As expected, memory cells generated by the first (primary) infection generate a rapid response when the secondary infection is identical (homologous). When the secondary infection is different (heterologous), the memory cells have a positive effect or no effect at all. In exceptional instances in nature the primary infection generates vulnerability to a heterologous infection. This model predicts "Original Antigenic Sin", but shows that average effector affinity is not an accurate measure of the quality of an immune response, and does not on its own provide a mechanism for vulnerability. In this paper we propose and mathematically study a novel mechanism, "Pathogen Induced Tolerance" (PIT), where antigen from the primary infection takes part in the immune system's tolerance mechanisms, prohibiting the creation of new naive cells specific for the secondary response. The action of this mechanism coincides with Original Antigenic Sin but is not caused by it. We discuss the model in the context of the dengue virus and show that PIT is better suited to describe observed phenomena than the proposed Antibody Dependent Enhancement (ADE).
|
|
Using ICA and realistic BOLD models to obtain joint EEG/fMRI solutions to the problem of source localization pdf We develop two techniques to solve for the spatio-temporal neural activity patterns using Electroencephalogram (EEG) and Functional Magnetic Resonance Imaging (fMRI) data. EEG-only source localization is an inherently underconstrained problem, whereas fMRI by itself suffers from poor temporal resolution. Combining the two modalities transforms source localization into an overconstrained problem, and produces a solution with the high temporal resolution of EEG and the high spatial resolution of fMRI. Our first method uses fMRI to regularize the EEG solution, while our second method uses Independent Components Analysis (ICA) and realistic models of Blood Oxygen-Level Dependent (BOLD) signal to relate the EEG and fMRI data. The second method allows us to treat the fMRI and EEG data on equal footing by fitting simultaneously a solution to both data types. Both techniques avoid the need for ad hoc assumptions about the distribution of neural activity, although ultimately the second method provides more accurate inverse solutions.
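The first method's fMRI-weighted regularization can be sketched as a weighted minimum-norm inverse. The lead field, weights, and problem sizes below are synthetic placeholders, not the paper's forward model:

```python
import numpy as np

# Toy underdetermined EEG problem: 8 sensors, 20 candidate sources.
rng = np.random.default_rng(0)
n_sensors, n_sources = 8, 20
L = rng.standard_normal((n_sensors, n_sources))   # synthetic lead-field matrix
x_true = np.zeros(n_sources)
x_true[3] = 1.0                                   # one active source
y = L @ x_true                                    # noiseless "EEG" measurement

# Suppose fMRI indicates activity at source 3: penalize it less in the
# weighted minimum-norm solution  min ||L x - y||^2 + lam ||W x||^2.
w = np.ones(n_sources)
w[3] = 0.1                                        # low penalty where BOLD is strong
W = np.diag(w)
lam = 1e-2
x_hat = np.linalg.solve(L.T @ L + lam * W.T @ W, L.T @ y)
```

The weighting steers the otherwise non-unique minimum-norm solution toward sources that the fMRI deems active, without hard-coding the activity pattern.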
|
|
Spatial modeling of fire in shrublands using HFire: I. Model description and event simulation pdf A raster-based, spatially explicit model of surface fire spread called HFire is introduced. HFire uses the Rothermel fire spread equation to determine one-dimensional fire spread, which is then fit to two dimensions using the solution to the fire containment problem and the empirical double ellipse formulation of Anderson. HFire borrows the idea of an adaptive time step from previous cell contact raster models and permits fire to spread into a cell from all neighboring cells over multiple time steps, as is done in the heat accumulation approach. The model has been developed to support simulations of single fire events and long-term fire regimes. The model implements equations for surface fire spread and is appropriate for use in grass or shrubland functional types. Model performance on a synthetic landscape, under controlled conditions, was benchmarked using a standard set of tests developed initially to evaluate FARSITE. Additionally, simulations of two Southern California fires spreading through heterogeneous fuels, under realistic conditions, showed similar performance between HFire and FARSITE, good agreement with historical reference data, and shorter model run times for HFire. HFire is available for download: http://firecenter.berkeley.edu/hfire.
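The adaptive time step and heat accumulation ideas can be illustrated with a toy raster model. The grid, spread rates, and 4-neighbor stencil below are illustrative assumptions, not HFire's actual implementation:

```python
import numpy as np

# Toy heat-accumulation spread: two fuel types, faster on the right half.
n, steps = 41, 15
rate = np.full((n, n), 1.0)          # rate of spread, cells per minute
rate[:, n // 2 :] = 2.0              # faster fuels on the right half
burning = np.zeros((n, n), bool)
burning[n // 2, n // 2] = True       # single ignition at the center
frac = np.zeros((n, n))              # accumulated ignition fraction per cell

for _ in range(steps):
    dt = 1.0 / rate.max()            # adaptive step: fastest cell crosses 1 cell
    # each burning cell pushes heat into its 4-neighbors over multiple steps
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        neighbor_burning = np.roll(burning, (dx, dy), axis=(0, 1))
        frac += neighbor_burning * rate * dt
    burning = burning | (frac >= 1.0)  # ignite once enough heat accumulates

burned_right = int(burning[:, n // 2 + 1 :].sum())
burned_left = int(burning[:, : n // 2].sum())
```

Slow fuels simply take more time steps to reach the ignition threshold, so a single global time step handles heterogeneous fuels without sub-cell bookkeeping.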
|
|
Spatial modeling of fire in shrublands using HFire: II. Fire regime sensitivities pdf Despite inherent difficulties, long-term simulation modeling is one of the few approaches available for understanding fire regime sensitivities to different environmental factors. This paper is the second in a series that documents a new raster-based model of fire growth, HFire, which incorporates the physical principles of fire spread (Rothermel, 1972) and is also capable of extended (e.g., multi-century) simulations of repeated wildfires and vegetation recovery. Here we give a basic description of long-term HFire implementation for a shrubland-dominated landscape in southern California, a study area surrounded by urban development and prone to large, intense wildfires. We examined fire regime sensitivities to different input parameters, namely ignition frequency, fire suppression effectiveness (as measured by a stopping rule based on fire rate of spread), and extreme fire weather event frequency. Modeled outputs consisted of 500-yr series of spatially explicit fire patterns, and we analyzed changes in fire size distributions, landscape patterns, and several other descriptive measures used to characterize a fire regime (e.g., fire cycle and rates of ignition success). Our findings, which are generally consistent with other analyses of fire regime dynamics, include a relative insensitivity to ignition rates and a strong influence of extreme fire weather events. Although there are several key areas for improvement, HFire is capable of efficiently simulating realistic fire regimes over very long time scales, allowing for physically based investigations of fire regime dynamics in the future.
|
|
Robustness and Fragility in Immunosenescence Small pdf Large pdf
Stromberg SP, Carlson J (2006) PLoS Comput Biol 2(11): e160. doi:10.1371/journal.pcbi.0020160 We construct a model to study tradeoffs associated with aging in the adaptive immune system, focusing on cumulative effects of replacing naive cells with memory cells. Binding affinities are characterized by a stochastic shape space model. Loss is measured in terms of total antigen population over the course of an infection. We monitor evolution of cell populations on the shape space over a string of infections, and find that the distribution of losses becomes increasingly heavy tailed with time. Initially this lowers the average loss: the memory cell population becomes tuned to the history of past exposures, reducing the loss of the system when subjected to a second, similar infection. This is accompanied by a corresponding increase in vulnerability to novel infections, which ultimately causes the expected loss to increase due to overspecialization, leading to increasing fragility with age (i.e. immunosenescence). In our model, immunosenescence is not the result of a performance degradation of some specific lymphocyte, but rather a natural consequence of the built-in mechanisms for system adaptation. This "robust, yet fragile" behavior is a key signature of Highly Optimized Tolerance (HOT).
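The specialization tradeoff can be seen in a hedged toy version of the shape-space picture: a one-dimensional shape space where memory coverage is peaked at a past antigen. The functional forms and constants are illustrative, not the stochastic model of the paper:

```python
import math

# Fixed repertoire size: a fraction memory_frac of flat naive coverage is
# replaced by memory sharply peaked at a past antigen x0 (toy assumption).

def loss(x, memory_frac):
    """Loss ~ antigen load ~ 1/response at shape-space point x."""
    x0 = 0.0                                             # past infection site
    naive = (1.0 - memory_frac) * 1.0                    # flat naive coverage
    memory = memory_frac * 3.0 * math.exp(-(x - x0) ** 2)  # peaked memory
    return 1.0 / (naive + memory)

naive_only = loss(0.0, 0.0)   # no memory, antigen at the past site
tuned      = loss(0.0, 0.5)   # memory present, similar (homologous) antigen
novel      = loss(3.0, 0.5)   # memory present, novel antigen far away
```

Memory lowers the loss for repeat exposures but, because it displaces naive coverage, raises the loss for novel antigens: the "robust, yet fragile" signature.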
|
|
Evolutionary Dynamics and Highly Optimized Tolerance pdf We develop a numerical model of a lattice community based on Highly Optimized Tolerance (HOT), which relates the evolution of complexity to robustness tradeoffs in an uncertain environment. With the model, we explore scenarios for evolution and extinction which are abstractions of processes which are commonly discussed in biological and ecological case studies. These include the effects of different habitats on the phenotypic traits of the organisms, the effects of different mutation rates on adaptation, fitness, and diversity, and competition between generalists and specialists. The model exhibits a wide variety of microevolutionary and macroevolutionary phenomena which can arise in organisms which are subject to random mutation, and selection based on fitness evaluated in a specific environment. Generalists arise in uniform habitats, where different disturbances occur with equal frequency, while specialists arise when the relative frequency of different disturbances is skewed. Fast mutators are seen to play a primary role in adaptation, while slow mutators preserve well-adapted configurations. When uniform and skewed habitats are coupled through migration of the organisms, we observe a primitive form of punctuated equilibrium. Rare events in the skewed habitat lead to extinction of the specialists, whereupon generalists invade from the uniform habitat, adapt to their new surroundings, ultimately leading their progeny to become vulnerable to extinction in a subsequent rare disturbance.
|
|
Wildfires, Complexity and Highly Optimized Tolerance pdf Recent, large fires in the western United States have rekindled debates about fire management and the role of natural fire regimes in the resilience of terrestrial ecosystems. This real-world experience parallels debates involving abstract models of forest fires, a central metaphor in complex systems theory. Both real and modeled fire-prone landscapes exhibit roughly power law statistics in fire size versus frequency. Here, we examine historical fire catalogs and a detailed fire simulation model; both are in agreement with a highly optimized tolerance model. Highly optimized tolerance suggests robustness tradeoffs underlie resilience in different fire-prone ecosystems. Understanding these mechanisms may provide new insights into the structure of ecological systems and be key in evaluating fire management strategies and sensitivities to climate change.
|
|
Robustness and the Internet: Theoretical Foundations pdf While control and communications theory have played a crucial role throughout the design of the Internet, a unified and integrated theory of the Internet as a whole has only recently become a practical and achievable research objective. Dramatic progress has been made recently in analytical results that provide for the first time a nascent but promising foundation for a rigorous and coherent mathematical theory underpinning Internet technology. This new theory addresses directly the performance and robustness of both the “horizontal” decentralized and asynchronous nature of control in TCP/IP as well as the “vertical” separation into the layers of the TCP/IP protocol stack from application down to the link layer. These results generalize notions of source and channel coding from information theory as well as decentralized versions of robust control. The new theoretical insights gained about the Internet also combine with our understanding of its origins and evolution to provide a rich source of ideas about complex systems in general. Most surprisingly, our deepening understanding from genomics and molecular biology has revealed that at the network and protocol level, cells and organisms are strikingly similar to technological networks, despite having completely different material substrates, evolution, and development/construction.
|
|
|
|
Highly optimized tolerance and power laws in dense and sparse resource regimes ps pdf
Manning, M, Carlson, JM & Doyle, J (2005) Phys. Rev. E 72, article 016108 Power law cumulative frequency (P) vs. event size (l) distributions P(≥ l) ∼ l^{-α} are frequently cited as evidence for complexity and serve as a starting point for linking theoretical models and mechanisms with observed data. Systems exhibiting this behavior present fundamental mathematical challenges in probability and statistics. The broad span of length and time scales associated with heavy tailed processes often requires special sensitivity to distinctions between discrete and continuous phenomena. A discrete Highly Optimized Tolerance (HOT) model, referred to as the Probability, Loss, Resource (PLR) model, gives the exponent α = 1/d as a function of the dimension d of the underlying substrate in the sparse resource regime. This agrees well with data for wildfires, web file sizes, and electric power outages. However, another HOT model, based on a continuous (dense) distribution of resources, predicts α = 1 + 1/d. In this paper we describe and analyze a third model, the cuts model, which exhibits both behaviors but in different regimes. We use the cuts model to show all three models agree in the dense resource limit. In the sparse resource regime, the continuum model breaks down, but in this case, the cuts and PLR models are described by the same exponent.
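The cumulative convention P(≥ l) ∼ l^{-α} can be checked numerically on synthetic Pareto data. This is a generic estimation sketch, not the wildfire, web, or outage catalogs analyzed in the paper:

```python
import numpy as np

# Estimate the tail exponent alpha of P(>= l) ~ l^{-alpha} from samples.
rng = np.random.default_rng(1)
alpha = 0.5                                  # e.g. the PLR prediction 1/d, d = 2
samples = rng.pareto(alpha, 200_000) + 1.0   # shifted so P(>= l) = l^{-alpha}, l >= 1
l = np.sort(samples)
ccdf = 1.0 - np.arange(len(l)) / len(l)      # empirical P(>= l)

# fit the tail on a log-log scale, excluding the noisy extreme end
mask = (l > 10) & (ccdf > 1e-4)
slope, _ = np.polyfit(np.log(l[mask]), np.log(ccdf[mask]), 1)
alpha_hat = -slope
```

Working with the cumulative distribution rather than a binned density avoids the binning sensitivity that plagues heavy-tailed data, which is one reason the cumulative form is the natural convention here.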
|
|
Design degrees of freedom and mechanisms for complexity ps pdf
Reynolds, D, Carlson, JM & Doyle, J (2002) Phys. Rev. E 66, article 016108. We develop a discrete spectrum of percolation forest fire models characterized by increasing design degrees of freedom (DDOFs). DDOFs are tuned to optimize the yield of trees after a single spark. In the limit of a single DDOF, the model is tuned to the critical density. Additional DDOFs allow for increasingly refined spatial patterns, associated with the cellular structures seen in HOT. The spectrum of models provides a clear illustration of the contrast between criticality and HOT, as well as a concrete quantitative example of how a sequence of robustness tradeoffs naturally arises when increasingly complex systems are developed through additional layers of design. Such tradeoffs are familiar in engineering and biology and are a central aspect of complex systems that can be characterized as HOT. |
|
Mutation, specialization, and hypersensitivity in HOT ps pdf
Zhou, T, Carlson, JM & Doyle, J (2002) Proc. Nat. Acad. Sci. 99, 2049-2054. We introduce a model of evolution in a community, in which individual organisms are represented by percolation lattice models. When an external perturbation impacts an occupied site, it destroys the corresponding connected cluster. The fitness is based on the number of occupied sites which survive. High fitness individuals arise through mutation and natural selection, and are characterized by cellular barrier patterns which prevent large losses in common disturbances. This model shows that HOT, which links complexity to robustness in designed systems, arises naturally through biological mechanisms. While the model represents a severe abstraction of biological evolution, the fact that fitness is concrete and quantifiable allows us to isolate the effects associated with different causes in a manner which is difficult in a more realistic setting. |
|
Complexity and robustness ps pdf
Carlson, JM & Doyle, J (2002) Proc. Nat. Acad. Sci. 99, 2538-2545. HOT was recently introduced as a conceptual framework to study fundamental aspects of complexity. HOT is motivated primarily by systems from biology and engineering and emphasizes 1) highly structured, nongeneric, self-dissimilar internal configurations and 2) robust, yet fragile external behavior. HOT claims these are the most important features of complexity and are not accidents of evolution or artifices of engineering design, but are inevitably intertwined and mutually reinforcing. In the spirit of this collection, our paper contrasts HOT with alternative perspectives on complexity, drawing both on real world examples and also model systems, particularly those from Self-Organized Criticality (SOC). |
|
Highly Optimized Tolerance in epidemic models incorporating local optimization and regrowth ps
pdf Robert, C, Carlson, JM & Doyle, J (2001) Phys. Rev. E 63, article 056122. In the context of a coupled map model of population dynamics, which includes the rapid spread of fatal epidemics, we investigate the consequences of two new features in HOT, a mechanism which describes how complexity arises in systems which are optimized for robust performance in the presence of a harsh external environment. Specifically, we (1) contrast global and local optimization criteria and (2) investigate the effects of time-dependent regrowth. We find that both local and global optimization lead to HOT states, which may differ in their specific layouts, but share many qualitative features. Time-dependent regrowth leads to HOT states which deviate from the optimal configurations in the corresponding static models in order to protect the system from slow (or impossible) regrowth which follows the largest losses and extinctions. While the associated map can exhibit complex, chaotic solutions, HOT states are confined to relatively simple dynamical regimes. |
|
Dynamics and changing environments in HOT ps
pdf Zhou, T & Carlson, JM (2001) Phys. Rev. E 62, 3197-3204. HOT is a mechanism for power laws in complex systems which combines engineering design and biological evolution with the familiar statistical approaches in physics. Once the system, the environment, and the optimization scheme have been specified, the HOT state is fixed and corresponds to the set of measure zero (typically a single point) in the configuration space which minimizes a cost function U. Here we explore the U-dependent structures in configuration space which are associated with departures from the optimal state. We introduce dynamics, quantified by an effective temperature T, such that T=0 corresponds to the original HOT state, while infinite T corresponds to completely random configurations. More generally, T defines the range in state space over which fluctuations are likely to be observed. In a fixed environment fluctuations always raise the average cost. However, in a time-dependent environment, mobile configurations can lower the average U because they adjust more efficiently to changes. |
|
Power laws, HOT, and generalized source coding ps
pdf Doyle, J & Carlson, JM (2000) Phys. Rev. Lett. 84, 5656-5659. We introduce a family of robust design problems for complex systems in uncertain environments which are based on tradeoffs between resource allocations and losses. Optimized solutions yield the "robust, yet fragile" feature of HOT and exhibit power law tails in the distributions of events for all but the special case of Shannon coding for data compression. In addition to data compression, we construct specific solutions for world wide web traffic and forest fires, and obtain excellent agreement with measured data. |
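The flavor of the resource/loss tradeoff can be sketched as a small constrained optimization. The relation loss = r^{-β} and the geometric event probabilities below are illustrative choices, not the exact setup of the paper:

```python
import numpy as np

# Events i occur with probability p_i; allocating resource r_i gives loss
# l_i = r_i^{-beta} (toy assumption), with a fixed total budget sum(r) = 1.
beta = 1.0
p = np.geomspace(1.0, 1e-6, 50)
p /= p.sum()                              # event probabilities

def expected_loss(r):
    return (p * r ** -beta).sum()

# Lagrange solution of  min expected_loss(r)  subject to  sum(r) = 1:
r_opt = p ** (1.0 / (1.0 + beta))
r_opt /= r_opt.sum()
r_uniform = np.full_like(p, 1.0 / len(p))

# The optimized design protects likely events, leaving rare, large losses:
l = r_opt ** -beta                        # loss of event i (increasing in i)
ccdf = np.cumsum(p[::-1])[::-1]           # P(loss >= l_i)
slope, _ = np.polyfit(np.log(l[:40]), np.log(ccdf[:40]), 1)
```

The optimal allocation beats the uniform one on expected loss, and the induced event distribution is a power law with exponent 1 + 1/β under these assumptions, which is the "robust, yet fragile" tail structure the abstract describes.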
|
HOT: Robustness and design in complex systems ps
pdf Carlson, JM & Doyle, J (2000) Phys. Rev. Lett. 84, 2529-2532. We introduce HOT, a mechanism that connects evolving structure and power laws in interconnected systems. HOT systems arise, e.g., in biology and engineering, where design and evolution create complex systems sharing common features, including (1) high efficiency, performance, and robustness to designed-for uncertainties, (2) hypersensitivity to design flaws and unanticipated perturbations, (3) nongeneric, specialized, structured configurations, and (4) power laws. We introduce HOT states in the context of percolation, and contrast properties of the high density HOT states with random configurations near the critical point. While both cases exhibit power laws, only HOT states display properties (1-3) associated with design and evolution. |
|
HOT: A mechanism for power laws in designed systems ps
pdf Carlson, JM & Doyle, J (1999) Phys. Rev. E 60, 1412-1427. We introduce a mechanism for generating power law distributions, referred to as HOT, which is motivated by biological organisms and advanced engineering technologies. Our focus is on systems which are optimized, either through natural selection or engineering design, to provide robust performance despite uncertain environments. We suggest that power laws in these systems are due to tradeoffs between yield, cost of resources, and tolerance to risks. These tradeoffs lead to highly optimized designs that allow for occasionally large events. We investigate the mechanisms in the context of percolation and sand pile models in order to emphasize the sharp contrasts between HOT and self-organized criticality (SOC), which has been widely suggested as the origin for power laws in complex systems. Like SOC, HOT produces power laws. However, compared to SOC, HOT states exist for densities which are higher than the critical density, and the power laws are not restricted to special values of the density. The characteristic features of HOT systems include: (1) high efficiency, performance, and robustness to designed-for uncertainties, (2) hypersensitivity to design flaws and unanticipated perturbations, (3) nongeneric, specialized, structured configurations, and (4) power laws. The first three of these are in contrast to the traditional hallmarks of criticality, and are obtained by simply adding the element of design to percolation and sand pile models, which completely changes their characteristics. |
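The contrast between designed and random percolation configurations can be illustrated with a small yield computation. This is a hedged sketch: the grid size, firebreak spacing, spark distribution, and yield measure are illustrative choices, not the paper's model parameters:

```python
import numpy as np
from collections import deque

def expected_yield(occ):
    """Yield after one uniformly random spark: occupied sites minus the
    expected size of the connected cluster the spark lands in."""
    n = occ.shape[0]
    seen = np.zeros_like(occ, bool)
    loss = 0.0
    for i in range(n):
        for j in range(n):
            if occ[i, j] and not seen[i, j]:
                # flood fill one 4-connected cluster
                q, size = deque([(i, j)]), 0
                seen[i, j] = True
                while q:
                    a, b = q.popleft()
                    size += 1
                    for da, db in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        x, y = a + da, b + db
                        if 0 <= x < n and 0 <= y < n and occ[x, y] and not seen[x, y]:
                            seen[x, y] = True
                            q.append((x, y))
                loss += (size / n**2) * size   # P(spark hits cluster) * cluster size
    return occ.sum() - loss

n = 30
designed = np.ones((n, n), bool)
designed[::6, :] = False            # horizontal firebreaks
designed[:, ::6] = False            # vertical firebreaks: 5x5 compartments

rng = np.random.default_rng(3)
random_cfg = rng.random((n, n)) < designed.mean()   # same density, no design

y_hot = expected_yield(designed)
y_rand = expected_yield(random_cfg)
```

At this density the random configuration sits above the percolation threshold and a single spark can take out its giant cluster, while the designed barriers confine every loss to one small compartment.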
|
Multi-Scale Modeling |
|
Shear strain localization in elastodynamic rupture simulations pdf We study strain localization as an enhanced velocity weakening mechanism on earthquake faults. Fault friction is modeled using Shear Transformation Zone (STZ) Theory, a microscopic physical model for non-affine rearrangements in granular fault gouge. STZ Theory is implemented in spring-slider and dynamic rupture models of faults. We compare dynamic shear localization to deformation that is uniform throughout the gouge layer, and find that localized slip enhances the velocity weakening of the gouge. Localized elastodynamic ruptures have larger stress drops and higher peak slip rates than ruptures with homogeneous strain.
|
|
Constraining Earthquake Source Inversions with GPS Data 1: Resolution Based Removal of Artifacts pdf We present a resolution analysis of an inversion of GPS data from the 2004 Mw 6.0 Parkfield Earthquake. This earthquake was recorded at 13 1-Hz GPS receivers, which provides for a truly co-seismic dataset that can be used to infer the static-slip field. We find that the resolution of our inverted slip model is poor at depth and near the edges of the modeled fault plane that are far from GPS receivers. The spatial heterogeneity of the model resolution in the static field inversion leads to artifacts in poorly resolved areas of the fault plane. These artifacts look qualitatively similar to asperities commonly seen in the final slip models of earthquake source inversions, but in this inversion they are caused by a surplus of free parameters. The location of the artifacts depends on the station geometry and the assumed velocity structure. We demonstrate that a nonuniform gridding of model parameters on the fault can remove these artifacts from the inversion. We generate a nonuniform grid with a grid spacing that matches the local resolution length on the fault, and show that it outperforms uniform grids, which either generate spurious structure in poorly resolved regions or lose recoverable information in well-resolved areas of the fault. In a synthetic test, the nonuniform grid correctly averages slip in poorly resolved areas of the fault while recovering small-scale structure near the surface. Finally, we present an inversion of the Parkfield GPS dataset on the nonuniform grid and analyze the errors in the final model.
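The resolution-based reasoning can be sketched with a toy linear inverse problem: compute the model resolution matrix and observe that parameters far from the stations are poorly resolved. The 1-D geometry and Green's functions below are synthetic, not the Parkfield fault or GPS data:

```python
import numpy as np

# Toy "fault": 30 slip parameters at increasing depth, 10 surface stations.
n_par, n_sta = 30, 10
depth = np.arange(1, n_par + 1)                  # parameter depths
station = np.linspace(1, 5, n_sta)               # stations near one end
# Green's functions decay with parameter-station separation (toy kernel)
G = 1.0 / (1.0 + (depth[None, :] - station[:, None]) ** 2)

Gp = np.linalg.pinv(G)                           # minimum-norm generalized inverse
R = Gp @ G                                       # model resolution matrix
res = np.diag(R)                                 # 1 = perfectly resolved, 0 = not
shallow = res[:5].mean()
deep = res[-5:].mean()
```

Diagonal resolution values near zero flag regions where the inversion can only recover averages of slip; matching the grid spacing to the local resolution length, as in the paper, prevents the inversion from inventing structure there.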
|
|
A constitutive model for fault gouge deformation in dynamic rupture simulations pdf In the context of numerical simulations of elastodynamic ruptures, we compare friction laws, including the linear slip-weakening (SW) law, the Dieterich-Ruina (DR) law, and the Free Volume (FV) law. The FV law is based on microscopic physics, incorporating Shear Transformation Zone (STZ) Theory which describes local, non-affine rearrangements within the granular fault gouge. A dynamic state variable models dilation and compaction of the gouge, and accounts for weakening and re-strengthening in the FV law. The principal difference between the FV law and the DR law is associated with the characteristic weakening length scale L. In the FV law, L_FV grows with increasing slip rate, while in the DR law L_DR is independent of slip rate. The length scale for friction is observed to vary with slip velocity in laboratory experiments with simulated fault gouge, suggesting that the FV law captures an essential feature of gouge-filled faults. In simulations of spontaneous elastodynamic rupture, for equal energy dissipation the FV law produces ruptures with smaller nucleation lengths, lower peak slip velocities, and increased slip-weakening distances when compared to ruptures governed by the SW or DR laws. We also examine generalizations of the DR and FV laws that incorporate rapid velocity weakening. The rapid weakening laws produce self-healing slip pulse ruptures for low initial shear loads. For parameters which produce identical net slip in the pulses of each rapid weakening friction law, the FV law exhibits a much shorter nucleation length, a larger slip-weakening distance, and less frictional energy dissipation than corresponding ruptures obtained using the DR law.
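For reference, hedged textbook forms of two of the friction laws being compared can be written in a few lines; the parameter values are illustrative, not those used in the simulations:

```python
import math

def tau_sw(slip, tau_s=1.0, tau_d=0.6, d_c=0.4):
    """Linear slip-weakening: strength drops from tau_s to tau_d over slip d_c."""
    if slip >= d_c:
        return tau_d
    return tau_s - (tau_s - tau_d) * slip / d_c

def mu_dr_steady(v, mu0=0.6, a=0.010, b=0.015, v0=1e-6):
    """Dieterich-Ruina steady-state friction; a < b gives velocity weakening."""
    return mu0 + (a - b) * math.log(v / v0)

weakening = [tau_sw(s) for s in (0.0, 0.2, 0.4, 0.8)]   # monotone decay to tau_d
mu_slow, mu_fast = mu_dr_steady(1e-6), mu_dr_steady(1.0)
```

The SW law weakens over a fixed slip distance, while the DR steady state weakens with slip rate; the FV law discussed in the abstract differs precisely in making the analogous weakening length scale itself rate dependent.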
|
|
Strain localization in a shear transformation zone model for amorphous solids pdf We model a sheared disordered solid using the theory of Shear Transformation Zones (STZs). In this mean-field continuum model the density of zones is governed by an effective temperature that approaches a steady state value as energy is dissipated. We compare the STZ model to simulations by Shi et al. (Phys. Rev. Lett. 98, 185505, 2007), finding that the model generates solutions that fit the data, exhibit strain localization, and capture important features of the localization process. We show that perturbations to the effective temperature grow due to an instability in the transient dynamics, but unstable systems do not always develop shear bands. Nonlinear energy dissipation processes interact with perturbation growth to determine whether a material exhibits strain localization. By estimating the effects of these interactions, we derive a criterion that determines which materials exhibit shear bands based on the initial conditions alone. We also show that the shear band width is not set by an inherent diffusion length scale but instead by a dynamical scale that depends on the imposed strain rate.
|
|
Momentum transport in granular flows pdf We investigate the error induced by only considering binary collisions in the momentum transport of hard-sphere granular materials, as is done in kinetic theories. In this process, we first present a general microscopic derivation of the momentum transport equation and compare it to the kinetic theory derivation, which relies on the binary collision assumption. These two derivations yield different microscopic expressions for the stress tensor, which we compare using simulations. This provides a quantitative bound on the regime where binary collisions dominate momentum transport and reveals that most realistic granular flows occur in the region of phase space where the binary collision assumption does not apply.
|
|
Force networks and the dynamic approach to jamming in sheared granular media pdf Diverging correlation lengths on either side of the jamming transition are used to formulate a rheological model of granular shear flow, based on the propagation of stress through force chain networks. The model predicts three distinct flow regimes, characterized by the shear rate dependence of the stress tensor, that have been observed in both simulations and experiments. The boundaries separating the flow regimes are quantitatively determined and testable. In the limit of jammed granular solids, the model predicts the observed anomalous scaling of the shear modulus and a new relation for the shear strain at yield.
|
|
Spatial force correlations in granular shear flow I: numerical evidence pdf We investigate the emergence of long-range correlations in granular shear flow. By increasing the density of a simulated granular flow we observe a spontaneous transition from a dilute regime, where interactions are dominated by binary collisions, to a dense regime characterized by large force networks and collective motions. With increasing density, interacting grains tend to form networks of simultaneous contacts due to the dissipative nature of collisions. We quantify the size of these networks by measuring correlations between grain forces and find that there are dramatic changes in the statistics of contact forces as the size of the networks increases.
|
|
Spatial force correlations in granular shear flow II: theoretical implications pdf Numerical simulations are used to test the kinetic theory constitutive relations of inertial granular shear flow. These predictions are shown to be accurate in the dilute regime, where only binary collisions are relevant, but underestimate the measured value in the dense regime, where force networks of size ξ are present. The discrepancy in the dense regime is due to non-collisional forces that we measure directly in our simulations and arise from elastic deformations of the force networks. We model the non-collisional stress by summing over all paths that elastic waves travel through force networks. This results in an analytical theory that successfully predicts the stress tensor over the entire inertial regime without any adjustable parameters.
|
|
Microstructure and Modeling of Granular Materials pdf
G. Lois, thesis (2006) Here we explore properties of granular materials undergoing shear deformation, emphasizing how macroscopic properties arise from the microscopic interactions between grains. This is carried out using numerical simulations, which confirm that there is indeed a bulk rheology, independent of boundary conditions, that can be modeled using only characteristics of the granular packing. In these simulations we measure spatial force correlations to demonstrate that long-range correlation exists and arises from clusters of simultaneously contacting grains in dense regimes. The size of the clusters defines an important microscopic length-scale that diverges at the jamming transition, where the material first acquires a yield stress, and reveals the nature of grain-interactions. For small values of the length-scale grains interact solely through binary collisions whereas for large values we observe that clusters of simultaneous contacts, along with complex force-chain networks, spontaneously emerge. This network transition is accompanied by a dramatic transformation in the distribution of contact forces between grains that has been observed in previous simulations and experiments. These basic results regarding the microscopic grain-interactions are generic to granular media and have important consequences for constitutive modeling. In particular we show that kinetic theories, which assume binary collisions, only apply below the network transition. In this regime we show that Enskog kinetic theory agrees with data from the simulations. We then proceed to introduce two analytical theories that use the observed microscopic grain-interactions to make predictions. First we propose a new constitutive model-- the Force-Network model-- that quantitatively predicts constitutive relations using properties of the force-networks. Second we demonstrate that STZ theory, which predicts constitutive relations by assuming certain dynamical correlations in amorphous materials, is in agreement with both the microscopic motion of grains and measured constitutive relations in the network regime. |
|
Emergence of multi-contact interactions in contact dynamics simulations of granular shear flows pdf
G. Lois, A. Lemaitre and J. M. Carlson, Europhysics Letters 76, 318 (2006) We examine the binary collision assumption of hard-sphere kinetic theory in numerical simulations of sheared granular materials. For a wide range of densities and restitution coefficients we measure collisional and non-collisional contributions to the stress tensor and find that non-collisional effects dominate at large density and small restitution coefficient. In the regimes where the non-collisional contributions disappear, we test kinetic theory predictions for the pressure without any fitting parameters and find remarkable agreement. In the regimes where the non-collisional contributions become large, we observe groups of simultaneously interacting grains and determine the average multi-contact cluster size using measurements of spatial force correlations. |
|
Methodologies
for Earthquake Hazard Assessment: Model Uncertainty and the WGCEP-2002
Forecast pdf
Page, M. T. and J. M. Carlson (2006) Bull. Seism. Soc. Am. 96, 5, doi: 10.1785/0120050195 Model uncertainty is prevalent in Probabilistic Seismic Hazard Analysis (PSHA) because the underlying statistical signatures for hazard are unknown. While methods for incorporating parameter uncertainty of a particular model in PSHA are well-understood, methods for incorporating model uncertainty are more difficult to implement due to the high degree of dependence between different earthquake-recurrence models. We show that the method used by the 2002 Working Group on California Earthquake Probabilities (WGCEP-2002) to combine the probability distributions given by multiple earthquake recurrence models has several adverse effects on their result. In particular, WGCEP-2002 uses a linear combination of the models which ignores model dependence and leads to large uncertainty in the final hazard estimate. Furthermore, model weights were chosen based on data, which has the potential to systematically bias the final probability distribution. The weighting scheme used in the Working Group report also produces results which depend upon an arbitrary ordering of models. In addition to analyzing current statistical problems, we present alternative methods for rigorously incorporating model uncertainty into PSHA. |
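The weighting scheme criticized in the abstract is a convex combination of the model probabilities. The sketch below, with entirely hypothetical weights and 30-year event probabilities (none taken from the WGCEP-2002 report), shows how a linear combination collapses several dependent models into a single number:

```python
# Linear (convex) combination of earthquake-recurrence-model probabilities,
# as in the WGCEP-2002 scheme discussed above. All numbers are hypothetical.
probs = [0.3, 0.5, 0.7]     # 30-year event probability from three models
weights = [0.2, 0.5, 0.3]   # expert-assigned model weights, summing to 1

combined = sum(w * p for w, p in zip(weights, probs))
print(combined)  # roughly 0.52
```

Treating the models as independent alternatives in this way is exactly what the abstract argues discards the dependence between recurrence models and inflates the uncertainty of the final hazard estimate.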
|
Numerical Tests of Constitutive Laws for
Dense Granular Flows pdf G. Lois, A. Lemaitre and J. M. Carlson, Physical Review E 72, 051303 (2005) We numerically and theoretically study the macroscopic properties of dense, sheared granular materials. In this process we first consider an invariance in Newton's equations, explain how it leads to Bagnold's scaling, and discuss how it relates to the dynamics of granular temperature. Next we implement numerical simulations of granular materials in two different geometries-- simple shear and flow down an incline-- and show that measurements can be extrapolated from one geometry to the other. Then we observe nonaffine rearrangements of clusters of grains in response to shear strain and show that fundamental observations, which served as a basis for the shear transformation zone (STZ) theory of amorphous solids [M. L. Falk and J. S. Langer, Phys. Rev. E 57, 7192 (1998), M. R. S. Bull. 25, 40 (2000)], can be reproduced in granular materials. Finally we present constitutive equations for granular materials as proposed by Lemaitre [Phys. Rev. Lett. 89, 064303 (2002)], based on the dynamics of granular temperature and STZ theory, and show that they match remarkably well with our numerical data from both geometries. |
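Bagnold's scaling, mentioned in the abstract above, is the statement that in inertial granular flow the shear stress grows quadratically with the shear rate. A minimal sketch (the prefactor is a hypothetical stand-in for density and grain-size factors) recovers the exponent from a log-log slope:

```python
import math

# Bagnold scaling: shear stress tau = A * (shear rate)^2 in the inertial regime.
# A is a hypothetical prefactor lumping density, grain diameter, etc.
A = 0.5

def tau(gamma_dot):
    return A * gamma_dot ** 2

# The log-log slope between any two shear rates recovers the exponent 2.
slope = (math.log(tau(10.0)) - math.log(tau(0.1))) / (math.log(10.0) - math.log(0.1))
print(slope)  # -> 2.0 (within floating-point error)
```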
|
Distinguishing
Barriers and Asperities in Near-Source Ground Motion
pdf Page, M. T., E. M. Dunham, and J. M. Carlson (2005) J. Geophys. Res. - Solid Earth 110, B11302, doi:10.1029/2005JB003736. We investigate the ground motion produced by rupture propagation through circular barriers and asperities in an otherwise homogeneous earthquake rupture. Using a three-dimensional finite-difference method, we analyze the effect of asperity radius, strength, and depth in a dynamic model with fixed rupture velocity. We gradually add complexity to the model, eventually approaching the behavior of a spontaneous dynamic rupture, to determine the origin of each feature in the ground motion. A barrier initially resists rupture, which induces rupture-front curvature. These effects focus energy on and off the fault, leading to a concentrated pulse from the barrier region and higher velocities at the surface. Finally, we investigate the scaling laws in a spontaneous dynamic model. We find that dynamic stress drop determines fault-parallel static offset, while the time it takes the barrier to break is a measure of fracture energy. Thus, given sufficiently strong heterogeneity, the prestress and yield stress (relative to sliding friction) of the barrier can both be determined from ground-motion measurements. In addition, we find that models with constraints on rupture velocity have less ground motion than constraint-free, spontaneous dynamic models with equivalent stress drops. This suggests that kinematic models with such constraints overestimate the actual stress heterogeneity of earthquakes. |
|
Boundary lubrication with a glassy interface pdf Recently introduced constitutive equations for the rheology of dense, disordered materials are investigated in the context of stick-slip experiments in boundary lubrication. The model is based on a generalization of the shear transformation zone (STZ) theory, in which plastic deformation is represented by a population of mesoscopic regions which may undergo nonaffine deformations in response to stress. The generalization we study phenomenologically incorporates the effects of aging and glassy relaxation. Under experimental conditions associated with typical transitions from stick-slip to steady sliding and stop-start tests, these effects can be dominant, although the full STZ description is necessary to account for more complex, chaotic transitions.
|
|
Near-source Ground Motion from
Steady State Dynamic Rupture
Pulses
pdf
supplement-pdf Dunham, E & Archuleta, R (2004) Geophys. Res. Lett. 32, L03302, doi:10.1029/2004GL021793. Ground motion from two-dimensional steady state dynamic ruptures is examined for both subshear and supershear rupture velocities. Synthetic seismograms demonstrate that coherent high-frequency information about the source process rapidly attenuates with distance from the fault for subshear ruptures. Such records provide almost no resolution of the spatial extent of the stress breakdown zone. At supershear speeds, S waves radiate away from the fault, preserving the full source spectrum and carrying an exact history of the slip velocity on both the fault-parallel and fault-normal components of motion, whose amplitudes are given by a function of rupture speed that vanishes at the square root of two times the S wave speed. The energy liberated from the strain field by the passage of a supershear rupture is partitioned into fracture energy dissipated within the fault zone and far-field S wave radiation. The partition depends on both the rupture velocity and the size of the breakdown zone. |
|
Dissipative interface waves and
the transient response of a
three dimensional sliding interface with Coulomb friction
pdf Dunham, E (2004) J. Mech. Phys. Solids 53, 327-357. We investigate the linearized response of two elastic half-spaces sliding past one another with constant Coulomb friction to small three dimensional perturbations. Starting with the assumption that friction always opposes slip velocity, we derive a set of linearized boundary conditions relating perturbations of shear traction to slip velocity. Friction introduces an effective viscosity transverse to the direction of the original sliding, but offers no additional resistance to slip aligned with the original sliding direction. The amplitude of transverse slip depends on a nondimensional parameter $\eta = c_s \tau_0 / (\mu v_0)$, where $\tau_0$ is the initial shear stress, $2 v_0$ is the initial slip velocity, $\mu$ is the shear modulus, and $c_s$ is the shear wave speed. As $\eta \to 0$, the transverse shear traction becomes negligible, and we find an azimuthally symmetric Rayleigh wave trapped along the interface. As $\eta \to \infty$, the in-plane and antiplane wave systems frictionally couple into an interface wave with a velocity that is directionally dependent, increasing from the Rayleigh speed in the direction of initial sliding up to the shear wave speed in the transverse direction. Except in these frictional limits and the specialization to two dimensional in-plane geometry, the interface waves are dissipative. In addition to forward and backward propagating interface waves, we find that for $\eta > 1$, a third solution to the dispersion relation appears, corresponding to a damped standing wave mode. For large amplitude perturbations, the interface becomes isotropically dissipative. The behavior resembles the frictionless response in the extremely strong perturbation limit, except that the waves are damped. We extend the linearized analysis by presenting analytical solutions for the transient response of the medium to both line and point sources on the interface.
The resulting self-similar slip pulses consist of the interface waves and head waves, and help explain the transmission of forces across fracture surfaces. Furthermore, we suggest that the $\eta \to \infty$ limit describes the sliding interface behind the crack edge for shear fracture problems in which the absolute level of sliding friction is much larger than any interfacial stress changes. |
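To get a feel for the nondimensional parameter in the abstract above, $\eta = c_s \tau_0 / (\mu v_0)$, one can plug in representative crustal values; the numbers below are illustrative assumptions, not taken from the paper:

```python
# eta = c_s * tau_0 / (mu * v_0), the nondimensional parameter from the abstract.
# All numerical values are illustrative assumptions.
c_s = 3000.0   # shear wave speed, m/s
tau_0 = 10e6   # initial shear stress, Pa (10 MPa)
mu = 30e9      # shear modulus, Pa (30 GPa)
v_0 = 1.0      # half the initial slip velocity, m/s

eta = c_s * tau_0 / (mu * v_0)
print(eta)  # -> 1.0
```

With these values eta is of order one, i.e. between the Rayleigh-wave limit (eta -> 0) and the strongly coupled interface-wave limit (eta -> infinity) described in the abstract.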
|
Evidence for a supershear
transient during the 2002 Denali
Fault earthquake
pdf Dunham, E & Archuleta, R (2004) Bull. Seism. Soc. Am. 94, S256-S268. Elastodynamic considerations suggest that the acceleration of ruptures to supershear velocities is accompanied by the release of Rayleigh waves from the stress breakdown zone. These waves generate a secondary slip pulse trailing the rupture front, but manifest almost entirely in ground motion perpendicular to the fault in the near-source region. We construct a spontaneously propagating rupture model exhibiting these features, and use it to explain ground motions recorded during the 2002 Denali Fault earthquake at pump station 10, located 3 km from the fault. We show that the initial pulses on both the fault normal and fault parallel components are due to the supershear stress release on the fault while the later arriving fault normal pulses result from the trailing subshear slip pulse on the fault. |
|
Coarse graining and control
theory model reduction ps
pdf
Reynolds, D.E. (submitted to J. Stat. Phys., 2003). We explain a method, inspired by control theory model reduction and interpolation theory, that rigorously establishes the types of coarse graining that are appropriate for systems with quadratic, generalized Hamiltonians. For such systems, general conditions are given that establish when local coarse grainings should be valid. Interestingly, our analysis provides a reduction method that is valid regardless of whether or not the system is isotropic. We provide the linear harmonic chain as a prototypical example. Additionally, these reduction techniques are based on the dynamic response of the system, and hence are also applicable to nonequilibrium systems. |
|
A Supershear Transition Mechanism
for Cracks
pdf Dunham, E, Favreau, P, & Carlson, JM (2003) Science 299, 1557-1559. |
|
Rupture pulse characterization:
Self-healing, self-similar,
expanding solutions in a continuum model of fault dynamics ps
pdf Nielsen, SB & Carlson, JM (2000) Bull. Seismol. Soc. Amer. 90, 1480-1497. We investigate the dynamics of self-healing rupture pulses on a stressed fault, embedded in a three dimensional scalar medium. A state dependent friction law which incorporates rate weakening acts at the interface. When the system is sufficiently large that the solutions are not influenced by edge effects, we observe three distinct regimes numerically: (1) expanding cracks, (2) expanding pulses, and (3) arresting pulses. Using analytical arguments based on the balance of stress on the fault, we demonstrate that when a persistent pulse exists (regime 2), it expands as it propagates and displays self-similarity, akin to the classic crack solution. We define a dimensionless parameter H, which depends on the friction, the prestress, and properties of the medium. Numerical results reveal that H controls the transition between regimes where both crack and pulse solutions are allowed, and the regime where only arresting pulses are possible. The boundary which divides expanding crack and pulse solutions depends on local properties associated with the initiation of rupture. Finally, we extend the investigation of pulse properties to cases with well defined heterogeneities in the prestress. In this case, the pulse width is sensitive to the local variations, expanding or contracting as it runs into low or high stress regions, respectively. |
|
Influence of friction and fault
geometry on earthquake rupture ps
pdf Nielsen, SB, Carlson, JM & Olsen, KB (2000) J. Geophys. Res.-Solid Earth 105, 6069-6088. We investigate the impact of variations in the friction and geometry on models of fault dynamics. We focus primarily on a three dimensional continuum model with scalar displacements. Slip occurs on an embedded two dimensional planar interface. Friction is characterized by a two parameter rate and state law, incorporating a characteristic length for weakening, a characteristic time for healing, and a velocity weakening steady state. As the friction parameters are varied there is a crossover from narrow, self-healing slip pulses, to crack-like solutions that heal in response to edge effects. For repeated ruptures the crack-like regime exhibits periodic or aperiodic systemwide events. The self-healing regime exhibits dynamical complexity and a broad distribution of rupture areas. The behavior can also change from periodicity or quasi-periodicity to dynamical complexity as the total fault size or the length to width ratio is increased. Our results for the continuum model agree qualitatively with analogous results obtained for a one dimensional Burridge--Knopoff model in which radiation effects are approximated by viscous dissipation. |
|
Bifurcations from steady sliding
to stick slip in boundary
lubrication ps
pdf Batista, AA & Carlson, JM (1998) Phys. Rev. E 57, 4986-4996. We explore the nature of the transitions between stick slip and steady sliding in models for boundary lubrication. The models are based on the rate and state approach which has been very successful in characterizing the behavior of dry interfaces [A. Ruina, J. Geophys. Res. 88, 10 359 (1983)]. Our models capture the key distinguishing features associated with surfaces separated by a few molecular layers of lubricant. Here we find that the transition from steady sliding to stick slip is typically discontinuous and sometimes hysteretic. When hysteresis is observed it is associated with a subcritical Hopf bifurcation. In either case, we observe a sudden and discontinuous onset in the amplitude of oscillations at the bifurcation point. |
|
Constitutive relation for the
friction between lubricated
surfaces ps
pdf Carlson, JM & Batista, AA (1996) Phys. Rev. E 53, 4153-4165. Motivated by recent experiments and numerical results, we propose a constitutive relation to describe the friction between two surfaces separated by an atomically thin layer of lubricant molecules. Our phenomenological approach involves the development of a rate and state law to describe the macroscopic frictional properties of the system, in a manner similar to that which has been proposed previously by Ruina [J. Geophys. Res. 88, 10 359 (1983)] for the solid on solid case. In our case, the state variable is interpreted in terms of the shear melting of the lubricant, and the constitutive relation captures some of the primary experimental differences between the dry and lubricated systems.