With Abstracts
 Validation of Coevolving Residue Algorithms via Pipeline Sensitivity Analysis: ELSC and OMES and ZNMI, oh my!
pdf
Christopher A. Brown and Kevin S. Brown, PLoS ONE, 5(6): e10779, (2010).
 Pupillometric evidence for the decoupling of attention from perceptual input during offline thought
pdf
M. L. Manning, E. G. Daub, J. S. Langer, and J. M. Carlson, Phys. Rev. E, 79, 016110 (2009).
 Danielle S. Bassett, Jesse A. Brown, Vibhas Deshpande, Jean M. Carlson, Scott T. Grafton, NeuroImage, 54(2): 1262–79, (2011).

Petrovic, N., David L. Alderson and Jean M. Carlson, PLoS ONE, 7(4): e33285. doi:10.1371/journal.pone.0033285, (2012).
 Learning, Memory, and the Role of Neural Network Architecture
pdf
 Descriptor approach for eliminating spurious eigenvalues in hydrodynamic equations
pdf
 The Suppression of Immune System Disorders by Passive Attrition
pdf
Stromberg, S. P., and J. M. Carlson, PLoS One, 5(3): e9648, (2010).
 Friction, Fracture, and Earthquakes
pdf
Daub, E. G., and J. M. Carlson, Ann. Rev. of Cond. Matt. Phys., Vol. 1: 397–418, DOI: 10.1146/annurev-conmatphys-070909-104025 (2010).
 Deformation and Localization in Earthquake Ruptures and Stick-Slip Instabilities
pdf
 Dynamics of Immune System Vulnerabilities
pdf
 How simple can a fire regime model be while still approximating reality?
pdf
The fire regime of a landscape integrates the spatiotemporal pattern of ignitions, fuels, weather, and topography, and one way to describe a fire regime is through its fire size distribution. The purpose of this paper is to present the fire size distribution, both modeled and actual, of a chaparral landscape in southern California, and to progressively simplify the model inputs in order to better understand the main drivers of the fire regime. To that end we systematically simplified components of the HFire fire spread model, including: the number of Santa Ana wind events per year, the distribution in wind speeds, topography, the fuels map, fuels regrowth, and the shape of the landscape. The simpler models retain a distribution whose slope/form is similar to that of the more complex model.
 Wildfire resource allocations in conditions involving multiple threats
pdf
The economic, environmental, and social impact of wildfires is an increasing problem in California and elsewhere. The problem is exacerbated by growing development at the urban wildland interface, placing valuable assets at risk. Often numerous large fires occur simultaneously, so limited resources must be stretched across multiple sites. Tools are urgently needed to help make quick and effective decisions. In this work, we develop a framework to examine tradeoffs associated with fighting multiple fires. Firefighting decisions in California are currently made using FSPro, a spatial model that maps the probability of fire spread, and RAVAR, an economic model of threatened assets. These tools estimate the potential damage of each fire, which can then be used to compare the costs and benefits of different responses to multiple hot spots. We illustrate this concept using HFire, a Rothermel-based model for the spread of wildfire. By changing an extinction parameter we estimate the impact of various levels of suppression on fire size. We also incorporate the interplay between fire size and urban density in the decision-making process. Tradeoffs between resource costs and assets at risk determine the go versus no-go decision for an individual fire. We evaluate the cost of optimal and non-optimal strategies during simultaneously burning fires. Finally, we compare our results with data from the 14 simultaneous fires of October 2003 and the 16 of October 2007 to consider the decisions made in real situations.
 Energetics of strain localization in a model of seismic slip
pdf
 Stick-slip instabilities and shear strain localization in amorphous materials
pdf
E. G. Daub and J. M. Carlson, Phys. Rev. E 80, 066113 (2009).
 Fire in the Earth System
pdf
David M. J. S. Bowman, J. K. Balch, P. Artaxo, W. J. Bond, J. M. Carlson, M. A. Cochrane, C. M. D’Antonio, R. S. DeFries, J. C. Doyle, S. P. Harrison, F. H. Johnston, J. E. Keeley, M. A. Krawchuk, C. A. Kull, J. B. Marston, M. A. Moritz, I. C. Prentice, C. I. Roos, A. C. Scott, T. W. Swetnam, G. R. van der Werf, S. J. Pyne, Science, Vol. 324, no. 5926, pp. 481–484, DOI: 10.1126/science.1163886, (2009).
 Mapping live fuel moisture with MODIS data: A multiple regression approach
pdf
Live fuel moisture (LFM) is an important factor for ascertaining fire risk in shrublands located in Mediterranean climate regions. We examined empirical relationships between LFM and numerous vegetation indices calculated from MODIS composite data for two southern California shrub functional types, chaparral (evergreen) and coastal sage scrub (CSS, drought-deciduous). These relationships were assessed during the annual March–September dry-down period for both individual sites and sites pooled by functional type. The visible atmospherically resistant index (VARI) consistently had the strongest relationships for individual site regressions. An independent method of accuracy assessment, cross-validation, was used to determine model robustness for pooled site regressions. Regression models were developed with n − 1 datasets and tested on the dataset that was withheld. Additional variables were included in the regression models to account for site-specific and interannual differences in vegetation amount and condition. This allowed a single equation to be used for a given functional type. Multiple linear regression models based on pooled sites had slightly lower adjusted R2 values compared with simple linear regression models for individual sites. The best regression models for chaparral and CSS were inverted, and LFM was mapped across Los Angeles County, California (LAC). The methods used in this research show promise for monitoring LFM in chaparral and may be applicable to other Mediterranean shrubland communities.
 What grid cells convey about rat location
pdf
We characterize the relationship between the simultaneously recorded quantities of rodent grid cell firing and the position of the rat. The formalization reveals various properties of grid cell activity when considered as a neural code for representing and updating estimates of the rat's location. We show that, although the spatially periodic response of grid cells appears wasteful, the code is fully combinatorial in capacity. The resulting range for unambiguous position representation is vastly greater than the approximately 1–10 m periods of individual lattices, allowing for unique high-resolution position specification over the behavioral foraging ranges of rats, with excess capacity that could be used for error correction. Next, we show that the merits of the grid cell code for position representation extend well beyond capacity and include arithmetic properties that facilitate position updating. We conclude by considering the numerous implications, for downstream readouts and experimental tests, of the properties of the grid cell code.
 Improving Human Brain Mapping via Joint Inversion of Brain Electrodynamics and the BOLD Signal
pdf
Brown, K., Ortigue, S., Grafton, S., and Carlson, J.M., in revision, NeuroImage, (2009).
We present several methods to improve the resolution of human brain mapping by combining information obtained from surface electroencephalography (EEG) and functional magnetic resonance imaging (fMRI) of the same participant performing the same task in separate imaging sessions. As an initial step in our methods we used independent component analysis (ICA) to obtain task-related sources for both EEG and fMRI. We then used that information in an integrated cost function that attempts to match both data sources and trades goodness of fit in one regime for another. We compared the performance and drawbacks of each method in localizing sources for a dual visual evoked response experiment, and we contrasted the results of adding fMRI information to simple EEG-only inversion methods. We found that adding fMRI information in a variety of ways gives superior results to classical minimum norm source estimation. Our findings lead us to favor a method which attempts to match EEG scalp dynamics along with voxel power obtained from ICA-processed blood oxygenation level dependent (BOLD) data; this method of joint inversion enables us to treat the two data sources as symmetrically as possible.
 Pulse-like, crack-like, and supershear earthquake ruptures with shear strain localization
pdf
E.G. Daub, M.L. Manning, and J.M. Carlson, J. Geophys. Res., 115, B05311, doi:10.1029/2009JB006388 (2010).
We incorporate shear strain localization into spontaneous elastodynamic rupture simulations using a Shear Transformation Zone (STZ) friction law. In the STZ model, plastic strain in the granular fault gouge occurs in local regions called STZs. The number density of STZs is governed by an effective disorder temperature, and regions with elevated effective temperature have an increased strain rate. STZ Theory resolves the dynamic evolution of the effective temperature across the width of the fault zone. Shear bands spontaneously form in the model due to feedbacks amplifying heterogeneities in the initial effective temperature. In dynamic earthquake simulations, strain localization is a mechanism for dynamic fault weakening. A shear band dynamically forms, reduces the sliding stress, and decreases the frictional energy dissipation on the fault. We investigate the effect of the dynamic weakening due to localization in generating pulse-like, crack-like, and supershear rupture. Our results illustrate that the additional weakening and reduction of on-fault energy dissipation due to localization have a significant impact on the initial shear stress required for supershear or pulse-like rupture to propagate on a fault.
 Seismicity in a Model Governed by Competing Frictional Weakening and Healing Mechanisms
pdf
G. Hillers, J.M. Carlson, and R. J. Archuleta, submitted to Geophysical Journal International.
Observations from laboratory, field, and numerical work spanning a wide range of space and time scales suggest a strain dependent progressive evolution of material properties that control the stability of earthquake faults. The associated weakening mechanisms are counterbalanced by a variety of restrengthening mechanisms. The efficiency of the healing processes depends on local crustal properties such as temperature and hydraulic conditions. We investigate the relative effects of these competing nonlinear feedbacks on seismogenesis in the context of evolving frictional properties, using a mechanical earthquake model that is governed by slip weakening friction. Weakening and strengthening mechanisms are parameterized by the evolution of the frictional control variable, the slip weakening rate R, using empirical relationships obtained from laboratory experiments. Weakening depends on the slip of a model earthquake and tends to increase R, following the behavior of real and simulated frictional interfaces. Healing causes R to decrease and depends on the time passed since the last slip. Results from models with these competing feedbacks are compared with simulations using non-evolving friction. Compared to fixed R conditions, evolving properties result in a significantly increased variability in the system dynamics. We find that for a given set of weakening parameters the resulting seismicity patterns are sensitive to details of the restrengthening process, such as the healing rate b and a lower cutoff time, tc, up to which no significant change in the friction parameter is observed. For relatively large and small cutoff times, the statistics are typical of fixed large and small R values, respectively. However, a wide range of intermediate values leads to significant fluctuations in the internal energy levels.
The frequency-size statistics of earthquake occurrence show corresponding nonstationary characteristics on time scales over which negligible fluctuations are observed in the fixed-R case. The progressive evolution implies that, except for extreme weakening and healing rates, faults and fault networks possibly are not well characterized by steady states on typical catalog time scales, thus highlighting the essential role of memory and history dependence in seismogenesis. The results suggest that an extrapolation to future seismicity occurrence based on temporally limited data may be misleading due to variability in seismicity patterns associated with competing mechanisms that affect fault stability.
 Shear strain localization in elastodynamic rupture simulations
pdf
E.G. Daub, M.L. Manning, and J.M. Carlson, Geophysical Research Letters, 35, L12310, doi:10.1029/2008GL033835 (2008).
We study strain localization as an enhanced velocity weakening mechanism on earthquake faults. Fault friction is modeled using Shear Transformation Zone (STZ) Theory, a microscopic physical model for nonaffine rearrangements in granular fault gouge. STZ Theory is implemented in spring slider and dynamic rupture models of faults. We compare dynamic shear localization to deformation that is uniform throughout the gouge layer, and find that localized slip enhances the velocity weakening of the gouge. Localized elastodynamic ruptures have larger stress drops and higher peak slip rates than ruptures with homogeneous strain.
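The spring slider setup mentioned in the abstract can be sketched in a few lines. This is not the STZ friction law itself: the simple velocity-weakening form of the friction coefficient and all parameter values below are hypothetical stand-ins, chosen only to show how a driven block with rate-weakening friction produces stick-slip cycles with peak slip rates far above the loading rate.

```python
import numpy as np

def spring_slider(k=50.0, v_load=1e-3, mu_s=0.6, mu_d=0.4, v_c=1e-2,
                  m=1.0, sigma=1.0, dt=1e-3, steps=50_000):
    """One block pulled through a spring at rate v_load, with friction
    weakening from mu_s toward mu_d as slip rate grows (illustrative
    friction law, not the paper's STZ law). Returns the slip-rate history."""
    x_load = x = v = 0.0
    stuck = True
    v_hist = np.zeros(steps)
    for i in range(steps):
        x_load += v_load * dt
        f = k * (x_load - x)                 # spring force on the block
        if stuck and f > mu_s * sigma:
            stuck = False                    # static threshold exceeded: slip begins
        if not stuck:
            mu = mu_d + (mu_s - mu_d) / (1.0 + v / v_c)   # weakens with slip rate
            v += (f - mu * sigma) / m * dt   # symplectic Euler update
            x += v * dt
            if v <= 0.0:                     # block re-arrests, cycle repeats
                v = 0.0
                stuck = True
        v_hist[i] = v
    return v_hist
```

With these parameters the block sticks while the spring loads, then slips in brief events whose peak slip rate exceeds the loading rate by more than an order of magnitude, the basic stick-slip phenomenology the paper's rupture models build on.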
 Pathogen Induced Tolerance
pdf
S. P. Stromberg and J. M. Carlson, PLoS One, in review, (2010).
 Using ICA and realistic BOLD models to obtain joint EEG/fMRI solutions to the problem of source localization
pdf
T. Brookings, S. Ortigue, S. Grafton, and J. Carlson, NeuroImage, 44(2): 411–20, (2009).
 Constraining Earthquake Source Inversions with GPS Data 1: Resolution Based Removal of Artifacts
pdf
Page, Morgan T., Susana Custódio, Ralph J. Archuleta, and J. M. Carlson, submitted to JGR Solid Earth.
We present a resolution analysis of an inversion of GPS data from the 2004 Mw 6.0 Parkfield Earthquake. This earthquake was recorded at 13 1-Hz GPS receivers, which provides for a truly coseismic dataset that can be used to infer the static slip field. We find that the resolution of our inverted slip model is poor at depth and near the edges of the modeled fault plane that are far from GPS receivers. The spatial heterogeneity of the model resolution in the static field inversion leads to artifacts in poorly resolved areas of the fault plane. These artifacts look qualitatively similar to asperities commonly seen in the final slip models of earthquake source inversions, but in this inversion they are caused by a surplus of free parameters. The location of the artifacts depends on the station geometry and the assumed velocity structure. We demonstrate that a nonuniform gridding of model parameters on the fault can remove these artifacts from the inversion. We generate a nonuniform grid with a grid spacing that matches the local resolution length on the fault, and show that it outperforms uniform grids, which either generate spurious structure in poorly resolved regions or lose recoverable information in well-resolved areas of the fault. In a synthetic test, the nonuniform grid correctly averages slip in poorly resolved areas of the fault while recovering small-scale structure near the surface. Finally, we present an inversion of the Parkfield GPS dataset on the nonuniform grid and analyze the errors in the final model.
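The core idea of the nonuniform gridding, spacing nodes to match a local resolution length, can be sketched in one dimension. The linear resolution-length function below is hypothetical (the paper works on a 2-D fault plane with resolution derived from the inversion itself); the sketch only shows the construction: step sizes equal the resolution length at the current node, so the grid is fine where resolution is good and coarse where it is poor.

```python
def nonuniform_grid(length, res_len, x0=0.0):
    """Generate 1-D node positions whose local spacing equals the
    resolution-length function res_len(x) (illustrative construction)."""
    nodes = [x0]
    while nodes[-1] < x0 + length:
        step = max(res_len(nodes[-1]), 1e-6)  # guard against zero spacing
        nodes.append(nodes[-1] + step)
    return nodes

# hypothetical example: resolution degrades linearly with depth z,
# so nodes are dense near the surface and sparse at depth
grid = nonuniform_grid(10.0, lambda z: 0.5 + 0.3 * z)
```

Each cell then carries roughly one resolvable piece of information, which is why such a grid avoids both spurious deep structure and lost shallow detail.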
 A constitutive model for fault gouge deformation in dynamic rupture simulations
pdf
E.G. Daub and J.M. Carlson, Journal of Geophysical Research, 113, B12309, doi:10.1029/2007JB003577.
In the context of numerical simulations of elastodynamic ruptures, we compare friction laws, including the linear slip-weakening (SW) law, the Dieterich-Ruina (DR) law, and the Free Volume (FV) law. The FV law is based on microscopic physics, incorporating Shear Transformation Zone (STZ) Theory, which describes local, nonaffine rearrangements within the granular fault gouge. A dynamic state variable models dilation and compaction of the gouge, and accounts for weakening and restrengthening in the FV law. The principal difference between the FV law and the DR law is associated with the characteristic length scale L. In the FV law, L_FV grows with increasing slip rate, while in the DR law L_DR is independent of slip rate. The length scale for friction is observed to vary with slip velocity in laboratory experiments with simulated fault gouge, suggesting that the FV law captures an essential feature of gouge-filled faults. In simulations of spontaneous elastodynamic rupture, for equal energy dissipation the FV law produces ruptures with smaller nucleation lengths, lower peak slip velocities, and increased slip required for friction to fully weaken to steady sliding when compared to ruptures governed by the SW or DR laws. We also examine generalizations of the DR and FV laws that incorporate rapid velocity weakening. The rapid weakening laws produce self-healing slip pulse ruptures for low initial shear loads. For parameters which produce identical net slip in the pulses of each rapid weakening friction law, the FV law exhibits a much shorter nucleation length, a larger slip-weakening distance, and less frictional energy dissipation than corresponding ruptures obtained using the DR law.
 Momentum transport in granular flows
pdf
Lois, G., Lemaitre, A., and Carlson, J.M., Computers and Mathematics with Applications 55, 175–183 (2008).
We investigate the error induced by only considering binary collisions in the momentum transport of hard-sphere granular materials, as is done in kinetic theories. In this process, we first present a general microscopic derivation of the momentum transport equation and compare it to the kinetic theory derivation, which relies on the binary collision assumption. These two derivations yield different microscopic expressions for the stress tensor, which we compare using simulations. This provides a quantitative bound on the regime where binary collisions dominate momentum transport and reveals that most realistic granular flows occur in the region of phase space where the binary collision assumption does not apply.
 Strain localization in a shear transformation zone model for amorphous solids
pdf
M.L. Manning, J.S. Langer, and J.M. Carlson, Phys. Rev. E 76, 056106 (2007).
We model a sheared disordered solid using the theory of Shear Transformation Zones (STZs). In this mean-field continuum model the density of zones is governed by an effective temperature that approaches a steady state value as energy is dissipated. We compare the STZ model to simulations by Shi et al. [Phys. Rev. Lett. 98, 185505 (2007)], finding that the model generates solutions that fit the data, exhibit strain localization, and capture important features of the localization process. We show that perturbations to the effective temperature grow due to an instability in the transient dynamics, but unstable systems do not always develop shear bands. Nonlinear energy dissipation processes interact with perturbation growth to determine whether a material exhibits strain localization. By estimating the effects of these interactions, we derive a criterion that determines which materials exhibit shear bands based on the initial conditions alone. We also show that the shear band width is not set by an inherent diffusion length scale but instead by a dynamical scale that depends on the imposed strain rate.
 Steady-state, effective-temperature dynamics in a glassy material
pdf
J. S. Langer and M. L. Manning, Phys. Rev. E 76, 056107 (2007).
 Force networks and the dynamic approach to jamming in sheared granular media
pdf
Lois, G., and Carlson, J.M., Europhys. Lett. 80, 58001 (2007).
Diverging correlation lengths on either side of the jamming transition are used to formulate a rheological model of granular shear flow, based on the propagation of stress through force chain networks. The model predicts three distinct flow regimes, characterized by the shear rate dependence of the stress tensor, that have been observed in both simulations and experiments. The boundaries separating the flow regimes are quantitatively determined and testable. In the limit of jammed granular solids, the model predicts the observed anomalous scaling of the shear modulus and a new relation for the shear strain at yield.
 Spatial force correlations in granular shear flow I: numerical evidence
pdf
Lois, G., Lemaitre, A., and Carlson, J. M., Phys. Rev. E 76, 021302 (2007).
We investigate the emergence of long-range correlations in granular shear flow. By increasing the density of a simulated granular flow we observe a spontaneous transition from a dilute regime, where interactions are dominated by binary collisions, to a dense regime characterized by large force networks and collective motions. With increasing density, interacting grains tend to form networks of simultaneous contacts due to the dissipative nature of collisions. We quantify the size of these networks by measuring correlations between grain forces and find that there are dramatic changes in the statistics of contact forces as the size of the networks increases.
 Spatial force correlations in granular shear flow II: theoretical implications
pdf
Lois, G., Lemaitre, A., and Carlson, J. M., Phys. Rev. E 76, 021303 (2007).
Numerical simulations are used to test the kinetic theory constitutive relations of inertial granular shear flow. These predictions are shown to be accurate in the dilute regime, where only binary collisions are relevant, but underestimate the measured value in the dense regime, where force networks of size ξ are present. The discrepancy in the dense regime is due to noncollisional forces that we measure directly in our simulations and arise from elastic deformations of the force networks. We model the noncollisional stress by summing over all paths that elastic waves travel through force networks. This results in an analytical theory that successfully predicts the stress tensor over the entire inertial regime without any adjustable parameters.
 Modeling Longterm Fire Regimes of Southern California Shrublands
pdf
Peterson, Seth H., Moritz, M.A., Morais, M.E., Dennison, P.E. and Carlson, J.M., International Journal of Wildland Fire, 20, 1–16 (2011).
 Using HFire for Spatial Modeling of Fire in Shrublands
pdf
Peterson, Seth H.; Morais, Marco E.; Carlson, Jean M.; Dennison, Philip E.; Roberts, Dar A.; Moritz, Max A.; Weise, David R., Res. Pap. PSW-RP-259, Albany, CA: U.S. Department of Agriculture, Forest Service, Pacific Southwest Research Station. 44 p.
A raster-based, spatially explicit model of surface fire spread called HFire is introduced. HFire uses the Rothermel fire spread equation to determine one-dimensional fire spread, which is then fit to two dimensions using the solution to the fire containment problem and the empirical double ellipse formulation of Anderson. HFire borrows the idea of an adaptive time step from previous cell contact raster models and permits fire to spread into a cell from all neighboring cells over multiple time steps, as is done in the heat accumulation approach. The model has been developed to support simulations of single fire events and long-term fire regimes. The model implements equations for surface fire spread and is appropriate for use in grass or shrubland functional types. Model performance on a synthetic landscape, under controlled conditions, was benchmarked using a standard set of tests developed initially to evaluate FARSITE. Additionally, simulations of two Southern California fires spreading through heterogeneous fuels, under realistic conditions, showed similar performance between HFire and FARSITE, good agreement to historical reference data, and shorter model run times for HFire. HFire is available for download: http://firecenter.berkeley.edu/hfire.
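The heat accumulation idea referenced in the abstract, fire spreading into a cell gradually over multiple time steps rather than all at once, can be sketched minimally. This is far simpler than HFire itself: the sketch assumes a uniform spread rate, four-connected neighbors, a fixed time step, and no Rothermel equation or ellipse geometry. Each unburned cell accumulates a fractional "fire arrival" from burning neighbors and ignites when the fraction reaches one.

```python
import numpy as np

def spread(rate, ignite_rc, steps, dt, cell=1.0):
    """Minimal heat-accumulation raster fire spread (illustrative only).
    rate: per-cell spread rate grid; ignite_rc: (row, col) ignition point."""
    frac = np.zeros_like(rate)                 # accumulated arrival fraction
    burning = np.zeros_like(rate, dtype=bool)
    burning[ignite_rc] = True
    for _ in range(steps):
        # cells adjacent (4-neighbor) to at least one burning cell
        nb = np.zeros_like(burning)
        nb[1:, :] |= burning[:-1, :]
        nb[:-1, :] |= burning[1:, :]
        nb[:, 1:] |= burning[:, :-1]
        nb[:, :-1] |= burning[:, 1:]
        # unburned cells near fire accumulate fractional arrival each step
        front = nb & ~burning
        frac[front] += rate[front] * dt / cell
        burning |= frac >= 1.0                 # ignite once fully reached
    return burning

fuel = np.full((21, 21), 0.5)                  # uniform spread rate (illustrative)
burned = spread(fuel, (10, 10), steps=40, dt=0.25)
```

With a uniform rate the fire front advances one cell per `cell / (rate * dt)` steps; HFire layers the Rothermel rate, wind-driven ellipse shapes, and an adaptive time step on top of this bookkeeping.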
 Microstructure and Modeling of Granular Materials
pdf
G. Lois, thesis (2006).
Here we explore properties of granular materials undergoing shear deformation, emphasizing how macroscopic properties arise from the microscopic interactions between grains. This is carried out using numerical simulations, which confirm that there is indeed a bulk rheology, independent of boundary conditions, that can be modeled using only characteristics of the granular packing. In these simulations we measure spatial force correlations to demonstrate that long-range correlation exists and arises from clusters of simultaneously contacting grains in dense regimes. The size of the clusters defines an important microscopic lengthscale that diverges at the jamming transition, where the material first acquires a yield stress, and reveals the nature of grain interactions. For small values of the lengthscale grains interact solely through binary collisions, whereas for large values we observe that clusters of simultaneous contacts, along with complex force-chain networks, spontaneously emerge. This network transition is accompanied by a dramatic transformation in the distribution of contact forces between grains that has been observed in previous simulations and experiments.
These basic results regarding the microscopic grain interactions are generic to granular media and have important consequences for constitutive modeling. In particular we show that kinetic theories, which assume binary collisions, only apply below the network transition. In this regime we show that Enskog kinetic theory agrees with data from the simulations. We then proceed to introduce two analytical theories that use the observed microscopic grain interactions to make predictions. First we propose a new constitutive model, the Force-Network model, that quantitatively predicts constitutive relations using properties of the force networks. Second we demonstrate that STZ theory, which predicts constitutive relations by assuming certain dynamical correlations in amorphous materials, is in agreement with both the microscopic motion of grains and measured constitutive relations in the network regime.
 Emergence of multicontact interactions in contact dynamics simulations of granular shear flows
pdf
G. Lois, A. Lemaitre and J. M. Carlson, Europhysics Letters 76, 318 (2006).
We examine the binary collision assumption of hard-sphere kinetic theory in numerical simulations of sheared granular materials. For a wide range of densities and restitution coefficients we measure collisional and noncollisional contributions to the stress tensor and find that noncollisional effects dominate at large density and small restitution coefficient. In the regimes where the noncollisional contributions disappear, we test kinetic theory predictions for the pressure without any fitting parameters and find remarkable agreement. In the regimes where the noncollisional contributions become large, we observe groups of simultaneously interacting grains and determine the average multicontact cluster size using measurements of spatial force correlations.
 Methodologies for Earthquake Hazard Assessment: Model Uncertainty and the WGCEP-2002 Forecast
pdf
Page, M. T. and J. M. Carlson (2006) Bull. Seism. Soc. Am. 96, 5, doi: 10.1785/0120050195.
Model uncertainty is prevalent in Probabilistic Seismic Hazard Analysis (PSHA) because the underlying statistical signatures for hazard are unknown. While methods for incorporating parameter uncertainty of a particular model in PSHA are well understood, methods for incorporating model uncertainty are more difficult to implement due to the high degree of dependence between different earthquake-recurrence models. We show that the method used by the 2002 Working Group on California Earthquake Probabilities (WGCEP-2002) to combine the probability distributions given by multiple earthquake recurrence models has several adverse effects on their result. In particular, WGCEP-2002 uses a linear combination of the models which ignores model dependence and leads to large uncertainty in the final hazard estimate. Furthermore, model weights were chosen based on data, which has the potential to systematically bias the final probability distribution. The weighting scheme used in the Working Group report also produces results which depend upon an arbitrary ordering of models. In addition to analyzing current statistical problems, we present alternative methods for rigorously incorporating model uncertainty into PSHA.
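The uncertainty inflation from a linear combination of models can be seen in a toy example. All numbers below are hypothetical, not WGCEP values: two recurrence models each estimate a rupture probability with modest internal spread, and the mixture's variance picks up an extra between-model term on top of the within-model variances, which is the "large uncertainty in the final hazard estimate" the abstract refers to.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
# hypothetical rupture-probability estimates from two recurrence models
m1 = rng.normal(0.20, 0.03, n)        # model 1 samples
m2 = rng.normal(0.30, 0.03, n)        # model 2 samples
w1, w2 = 0.5, 0.5                     # illustrative model weights

# linear (mixture) combination: each draw comes from one model or the other
mix = np.where(rng.random(n) < w1, m1, m2)

# law of total variance: within-model spread plus between-model spread
var_within = w1 * m1.var() + w2 * m2.var()
var_between = w1 * w2 * (m1.mean() - m2.mean()) ** 2
print(mix.var(), var_within + var_between)
```

Here the between-model term dominates, so the combined distribution is much wider than either model alone even though the models may largely share the same underlying data, the dependence a simple linear combination ignores.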
 Robustness and Fragility in Immunosenescence
Small pdf Large pdf
Stromberg SP, Carlson J (2006) PLoS Comput Biol 2(11): e160. doi:10.1371/journal.pcbi.0020160.
We construct a model to study tradeoffs associated with aging in the adaptive immune system, focusing on cumulative effects of replacing naive cells with memory cells. Binding affinities are characterized by a stochastic shape space model. Loss is measured in terms of total antigen population over the course of an infection. We monitor evolution of cell populations on the shape space over a string of infections, and find that the distribution of losses becomes increasingly heavy-tailed with time. Initially this lowers the average loss: the memory cell population becomes tuned to the history of past exposures, reducing the loss of the system when subjected to a second, similar infection. This is accompanied by a corresponding increase in vulnerability to novel infections, which ultimately causes the expected loss to increase due to overspecialization, leading to increasing fragility with age (i.e., immunosenescence). In our model, immunosenescence is not the result of a performance degradation of some specific lymphocyte, but rather a natural consequence of the built-in mechanisms for system adaptation. This "robust, yet fragile" behavior is a key signature of Highly Optimized Tolerance (HOT).
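The "robust, yet fragile" tradeoff above can be caricatured in a few lines. This is not the paper's stochastic shape-space model: the 1-D shape space, the deterministic repertoires, and the nearest-cell distance used as a loss proxy are all illustrative assumptions. The point is only that a repertoire concentrated near past exposures beats a broad naive repertoire on familiar antigens and loses badly on novel ones.

```python
import numpy as np

def loss(repertoire, antigen):
    # loss proxy: shape-space distance to the nearest cell
    # (functional form is an assumption, not the paper's loss measure)
    return float(np.min(np.abs(repertoire - antigen)))

# idealized repertoires on a 1-D shape space (the paper's is stochastic)
naive  = np.linspace(-10, 10, 200)   # broad, untuned coverage
memory = np.linspace(-1, 1, 200)     # concentrated near past exposures at 0

familiar, novel = 0.2, 7.0
print(loss(memory, familiar) < loss(naive, familiar))  # tuned: smaller loss
print(loss(memory, novel) > loss(naive, novel))        # overspecialized: larger loss
```

Both comparisons come out True: tuning to history lowers loss for a second, similar infection while raising it for a novel one, the mechanism the abstract identifies as the source of immunosenescence.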
 Numerical Tests of Constitutive Laws for Dense Granular Flows
pdf
G. Lois, A. Lemaitre and J. M. Carlson, Physical Review E 72, 051303 (2005).
We numerically and theoretically study the macroscopic properties of dense, sheared granular materials. In this process we first consider an invariance in Newton's equations, explain how it leads to Bagnold's scaling, and discuss how it relates to the dynamics of granular temperature. Next we implement numerical simulations of granular materials in two different geometries, simple shear and flow down an incline, and show that measurements can be extrapolated from one geometry to the other. Then we observe nonaffine rearrangements of clusters of grains in response to shear strain and show that fundamental observations, which served as a basis for the shear transformation zone (STZ) theory of amorphous solids [M. L. Falk and J. S. Langer, Phys. Rev. E 57, 7192 (1998); MRS Bull. 25, 40 (2000)], can be reproduced in granular materials. Finally we present constitutive equations for granular materials as proposed by Lemaitre [Phys. Rev. Lett. 89, 064303 (2002)], based on the dynamics of granular temperature and STZ theory, and show that they match remarkably well with our numerical data from both geometries.
 Distinguishing Barriers and Asperities in Near-Source Ground Motion
pdf
Page, M. T., E. M. Dunham, and J. M. Carlson (2005) J. Geophys. Res. Solid Earth, 110, B11302, doi:10.1029/2005JB003736.
We investigate the ground motion produced by rupture propagation through circular barriers and asperities in an otherwise homogeneous earthquake rupture. Using a three-dimensional finite-difference method, we analyze the effect of asperity radius, strength, and depth in a dynamic model with fixed rupture velocity. We gradually add complexity to the model, eventually approaching the behavior of a spontaneous dynamic rupture, to determine the origin of each feature in the ground motion. A barrier initially resists rupture, which induces rupture-front curvature. These effects focus energy on and off the fault, leading to a concentrated pulse from the barrier region and higher velocities at the surface. Finally, we investigate the scaling laws in a spontaneous dynamic model. We find that dynamic stress drop determines fault-parallel static offset, while the time it takes the barrier to break is a measure of fracture energy. Thus, given sufficiently strong heterogeneity, the prestress and yield stress (relative to sliding friction) of the barrier can both be determined from ground-motion measurements. In addition, we find that models with constraints on rupture velocity have less ground motion than constraint-free, spontaneous dynamic models with equivalent stress drops. This suggests that kinematic models with such constraints overestimate the actual stress heterogeneity of earthquakes.
 Highly optimized tolerance and power laws in dense and sparse resource regimes
ps pdf
Manning, M, Carlson, JM & Doyle, J (2005) Phys. Rev. E 72, article 016108.
Power law cumulative frequency (P) versus event size (l) distributions P(≥ l) ∼ l^(-α) are frequently cited as evidence for complexity and serve as a starting point for linking theoretical models and mechanisms with observed data. Systems exhibiting this behavior present fundamental mathematical challenges in probability and statistics. The broad span of length and time scales associated with heavy-tailed processes often requires special sensitivity to distinctions between discrete and continuous phenomena. A discrete highly optimized tolerance (HOT) model, referred to as the probability, loss, resource (PLR) model, gives the exponent α = 1/d as a function of the dimension d of the underlying substrate in the sparse resource regime. This agrees well with data for wildfires, web file sizes, and electric power outages. However, another HOT model, based on a continuous (dense) distribution of resources, predicts α = 1 + 1/d. In this paper we describe and analyze a third model, the cuts model, which exhibits both behaviors but in different regimes. We use the cuts model to show all three models agree in the dense resource limit. In the sparse resource regime, the continuum model breaks down, but in this case, the cuts and PLR models are described by the same exponent.
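The cumulative power-law form P(≥ l) ∼ l^(-α) discussed in this abstract can be illustrated with a short, self-contained sketch: synthetic event sizes are drawn with a known exponent and α is recovered from the empirical cumulative distribution. This is a generic illustration of the scaling statistics only, not an implementation of the PLR or cuts models; the sample size, seed, and fitting window are arbitrary choices.

```python
import math
import random

# Generic sketch of cumulative power-law statistics P(>= l) ~ l^(-alpha).
# We draw Pareto-distributed event sizes with a known exponent and recover
# alpha from the empirical cumulative distribution. Illustrative only; this
# is not the PLR or cuts model.
random.seed(0)
alpha_true = 1.0  # e.g. the sparse-regime PLR prediction alpha = 1/d with d = 1
n = 200_000

# If U ~ Uniform(0, 1], then l = U**(-1/alpha) satisfies P(>= l) = l**(-alpha) for l >= 1.
sizes = sorted((1.0 - random.random()) ** (-1.0 / alpha_true) for _ in range(n))

# Empirical cumulative frequency, restricted to a fitting window of event sizes.
xs, ys = [], []
for i, s in enumerate(sizes):
    p_geq = (n - i) / n  # fraction of events at least this large
    if 1.0 < s < 1.0e3:
        xs.append(math.log(s))
        ys.append(math.log(p_geq))

# Least-squares slope in log-log coordinates estimates -alpha.
mx = sum(xs) / len(xs)
my = sum(ys) / len(ys)
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
alpha_est = -slope
print(round(alpha_est, 2))  # close to alpha_true
```

The same fitting procedure applied to empirical event catalogs (fire sizes, file sizes, outage sizes) is what connects the model exponents quoted above to data.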
 Three mechanisms for power laws on the Cayley tree
ps pdf
Brookings, T, Carlson, JM & Doyle, J (2005) Phys. Rev. E 72, article 056120.
We compare preferential growth, critical phase transitions, and highly optimized tolerance (HOT) as mechanisms for generating power laws in the familiar and analytically tractable context of lattice percolation and forest fire models on the Cayley tree. All three mechanisms have been widely discussed in the context of complexity in natural and technological systems. This parallel study enables direct comparison of the mechanisms and associated lattice solutions. Criticality fits most naturally into the category of random processes, where power laws are a consequence of fluctuations in an ensemble with no intrinsic scale. The power laws in preferential growth can be understood in the context of competing exponential growth and decay processes. HOT generalizes this functional mechanism involving exponentials of exponentials to a broader class of nonexponential functions, which arise from optimization.
 Robustness and the Internet: Theoretical Foundations
pdf
Doyle, J.C., Low, S., Carlson, J.M., Paganini, F., Vinnicombe, G., Willinger, W., and Parrilo, P. (2005) Robust Design: A Repertoire of Biological, Ecological, and Engineering Case Studies (Santa Fe Institute Studies on the Sciences of Complexity), Erica Jen, Editor, Oxford University Press.
While control and communications theory have played a crucial role in designing aspects of the Internet, a unified and integrated theory of the Internet as a whole has only recently become a practical and achievable research objective. Dramatic progress has been made recently in analytical results that provide for the first time a nascent but promising foundation for a rigorous and coherent mathematical theory underpinning Internet technology. This new theory directly addresses the performance and robustness of both the horizontal, decentralized, and asynchronous nature of control in TCP/IP and the vertical separation into the layers of the TCP/IP protocol stack from application down to the link layer. These results generalize notions of source and channel coding from information theory as well as decentralized versions of robust control. The new theoretical insights gained about the Internet also combine with our understanding of its origins and evolution to provide a rich source of ideas about complex systems in general. Most surprisingly, our deepening understanding from genomics and molecular biology has revealed that at the network and protocol level, cells and organisms are strikingly similar to technological networks, despite having completely different material substrates, evolution, and development/construction.
 Wildfires, Complexity and Highly Optimized Tolerance
pdf
Moritz, M., Morais, M., Summerell, L., Carlson, J.M., and J. Doyle (2005) Proc. Nat. Acad. Sci. 102, 17912–17917.
Recent, large fires in the western United States have rekindled debates about fire management and the role of natural fire regimes in the resilience of terrestrial ecosystems. This real-world experience parallels debates involving abstract models of forest fires, a central metaphor in complex systems theory. Both real and modeled fire-prone landscapes exhibit roughly power law statistics in fire size versus frequency. Here, we examine historical fire catalogs and a detailed fire simulation model; both are in agreement with a highly optimized tolerance model. Highly optimized tolerance suggests robustness tradeoffs underlie resilience in different fire-prone ecosystems. Understanding these mechanisms may provide new insights into the structure of ecological systems and be key in evaluating fire management strategies and sensitivities to climate change.
 Evolutionary Dynamics and Highly Optimized Tolerance
pdf
Zhou, T., Carlson, J.M., and Doyle, J. (2005) J. Theor. Bio. 236, 438–447.
We develop a numerical model of a lattice community based on Highly Optimized Tolerance (HOT), which relates the evolution of complexity to robustness tradeoffs in an uncertain environment. With the model, we explore scenarios for evolution and extinction which are abstractions of processes which are commonly discussed in biological and ecological case studies. These include the effects of different habitats on the phenotypic traits of the organisms, the effects of different mutation rates on adaptation, fitness, and diversity, and competition between generalists and specialists. The model exhibits a wide variety of microevolutionary and macroevolutionary phenomena which can arise in organisms which are subject to random mutation, and selection based on fitness evaluated in a specific environment. Generalists arise in uniform habitats, where different disturbances occur with equal frequency, while specialists arise when the relative frequency of different disturbances is skewed. Fast mutators are seen to play a primary role in adaptation, while slow mutators preserve well-adapted configurations. When uniform and skewed habitats are coupled through migration of the organisms, we observe a primitive form of punctuated equilibrium. Rare events in the skewed habitat lead to extinction of the specialists, whereupon generalists invade from the uniform habitat, adapt to their new surroundings, ultimately leading their progeny to become vulnerable to extinction in a subsequent rare disturbance.
 Boundary lubrication with a glassy interface
pdf
A. Lemaitre and J. M. Carlson, Physical Review E 69, 061611 (2004).
Recently introduced constitutive equations for the rheology of dense, disordered materials are investigated in the context of stick-slip experiments in boundary lubrication. The model is based on a generalization of the shear transformation zone (STZ) theory, in which plastic deformation is represented by a population of mesoscopic regions which may undergo nonaffine deformations in response to stress. The generalization we study phenomenologically incorporates the effects of aging and glassy relaxation. Under experimental conditions associated with typical transitions from stick-slip to steady sliding and stop-start tests, these effects can be dominant, although the full STZ description is necessary to account for more complex, chaotic transitions.
 Near-Source Ground Motion from Steady State Dynamic Rupture Pulses
pdf supplement pdf
Dunham, E & Archuleta, R (2005) Geophys. Res. Lett. 32, L03302, doi:10.1029/2004GL021793.
Ground motion from two-dimensional steady state dynamic ruptures is examined for both subshear and supershear rupture velocities. Synthetic seismograms demonstrate that coherent high-frequency information about the source process rapidly attenuates with distance from the fault for subshear ruptures. Such records provide almost no resolution of the spatial extent of the stress breakdown zone. At supershear speeds, S waves radiate away from the fault, preserving the full source spectrum and carrying an exact history of the slip velocity on both the fault-parallel and fault-normal components of motion, whose amplitudes are given by a function of rupture speed that vanishes at the square root of two times the S wave speed. The energy liberated from the strain field by the passage of a supershear rupture is partitioned into fracture energy dissipated within the fault zone and far-field S wave radiation. The partition depends on both the rupture velocity and the size of the breakdown zone.
 Dissipative interface waves and the transient response of a three-dimensional sliding interface with Coulomb friction
pdf
Dunham, E (2005) J. Mech. Phys. Solids 53, 327–357.
We investigate the linearized response of two elastic half-spaces sliding past one another with constant Coulomb friction to small three-dimensional perturbations. Starting with the assumption that friction always opposes slip velocity, we derive a set of linearized boundary conditions relating perturbations of shear traction to slip velocity. Friction introduces an effective viscosity transverse to the direction of the original sliding, but offers no additional resistance to slip aligned with the original sliding direction. The amplitude of transverse slip depends on a nondimensional parameter η = c_s τ_0 / (μ v_0), where τ_0 is the initial shear stress, 2v_0 is the initial slip velocity, μ is the shear modulus, and c_s is the shear wave speed. As η → 0, the transverse shear traction becomes negligible, and we find an azimuthally symmetric Rayleigh wave trapped along the interface. As η → ∞, the in-plane and antiplane wave systems frictionally couple into an interface wave with a velocity that is directionally dependent, increasing from the Rayleigh speed in the direction of initial sliding up to the shear wave speed in the transverse direction. Except in these frictional limits and the specialization to two-dimensional in-plane geometry, the interface waves are dissipative. In addition to forward and backward propagating interface waves, we find that for η > 1, a third solution to the dispersion relation appears, corresponding to a damped standing wave mode. For large-amplitude perturbations, the interface becomes isotropically dissipative. The behavior resembles the frictionless response in the extremely strong perturbation limit, except that the waves are damped. We extend the linearized analysis by presenting analytical solutions for the transient response of the medium to both line and point sources on the interface. The resulting self-similar slip pulses consist of the interface waves and head waves, and help explain the transmission of forces across fracture surfaces. Furthermore, we suggest that the η → ∞ limit describes the sliding interface behind the crack edge for shear fracture problems in which the absolute level of sliding friction is much larger than any interfacial stress changes.
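As a minimal numerical companion to this abstract, the sketch below evaluates the nondimensional parameter η = c_s τ_0 / (μ v_0) in the two asymptotic regimes discussed. All parameter values are hypothetical, chosen only for illustration; they are not taken from the paper.

```python
# Hedged sketch: the nondimensional parameter eta = c_s * tau_0 / (mu * v_0)
# from the abstract above, which controls the frictional coupling of the
# in-plane and antiplane wave systems. All numeric values are hypothetical.

def eta(c_s, tau_0, mu, v_0):
    """c_s: shear wave speed, tau_0: initial shear stress,
    mu: shear modulus, v_0: half the initial slip velocity."""
    return c_s * tau_0 / (mu * v_0)

# Low initial stress, fast sliding: eta -> 0, near-frictionless response.
print(eta(c_s=3.0e3, tau_0=1.0e4, mu=3.0e10, v_0=1.0))      # ~1e-3
# High initial stress, slow sliding: eta -> infinity, coupled interface wave.
print(eta(c_s=3.0e3, tau_0=1.0e8, mu=3.0e10, v_0=1.0e-3))   # ~1e4
```

Because η is dimensionless, any consistent unit system gives the same value; the two calls simply place the same interface in the two limiting regimes of the linearized analysis.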
 Evidence for a supershear transient during the 2002 Denali earthquake
pdf
Dunham, E & Archuleta, R (2004) Bull. Seism. Soc. Am. 94, S256–S268.
Elastodynamic considerations suggest that the acceleration of ruptures to supershear velocities is accompanied by the release of Rayleigh waves from the stress breakdown zone. These waves generate a secondary slip pulse trailing the rupture front, but manifest almost entirely in ground motion perpendicular to the fault in the near-source region. We construct a spontaneously propagating rupture model exhibiting these features, and use it to explain ground motions recorded during the 2002 Denali Fault earthquake at pump station 10, located 3 km from the fault. We show that the initial pulses on both the fault-normal and fault-parallel components are due to the supershear stress release on the fault, while the later-arriving fault-normal pulses result from the trailing subshear slip pulse on the fault.
 Coarse graining and control theory model reduction
ps pdf
Reynolds, D.E. (submitted to J. Stat. Phys., 2003).
We explain a method, inspired by control theory model reduction and interpolation theory, that rigorously establishes the types of coarse graining that are appropriate for systems with quadratic, generalized Hamiltonians. For such systems, general conditions are given that establish when local coarse grainings should be valid. Interestingly, our analysis provides a reduction method that is valid regardless of whether or not the system is isotropic. We provide the linear harmonic chain as a prototypical example. Additionally, these reduction techniques are based on the dynamic response of the system, and hence are also applicable to nonequilibrium systems.
 A Supershear Transition Mechanism for Cracks
pdf
Dunham, E, Favreau, P, & Carlson, JM (2003) Science 299, 1557–1559.
Seismic data indicate that fault ruptures follow complicated paths with variable velocity because of inhomogeneities in initial stress or fracture energy. We report a phenomenon unique to three-dimensional cracks: Locally stronger fault sections, rather than slowing ruptures, drive them forward at velocities exceeding the shear wave speed. This supershear mechanism differentiates barrier and asperity models of fault heterogeneity, which previously have been regarded as indistinguishable. High-strength barriers concentrate energy, producing potentially destructive pulses of strong ground motion.
 Complexity and robustness
ps pdf
Carlson, JM & Doyle, J (2002) Proc. Nat. Acad. Sci. 99, 2538–2545.
HOT was recently introduced as a conceptual framework to study fundamental aspects of complexity. HOT is motivated primarily by systems from biology and engineering and emphasizes (1) highly structured, nongeneric, self-dissimilar internal configurations and (2) robust, yet fragile external behavior. HOT claims these are the most important features of complexity and are not accidents of evolution or artifices of engineering design, but are inevitably intertwined and mutually reinforcing. In the spirit of this collection, our paper contrasts HOT with alternative perspectives on complexity, drawing on both real-world examples and model systems, particularly those from Self-Organized Criticality (SOC).
 Mutation, specialization, and hypersensitivity in HOT
ps pdf
Zhou, T, Carlson, JM & Doyle, J (2002) Proc. Nat. Acad. Sci. 99, 2049–2054.
We introduce a model of evolution in a community, in which individual organisms are represented by percolation lattice models. When an external perturbation impacts an occupied site, it destroys the corresponding connected cluster. The fitness is based on the number of occupied sites which survive. High fitness individuals arise through mutation and natural selection, and are characterized by cellular barrier patterns which prevent large losses in common disturbances. This model shows that HOT, which links complexity to robustness in designed systems, arises naturally through biological mechanisms. While the model represents a severe abstraction of biological evolution, the fact that fitness is concrete and quantifiable allows us to isolate the effects associated with different causes in a manner which is difficult in a more realistic setting.
 Design degrees of freedom and mechanisms for complexity
ps pdf
Reynolds, D, Carlson, JM & Doyle, J (2002) Phys. Rev. E 66, article 016108.
We develop a discrete spectrum of percolation forest fire models characterized by increasing design degrees of freedom (DDOFs). DDOFs are tuned to optimize the yield of trees after a single spark. In the limit of a single DDOF, the model is tuned to the critical density. Additional DDOFs allow for increasingly refined spatial patterns, associated with the cellular structures seen in HOT. The spectrum of models provides a clear illustration of the contrast between criticality and HOT, as well as a concrete quantitative example of how a sequence of robustness tradeoffs naturally arises when increasingly complex systems are developed through additional layers of design. Such tradeoffs are familiar in engineering and biology and are a central aspect of complex systems that can be characterized as HOT.
 Dynamics and changing environments in HOT
ps pdf
Zhou, T & Carlson, JM (2000) Phys. Rev. E 62, 3197–3204.
HOT is a mechanism for power laws in complex systems which combines engineering design and biological evolution with the familiar statistical approaches in physics. Once the system, the environment, and the optimization scheme have been specified, the HOT state is fixed and corresponds to a set of measure zero (typically a single point) in the configuration space which minimizes a cost function U. Here we explore the U-dependent structures in configuration space which are associated with departures from the optimal state. We introduce dynamics, quantified by an effective temperature T, such that T = 0 corresponds to the original HOT state, while infinite T corresponds to completely random configurations. More generally, T defines the range in state space over which fluctuations are likely to be observed. In a fixed environment fluctuations always raise the average cost. However, in a time-dependent environment, mobile configurations can lower the average U because they adjust more efficiently to changes.
 Highly Optimized Tolerance in epidemic models incorporating local optimization and regrowth
ps pdf
Robert, C, Carlson, JM & Doyle, J (2001) Phys. Rev. E 63, article 056122.
In the context of a coupled map model of population dynamics, which includes the rapid spread of fatal epidemics, we investigate the consequences of two new features in HOT, a mechanism which describes how complexity arises in systems which are optimized for robust performance in the presence of a harsh external environment. Specifically, we (1) contrast global and local optimization criteria and (2) investigate the effects of time-dependent regrowth. We find that both local and global optimization lead to HOT states, which may differ in their specific layouts, but share many qualitative features. Time-dependent regrowth leads to HOT states which deviate from the optimal configurations in the corresponding static models in order to protect the system from the slow (or impossible) regrowth which follows the largest losses and extinctions. While the associated map can exhibit complex, chaotic solutions, HOT states are confined to relatively simple dynamical regimes.
 Rupture pulse characterization: Self-healing, self-similar, expanding solutions in a continuum model of fault dynamics
ps pdf
Nielsen, SB & Carlson, JM (2000) Bull. Seismol. Soc. Amer. 90, 1480–1497.
We investigate the dynamics of self-healing rupture pulses on a stressed fault, embedded in a three-dimensional scalar medium. A state-dependent friction law which incorporates rate weakening acts at the interface. When the system is sufficiently large that the solutions are not influenced by edge effects, we observe three distinct regimes numerically: (1) expanding cracks, (2) expanding pulses, and (3) arresting pulses. Using analytical arguments based on the balance of stress on the fault, we demonstrate that when a persistent pulse exists (regime 2), it expands as it propagates and displays self-similarity, akin to the classic crack solution. We define a dimensionless parameter H, which depends on the friction, the prestress, and properties of the medium. Numerical results reveal that H controls the transition between the regime where both crack and pulse solutions are allowed and the regime where only arresting pulses are possible. The boundary which divides expanding crack and pulse solutions depends on local properties associated with the initiation of rupture. Finally, we extend the investigation of pulse properties to cases with well-defined heterogeneities in the prestress. In this case, the pulse width is sensitive to the local variations, expanding or contracting as it runs into low or high stress regions, respectively.
 Influence of friction and fault geometry on earthquake rupture
ps pdf
Nielsen, SB, Carlson, JM & Olsen, KB (2000) J. Geophys. Res. Solid Earth 105, 6069–6088.
We investigate the impact of variations in the friction and geometry on models of fault dynamics. We focus primarily on a three-dimensional continuum model with scalar displacements. Slip occurs on an embedded two-dimensional planar interface. Friction is characterized by a two parameter rate and state law, incorporating a characteristic length for weakening, a characteristic time for healing, and a velocity weakening steady state. As the friction parameters are varied there is a crossover from narrow, self-healing slip pulses to crack-like solutions that heal in response to edge effects. For repeated ruptures the crack-like regime exhibits periodic or aperiodic system-wide events. The self-healing regime exhibits dynamical complexity and a broad distribution of rupture areas. The behavior can also change from periodicity or quasiperiodicity to dynamical complexity as the total fault size or the length to width ratio is increased. Our results for the continuum model agree qualitatively with analogous results obtained for a one-dimensional Burridge-Knopoff model in which radiation effects are approximated by viscous dissipation.
 HOT: Robustness and design in complex systems
ps pdf
Carlson, JM & Doyle, J (2000) Phys. Rev. Lett. 84, 2529–2532.
We introduce HOT, a mechanism that connects evolving structure and power laws in interconnected systems. HOT systems arise, e.g., in biology and engineering, where design and evolution create complex systems sharing common features, including (1) high efficiency, performance, and robustness to designed-for uncertainties, (2) hypersensitivity to design flaws and unanticipated perturbations, (3) nongeneric, specialized, structured configurations, and (4) power laws. We introduce HOT states in the context of percolation, and contrast properties of the high density HOT states with random configurations near the critical point. While both cases exhibit power laws, only HOT states display properties (1)-(3) associated with design and evolution.
 Power laws, HOT, and generalized source coding
ps pdf
Doyle, J & Carlson, JM (2000) Phys. Rev. Lett. 84, 5656–5659.
We introduce a family of robust design problems for complex systems in uncertain environments which are based on tradeoffs between resource allocations and losses. Optimized solutions yield the "robust, yet fragile" feature of HOT and exhibit power law tails in the distributions of events for all but the special case of Shannon coding for data compression. In addition to data compression, we construct specific solutions for World Wide Web traffic and forest fires, and obtain excellent agreement with measured data.
 HOT: A mechanism for power laws in designed systems
ps pdf
Carlson, JM & Doyle, J (1999) Phys. Rev. E 60, 1412–1427.
We introduce a mechanism for generating power law distributions, referred to as HOT, which is motivated by biological organisms and advanced engineering technologies. Our focus is on systems which are optimized, either through natural selection or engineering design, to provide robust performance despite uncertain environments. We suggest that power laws in these systems are due to tradeoffs between yield, cost of resources, and tolerance to risks. These tradeoffs lead to highly optimized designs that allow for occasionally large events. We investigate the mechanisms in the context of percolation and sand pile models in order to emphasize the sharp contrasts between HOT and self-organized criticality (SOC), which has been widely suggested as the origin for power laws in complex systems. Like SOC, HOT produces power laws. However, compared to SOC, HOT states exist for densities which are higher than the critical density, and the power laws are not restricted to special values of the density. The characteristic features of HOT systems include: (1) high efficiency, performance, and robustness to designed-for uncertainties, (2) hypersensitivity to design flaws and unanticipated perturbations, (3) nongeneric, specialized, structured configurations, and (4) power laws. The first three of these are in contrast to the traditional hallmarks of criticality, and are obtained by simply adding the element of design to percolation and sand pile models, which completely changes their characteristics.
 Bifurcations from steady sliding to slip in boundary lubrication
ps pdf
Batista, AA & Carlson, JM (1998) Phys. Rev. E 57, 4986–4996.
We explore the nature of the transitions between stick-slip and steady sliding in models for boundary lubrication. The models are based on the rate and state approach which has been very successful in characterizing the behavior of dry interfaces [A. Ruina, J. Geophys. Res. 88, 10,359 (1983)]. Our models capture the key distinguishing features associated with surfaces separated by a few molecular layers of lubricant. Here we find that the transition from steady sliding to stick-slip is typically discontinuous and sometimes hysteretic. When hysteresis is observed it is associated with a subcritical Hopf bifurcation. In either case, we observe a sudden and discontinuous onset in the amplitude of oscillations at the bifurcation point.
 Constitutive relation for the friction between lubricated surfaces
ps pdf
Carlson, JM & Batista, AA (1996) Phys. Rev. E 53, 4153–4165.
Motivated by recent experiments and numerical results, we propose a constitutive relation to describe the friction between two surfaces separated by an atomically thin layer of lubricant molecules. Our phenomenological approach involves the development of a rate and state law to describe the macroscopic frictional properties of the system, in a manner similar to that which has been proposed previously by Ruina [J. Geophys. Res. 88, 10,359 (1983)] for the solid-on-solid case. In our case, the state variable is interpreted in terms of the shear melting of the lubricant, and the constitutive relation captures some of the primary experimental differences between the dry and lubricated systems.
Correlated amino acid substitution algorithms attempt to discover groups of residues that cofluctuate due to either structural or functional constraints. Although these algorithms could inform both ab initio protein folding calculations and evolutionary studies, their utility for these purposes has been hindered by a lack of confidence in their predictions due to hard-to-control sources of error. To complicate matters further, naive users are confronted with a multitude of methods to choose from, in addition to the mechanics of assembling and pruning a dataset. We first introduce a new pair scoring method, called ZNMI (Z-scored-product Normalized Mutual Information), which drastically improves the performance of mutual information for cofluctuating residue prediction. Second, and more importantly, we recast the process of finding coevolving residues in proteins as a data-processing pipeline inspired by the medical imaging literature. We construct an ensemble of alignment partitions that can be used in a cross-validation scheme to assess the effects of choices made during the procedure on the resulting predictions. This pipeline sensitivity study gives a measure of reproducibility (how similar are the predictions given perturbations to the pipeline?) and accuracy (are residue pairs with large couplings on average close in tertiary structure?). We choose a handful of published methods, along with ZNMI, and compare their reproducibility and accuracy on three diverse protein families. We find that (i) of the algorithms tested, while none appear to be both highly reproducible and accurate, ZNMI is one of the most accurate by far and (ii) while users should be wary of predictions drawn from a single alignment, considering an ensemble of subalignments can help to determine both highly accurate and reproducible couplings. Our cross-validation approach should be of interest both to developers and end users of algorithms that try to detect correlated amino acid substitutions.
Jonathan Smallwood, Kevin S. Brown, Christine Tipper, Barry Giesbrecht, Michael S. Franklin, Michael D. Mrazek, Jean M. Carlson & Jonathan W. Schooler, PLoS ONE 6(3): e18298. doi:10.1371/journal.pone.0018298 (2011).
Accumulating evidence suggests that the brain can efficiently process both external and internal information. The processing of internal information is a distinct “offline” cognitive mode that requires not only spontaneously generated mental activity but also, it has been hypothesized, a decoupling of attention from perception in order to separate competing streams of internal and external information. This process of decoupling is potentially adaptive because it could prevent unimportant external events from disrupting an internal train of thought. Here, we use measurements of pupil diameter (PD) to provide concrete evidence for the role of decoupling during spontaneous cognitive activity. First, during periods conducive to offline thought but not during periods of task focus, PD exhibited spontaneous activity decoupled from task events. Second, periods requiring external task focus were characterized by large task-evoked changes in PD; in contrast, encoding failures were preceded by episodes of high spontaneous baseline PD activity. Finally, high spontaneous PD activity also occurred prior to only the slowest 20% of correct responses, suggesting that high baseline PD indexes a distinct mode of cognitive functioning. Together, these data are consistent with the decoupling hypothesis, which suggests that the capacity for spontaneous cognitive activity depends upon minimizing disruptions from the external world.
We use shear transformation zone (STZ) theory to develop a deformation map for amorphous solids as a function of the imposed shear rate and initial material preparation. The STZ formulation incorporates recent simulation results [ T. K. Haxton and A. J. Liu Phys. Rev. Lett. 99 195701 (2007)] showing that the steady state effective temperature is rate dependent. The resulting model predicts a wide range of deformation behavior as a function of the initial conditions, including homogeneous deformation, broad shear bands, extremely thin shear bands, and the onset of material failure. In particular, the STZ model predicts homogeneous deformation for shorter quench times and lower strain rates, and inhomogeneous deformation for longer quench times and higher strain rates. The location of the transition between homogeneous and inhomogeneous flow on the deformation map is determined in part by the steady state effective temperature, which is likely material dependent. This model also suggests that material failure occurs due to a runaway feedback between shear heating and the local disorder, and provides an explanation for the thickness of shear bands near the onset of material failure. We find that this model, which resolves dynamics within a sheared material interface, predicts that the stress weakens with strain much more rapidly than a similar model which uses a single state variable to specify internal dynamics on the interface.
Whole-brain network analysis of diffusion imaging tractography data is an important new tool for quantification of differential connectivity patterns across individuals and between groups. Here we investigate both the conservation of network architectural properties across methodological variation and the reproducibility of individual architecture across multiple scanning sessions. Diffusion spectrum imaging (DSI) and diffusion tensor imaging (DTI) data were both acquired in triplicate from a cohort of healthy young adults. Deterministic tractography was performed on each dataset and interregional connectivity matrices were then derived by applying each of three widely used whole-brain parcellation schemes over a range of spatial resolutions. Across acquisitions and preprocessing streams, anatomical brain networks were found to be sparsely connected, hierarchical, and assortative. They also displayed signatures of topo-physical interdependence such as Rentian scaling. Basic connectivity properties and several graph metrics consistently displayed high reproducibility and low variability in both DSI and DTI networks. The relative increased sensitivity of DSI to complex fiber configurations was evident in increased tract counts and network density compared with DTI. In combination, this pattern of results shows that network analysis of human white matter connectivity provides sensitive and temporally stable topological and physical estimates of individual cortical structure across multiple spatial scales.
Bassett, D. S., Wymbs, N., Porter, M. A., Mucha, P. J., Carlson, J. M., and S. T. Grafton, PNAS 108(18):7641–6 (2011).
Human learning is a complex phenomenon requiring flexibility to adapt existing brain function and precision in selecting new neurophysiological activities to drive desired behavior. These two attributes—flexibility and selection—must operate over multiple temporal scales as performance of a skill changes from being slow and challenging to being fast and automatic. Such selective adaptability is naturally provided by modular structure, which plays a critical role in evolution, development, and optimal network function. Using functional connectivity measurements of brain activity acquired from initial training through mastery of a simple motor skill, we investigate the role of modularity in human learning by identifying dynamic changes of modular organization spanning multiple temporal scales. Our results indicate that flexibility, which we measure by the allegiance of nodes to modules, in one experimental session predicts the relative amount of learning in a future session. We also develop a general statistical framework for the identification of modular architectures in evolving systems, which is broadly applicable to disciplines where network adaptability is crucial to the understanding of system performance.
Petrovic, N., and P. Oh, MNRAS, 413, 2103–2120, (2011).
21 cm observations have the potential to revolutionize our understanding of the high-redshift universe. Whilst extremely bright radio continuum foregrounds exist at these frequencies, their spectral smoothness can be exploited to allow efficient foreground subtraction. It is well known that, regardless of other instrumental effects, this removes power on scales comparable to the survey bandwidth. We investigate associated systematic biases. We show that removing line-of-sight fluctuations on large scales aliases into suppression of the 3D power spectrum across a broad range of scales. This bias can be dealt with by correctly marginalizing over small wavenumbers in the 1D power spectrum; however, the unbiased estimator will have unavoidably larger variance. We also show that Gaussian realizations of the power spectrum permit accurate and extremely rapid Monte Carlo simulations for error analysis; repeated realizations of the fully non-Gaussian field are unnecessary. We perform Monte Carlo maximum-likelihood simulations of foreground removal which yield unbiased, minimum-variance estimates of the power spectrum in agreement with Fisher matrix estimates. Foreground removal also distorts the 21 cm PDF, reducing the contrast between neutral and ionized regions, with potentially serious consequences for efforts to extract information from the PDF. We show that it is the subtraction of large-scale modes which is responsible for this distortion, and that it is less severe in the earlier stages of reionization. It can be reduced by using larger bandwidths. In the late stages of reionization, identification of the largest ionized regions (which consist of foreground emission only) provides calibration points which potentially allow recovery of large-scale modes.
Finally, we also show that: (i) the broad frequency response of synchrotron and free-free emission will smear out any features in the electron momentum distribution and ensure spectrally smooth foregrounds; (ii) extragalactic radio recombination lines should be negligible foregrounds.
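The Gaussian-realization Monte Carlo idea can be sketched in a few lines: draw Gaussian fields with a target power spectrum, apply a crude foreground cut along the line of sight, and read error bars off the scatter of the estimator across draws. This is a generic illustration with our own function names and a toy power-law spectrum, not the paper's analysis pipeline:

```python
import numpy as np

def gaussian_realization(n, pk, seed=0):
    """Draw one 3D Gaussian random field with target isotropic power pk(k)."""
    rng = np.random.default_rng(seed)
    k = 2 * np.pi * np.fft.fftfreq(n)
    kmag = np.sqrt(sum(ki**2 for ki in np.meshgrid(k, k, k, indexing="ij")))
    amp = np.where(kmag > 0, np.sqrt(pk(kmag + (kmag == 0))), 0.0)
    white = np.fft.fftn(rng.normal(size=(n, n, n)))
    return np.fft.ifftn(amp * white).real

def subtract_los_mean(field, axis=0):
    """Crudest 'smooth foreground' removal: drop the k_parallel = 0 mode."""
    return field - field.mean(axis=axis, keepdims=True)

def radial_power(field):
    """Spherically averaged 3D power spectrum of the field."""
    n = field.shape[0]
    pk3d = np.abs(np.fft.fftn(field))**2 / field.size
    k = 2 * np.pi * np.fft.fftfreq(n)
    kmag = np.sqrt(sum(ki**2 for ki in np.meshgrid(k, k, k, indexing="ij")))
    bins = np.linspace(0, kmag.max(), 12)
    idx = np.digitize(kmag.ravel(), bins)
    return np.array([pk3d.ravel()[idx == i].mean() for i in range(1, len(bins))])

# Monte Carlo over realizations: error bars come from the scatter of the
# estimator across draws, with no need for fully non-Gaussian fields.
draws = [radial_power(subtract_los_mean(gaussian_realization(16, lambda k: k**-2.0, seed=s)))
         for s in range(20)]
mean_pk, err_pk = np.mean(draws, axis=0), np.std(draws, axis=0)
```

Even this toy version shows the aliasing effect: cutting the single line-of-sight DC mode perturbs the spherically averaged power across many bins, not just the largest scales.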
Challenges associated with the allocation of limited resources to mitigate the impact of natural disasters inspire fundamentally new theoretical questions for dynamic decision making in coupled human and natural systems. Wildfires are one of several types of disaster phenomena, including oil spills and disease epidemics, where (1) the disaster evolves on the same timescale as the response effort, and (2) delays in response can lead to increased disaster severity and thus greater demand for resources. We introduce a minimal stochastic process to represent wildfire progression that nonetheless accurately captures the heavy-tailed statistical distribution of fire sizes observed in nature. We then couple this model for fire spread to a series of response models that isolate fundamental tradeoffs both in the strength and timing of response and also in division of limited resources across multiple competing suppression efforts. Using this framework, we compute optimal strategies for decision making scenarios that arise in fire response policy.
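A minimal stochastic fire model of this kind can be sketched as a near-critical branching process, which naturally produces heavy-tailed size distributions. The version below is our own toy construction (hypothetical parameter names, not the paper's model); it also exposes the timing tradeoff, since delayed suppression leaves the fire near-critical for longer:

```python
import numpy as np

def fire_size(rng, growth=0.97, response_delay=0, suppression=0.5, cap=100_000):
    """Burned area of one fire, modeled as a branching process: each front of
    burning cells ignites a Poisson-distributed number of new cells.  After
    `response_delay` front generations, suppression multiplies the spread
    rate, so a delayed response allows near-critical growth to continue."""
    burning, burned, step = 1, 0, 0
    while burning and burned < cap:
        burned += burning
        rate = growth * (suppression if step >= response_delay else 1.0)
        burning = rng.poisson(rate * burning)
        step += 1
    return min(burned, cap)

rng = np.random.default_rng(0)
sizes = np.array([fire_size(rng, response_delay=5) for _ in range(5000)])
# Near growth = 1 the size distribution is heavy tailed: a small fraction
# of fires accounts for most of the total burned area.
largest_share = np.sort(sizes)[-50:].sum() / sizes.sum()
```

Comparing mean burned area at different `response_delay` values reproduces the qualitative point: delay increases severity and hence the demand for resources.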
Hermundstad AM, Brown KS, Bassett DS, Carlson JM, PLoS Comput Biol 7(6): e1002063. doi:10.1371/journal.pcbi.1002063 (2011).
The performance of information processing systems, from artificial neural networks to natural neuronal ensembles, depends heavily on the underlying system architecture. In this study, we compare the performance of parallel and layered network architectures during sequential tasks that require both acquisition and retention of information, thereby identifying tradeoffs between learning and memory processes. During the task of supervised, sequential function approximation, networks produce and adapt representations of external information. Performance is evaluated by statistically analyzing the error in these representations while varying the initial network state, the structure of the external information, and the time given to learn the information. We link performance to complexity in network architecture by characterizing local error landscape curvature. We find that variations in error landscape structure give rise to tradeoffs in performance; these include the ability of the network to maximize accuracy versus minimize inaccuracy and produce specific versus generalizable representations of information. Parallel networks generate smooth error landscapes with deep, narrow minima, enabling them to find highly specific representations given sufficient time. While accurate, however, these representations are difficult to generalize. In contrast, layered networks generate rough error landscapes with a variety of local minima, allowing them to quickly find coarse representations. Although less accurate, these representations are easily adaptable. The presence of measurable performance tradeoffs in both layered and parallel networks has implications for understanding the behavior of a wide variety of natural and artificial learning systems.
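The parallel-versus-layered comparison can be caricatured in a few lines: two small bias-free tanh networks with comparable parameter counts, one wide hidden layer versus two narrow layers, trained on the same function approximation task. This is a hedged toy sketch (our own architectures, target function, and finite-difference training loop), not the study's actual networks or protocol:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 64)[:, None]
y = np.sin(np.pi * x)                       # odd target suits bias-free nets

def init(shapes):
    return [rng.normal(scale=0.5, size=s) for s in shapes]

def forward(params, x):
    """Successive tanh layers; the final weight matrix is linear."""
    h = x
    for W in params[:-1]:
        h = np.tanh(h @ W)
    return h @ params[-1]

def mse(params):
    return float(np.mean((forward(params, x) - y) ** 2))

def train(params, lr=0.05, steps=300, eps=1e-5):
    """Full-batch gradient descent; central finite differences are fine
    for networks this small."""
    for _ in range(steps):
        grads = []
        for W in params:
            g = np.zeros_like(W)
            it = np.nditer(W, flags=["multi_index"])
            for _ in it:
                i = it.multi_index
                W[i] += eps; up = mse(params)
                W[i] -= 2 * eps; dn = mse(params)
                W[i] += eps
                g[i] = (up - dn) / (2 * eps)
            grads.append(g)
        for W, g in zip(params, grads):
            W -= lr * g
    return params

# "Parallel": one wide hidden layer.  "Layered": two narrow hidden layers.
parallel = train(init([(1, 12), (12, 1)]))
layered = train(init([(1, 4), (4, 4), (4, 1)]))
```

Repeating the training over many random initial states and comparing final-error statistics is the toy analogue of the paper's error-landscape analysis.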
M. L. Manning, B. Bamieh, and J. M. Carlson, Journal of Computational Physics, arXiv:0705.1542v2 [physics.comp-ph] (2007).
We describe a general framework for avoiding spurious eigenvalues: unphysical unstable eigenvalues that often occur in hydrodynamic stability problems. In two example problems, we show that when system stability is analyzed numerically using descriptor notation, spurious eigenvalues are eliminated. Descriptor notation is a generalized eigenvalue formulation for differential-algebraic equations that explicitly retains algebraic constraints. We propose that spurious eigenvalues are likely to occur when algebraic constraints are used to analytically reduce the number of independent variables in a differential-algebraic system of equations before the system is approximated numerically. In contrast, the simple and easily generalizable descriptor framework simultaneously solves the differential equations and algebraic constraints and is well-suited to stability analysis in these systems.
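As an illustration of the descriptor idea, consider a two-variable toy differential-algebraic system (our own example, not one of the paper's hydrodynamic problems). The algebraic constraint is kept explicitly via a singular mass matrix, and the generalized eigenproblem is solved directly; this assumes NumPy and SciPy are available:

```python
import numpy as np
from scipy.linalg import eig

# Toy differential-algebraic system:
#     x' = -2x + y        (differential equation)
#      0 =   x - y        (algebraic constraint)
# Eliminating y by hand gives x' = -x, so the physical eigenvalue is -1.
A = np.array([[-2.0,  1.0],
              [ 1.0, -1.0]])
B = np.array([[1.0, 0.0],     # singular "mass" matrix: the zero row marks
              [0.0, 0.0]])    # the constraint, which has no time derivative

# Descriptor form: solve the generalized problem A v = lambda B v directly,
# without analytically eliminating the constraint first.
w, vecs = eig(A, B)

# Constraint modes appear as infinite eigenvalues and are trivially
# discarded; the finite spectrum is the physical one.
finite = w[np.isfinite(w)]
```

The design point is that the constraint mode lands at infinity, where it cannot be mistaken for a spurious finite unstable eigenvalue.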
Exposure to infectious diseases has an unexpected benefit of inhibiting autoimmune diseases and allergies. This is one of many fundamental fitness tradeoffs associated with immune system architecture. The immune system attacks pathogens, but also may (inappropriately) attack the host. Exposure to pathogens can suppress the deleterious response, at the price of illness and the decay of immunity to previous diseases. This “hygiene hypothesis” has been associated with several possible underlying biological mechanisms. This study focuses on physiological constraints that lead to competition for survival between immune system cell types. Competition maintains a relatively constant total number of cells within each niche. The constraint implies that adding cells conferring new immunity requires loss (passive attrition) of some cells conferring previous immunities. We consider passive attrition as a mechanism to prevent the initial proliferation of autoreactive cells, thus preventing autoimmune disease. We see that this protection is a general property of homeostatic regulation and we look specifically at both the IL-15 and IL-7 regulated niches to make quantitative predictions using a mathematical model. This mathematical model yields insight into the dynamics of the “Hygiene Hypothesis,” and makes quantitative predictions for experiments testing the ability of passive attrition to suppress immune system disorders. The model also makes a prediction of an anticorrelation between prevalence of immune system disorders and passive attrition rates.
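The competition-for-a-niche constraint can be illustrated with a deliberately minimal shared-capacity model (our own sketch with made-up parameters, not the paper's IL-7/IL-15 model): every clone grows logistically against a single carrying capacity, so a steady influx of new cells necessarily erodes older clones:

```python
import numpy as np

def niche(pop, influx, capacity=1000.0, r=0.1, dt=0.1, steps=2000):
    """Homeostatic niche: each clone grows at rate r but all clones share one
    carrying capacity, so the total cell number is regulated.  `pop` lists
    initial clone sizes; `influx` adds cells of one new clone each step."""
    pop = np.array(pop, dtype=float)
    pop = np.append(pop, 0.0)                          # slot for the new clone
    for _ in range(steps):
        total = pop.sum()
        pop += dt * r * pop * (1.0 - total / capacity)  # shared logistic niche
        pop[-1] += dt * influx                          # arrival of new cells
    return pop

old = niche([500.0, 500.0], influx=0.0)   # no new clone: old memory persists
new = niche([500.0, 500.0], influx=5.0)   # new clone passively displaces old
```

With zero influx the old clones are untouched; with influx, the total stays near capacity while the established clones shrink, which is the passive attrition mechanism in miniature.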
A primary goal in seismology is to identify constraints arising from the small scale physics of friction and fracture that can provide bounds on seismic hazard and ground motion at the fault scale. Here we review the multiscale earthquake rupture problem and describe a physical model for the deformation of amorphous materials such as granular fault gouge. The model is based on Shear Transformation Zone (STZ) Theory, a microscopic model for plastic deformation. STZ theory ties fault weakening to the evolution of an effective temperature, which quantifies configurational disorder and captures the spontaneous formation and growth of narrow shear bands in the fault gouge.
Daub, E. G., thesis (2009).
The dynamic earthquake problem spans a broad range of length scales, from microscopic grain contacts through faults that are hundreds of kilometers long. A major goal of dynamic earthquake modeling is to develop friction laws that capture the small scale physics and that can also be used to model fault scale rupture. However, friction laws used in studying earthquake rupture are often simply fits to data, and give little physical insight into the rupture process. The goal of this work is to develop a model for the deformation of amorphous materials such as granular fault gouge, and to investigate the dynamics of instabilities at larger scales. The model is based on Shear Transformation Zone (STZ) Theory, a microscopic physical model for plastic deformation in dense amorphous materials such as fault gouge, granular materials, glasses, foams, and colloids. STZ Theory captures fracture and deformation features that are observed in numerical simulations, and remains tractable for modeling friction at larger scales. STZ Theory ties fault weakening to the evolution of an effective temperature, which quantifies the configurational disorder in the gouge and serves as the dynamic state variable in STZ Theory.
STZ Theory predicts logarithmic rate dependence and that the length scale for frictional evolution increases with increasing average strain rate, which are observed in laboratory experiments. Additionally, STZ Theory captures the spontaneous formation and growth of narrow shear bands in the fault gouge. Shear bands within a layer of gouge are observed in many studies of faulting, which indicates that resolving the dynamics of shear banding is important for capturing the small scale physics during earthquake slip.
At the scale of frictional interfaces, we investigate the role of strain localization for stick-slip instabilities in an elastic block slider system. We perform a linear stability analysis to predict the critical value of the spring stiffness when steady sliding becomes unstable, and verify our results through numerical integration. We find that when a shear band forms, steady sliding becomes unstable at a larger spring stiffness.
We also investigate the implications of STZ Theory and strain localization in dynamic earthquake simulations. We compare STZ Theory without strain localization, Dieterich-Ruina (DR) friction, and linear slip-weakening (SW). The dynamic rupture governed by STZ Theory accelerates more rapidly to the limiting wave speed, exhibits a decreased peak slip rate, and transitions to supershear rupture at a lower initial shear stress than equivalent ruptures with DR or SW friction.
For dynamic ruptures where a shear band does form, strain localization alters fault behavior because localization is a mechanism for dynamic weakening. The dynamic weakening of strain localization increases the slip rate during rupture, and also increases the stress drop. We also show that strain localization occurs below seismogenic depths where constitutive properties are rate strengthening due to slip propagating down dip from the seismogenic zone. Our results indicate that the small scale physics occurring within the gouge can have a large scale impact on the dynamics of friction and the propagation of slip on earthquake faults.
Sean P. Stromberg, thesis (2009).
The adaptive immune system can be viewed as a complex system, which adapts, over time, to reflect the history of infections experienced by the organism. Understanding its operation requires viewing it in terms of tradeoffs under constraints and evolutionary history. It typically displays "robust, yet fragile" behavior, meaning common tasks are robust to small changes but novel threats or changes in environment can have dire consequences. In this dissertation we use mechanistic models to study several biological processes: the immune response, the homeostasis of cells in the lymphatic system, and the process that normally prevents autoreactive cells from entering the lymphatic system. Using these models we then study the effects of these processes interacting. We show that the mechanisms that regulate the numbers of cells in the immune system, in conjunction with the immune response, can act to suppress autoreactive cells from proliferating, thus showing quantitatively how pathogenic infections can suppress autoimmune disease. We also show that over long periods of time this same effect can thin the repertoire of cells that defend against novel threats, leading to an age correlated vulnerability. This vulnerability is shown to be a consequence of system dynamics, not due to degradation of immune system components with age. Finally, modeling a specific tolerance mechanism that normally prevents autoimmune disease, in conjunction with models of the immune response and homeostasis we look at the consequences of the immune system mistakenly incorporating pathogenic molecules into its tolerizing mechanisms. The signature of this dynamic matches closely that of the dengue virus system.
A. M. Hermundstad, E. G. Daub, and J. M. Carlson, J. Geophys. Res., 115, B06320, doi:10.1029/2009JB006960 (2010).
We quantify the energy dissipated to heat and to local disorder in a sheared layer of granular fault gouge. Local disorder is modeled using Shear Transformation Zone (STZ) Theory, a continuum model of nonaffine deformation in amorphous solids that resolves spontaneous localization of strain. Strain localization decreases the total energy dissipated during slip. In addition, a fraction of this energy is dissipated to increasing local disorder as the material is sheared, thereby decreasing the amount of energy dissipated as thermal heat. We quantify the heat dissipated per unit area as a function of total slip in the presence and absence of strain localization and test the parameter dependence of these calculations. We find that less heat is dissipated per unit area compared to results obtained using a traditional heuristic energy partition.
We study the impact of strain localization on the stability of frictional slipping in dense amorphous materials. We model the material using Shear Transformation Zone (STZ) Theory, a continuum approximation for plastic deformation in amorphous solids. In the STZ model, the internal state is quantified by an effective disorder temperature, and the effective temperature dynamics capture the spontaneous localization of strain. We study the effect of strain localization on stick-slip instabilities by coupling the STZ model to a noninertial spring slider system. We perform a linear stability analysis to generate a phase diagram that connects the small scale physics of strain localization to the macroscopic stability of sliding. Our calculations determine the values of spring stiffness and driving velocity where steady sliding becomes unstable, and we confirm our results through numerical integration. We investigate both homogeneous deformation, where no shear band forms, and localized deformation, where a narrow shear band spontaneously forms and accommodates all of the deformation. Our results show that at a given velocity, strain localization leads to unstable frictional sliding at a much larger spring stiffness compared to homogeneous deformation, and that localized deformation cannot be approximated by a homogeneous model with a narrower material. We also find that strain localization provides a physical mechanism for irregular stick-slip cycles in certain parameter ranges. Our results quantitatively connect the internal physics of deformation in amorphous materials to the larger scale frictional dynamics of stick-slip.
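The spring slider stability analysis can be illustrated numerically. As a hedged sketch we substitute the classic one-state-variable rate-and-state aging law for STZ theory (an illustration only, not the paper's friction model); it yields the textbook critical stiffness k_c = sigma (b - a) / D_c at which steady sliding loses stability:

```python
import numpy as np

def rhs(state, k, a=0.010, b=0.015, sigma=1.0, d_c=1.0, v_l=1.0):
    """Quasi-static spring slider with a one-state-variable friction law.
    state = (u, phi) = (ln(V / v_l), ln(theta * v_l / d_c)); steady sliding
    sits at u = phi = 0."""
    u, phi = state
    dphi = (v_l / d_c) * (np.exp(-phi) - np.exp(u))              # aging law
    du = ((k * v_l / sigma) * (1.0 - np.exp(u)) - b * dphi) / a  # force balance
    return np.array([du, dphi])

def growth_rate(k, eps=1e-6):
    """Largest real part of the Jacobian eigenvalues at steady sliding,
    computed by central finite differences."""
    jac = np.empty((2, 2))
    for j in range(2):
        d = np.zeros(2)
        d[j] = eps
        jac[:, j] = (rhs(d, k) - rhs(-d, k)) / (2.0 * eps)
    return np.linalg.eigvals(jac).real.max()

# Linear stability predicts instability for k below the critical stiffness
# k_c = sigma * (b - a) / d_c; here k_c = 0.005.
k_c = 1.0 * (0.015 - 0.010) / 1.0
```

Sweeping `growth_rate` over k traces the stability boundary; the paper's phase diagram plays the same role, with localization shifting the boundary to larger stiffness.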
Fire is a worldwide phenomenon that appears in the geological record soon after the appearance of terrestrial plants. Fire influences global ecosystem patterns and processes, including vegetation distribution and structure, the carbon cycle, and climate. Although humans and fire have always coexisted, our capacity to manage fire remains imperfect and may become more difficult in the future as climate change alters fire regimes. This risk is difficult to assess, however, because fires are still poorly represented in global models. Here, we discuss some of the most important issues involved in developing a better understanding of the role of fire in the Earth system.
Pathogenic infection typically results in an immune response and a long-lived population of memory cells. These memory cells confer immunity, so that future infections with the same pathogen often do not cause illness. The memory cells from the primary infection may also yield protection from some other diseases through cross-reactivity. In rare cases the primary infection can result in a vulnerability to heterologous secondary infections. We introduce pathogen induced tolerance (PIT), a mechanism which generates a vulnerability through a primary infection. In PIT, antigen from the primary infection takes part in negative selection. The characteristics of PIT are (1) a normal primary infection, (2) long term immunity to the primary agent, (3) short term cross-reactive immunity, and (4) a decay to vulnerability for some degrees of cross-reactivity. The PIT mechanism develops the vulnerability most rapidly in systems with rapid influx of new lymphocytes (i.e., children). The PIT signature behavior is seen to match the characteristics of the dengue virus system.
We develop two techniques to solve for the spatiotemporal neural activity patterns using Electroencephalogram (EEG) and Functional Magnetic Resonance Imaging (fMRI) data. EEG-only source localization is an inherently underconstrained problem, whereas fMRI by itself suffers from poor temporal resolution. Combining the two modalities transforms source localization into an overconstrained problem, and produces a solution with the high temporal resolution of EEG and the high spatial resolution of fMRI. Our first method uses fMRI to regularize the EEG solution, while our second method uses Independent Components Analysis (ICA) and realistic models of Blood Oxygen Level Dependent (BOLD) signal to relate the EEG and fMRI data. The second method allows us to treat the fMRI and EEG data on equal footing by fitting simultaneously a solution to both data types. Both techniques avoid the need for ad hoc assumptions about the distribution of neural activity, although ultimately the second method provides more accurate inverse solutions.
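The first method's "fMRI regularizes EEG" idea can be sketched as a weighted minimum-norm estimate. The lead field, weights, and parameters below are hypothetical stand-ins for illustration, not the paper's algorithm:

```python
import numpy as np

def fmri_weighted_inverse(L, eeg, bold, lam=1.0):
    """fMRI-regularized EEG source estimate (generic weighted minimum norm):
        min_s ||eeg - L s||^2 + lam * ||W s||^2,
    with W = diag(1 / bold): sources at locations with weak BOLD signal are
    penalized, so estimated activity concentrates where fMRI sees activation."""
    w = 1.0 / (bold + 1e-3)                 # avoid division by zero
    A = L.T @ L + lam * np.diag(w**2)
    return np.linalg.solve(A, L.T @ eeg)

# Toy example: 8 sensors, 20 candidate sources, true activity at source 5.
rng = np.random.default_rng(0)
L = rng.normal(size=(8, 20))                # hypothetical lead field
s_true = np.zeros(20)
s_true[5] = 2.0
eeg = L @ s_true                            # noiseless EEG measurement
bold = np.full(20, 0.01)
bold[5] = 1.0                               # fMRI activation at source 5
s_hat = fmri_weighted_inverse(L, eeg, bold)
```

With 8 sensors and 20 sources the unweighted problem is underconstrained; the fMRI prior is what picks out the correct source among the many that fit the EEG data equally well.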
We present a shear-transformation-zone (STZ) based analysis of numerical simulations by Haxton and Liu [Phys. Rev. Lett. 99, 195701 (2007)]. The extensive Haxton and Liu (HL) data sharply test the basic assumptions of the STZ theory, especially the central role played by the effective disorder temperature as a dynamical state variable. We find that the theory survives these tests, and that the HL data provide important and interesting constraints on some of its specific ingredients. Our most surprising conclusion is that, when driven at various constant shear rates in the low-temperature glassy state, the HL system exhibits a classic glass transition, including super-Arrhenius behavior, as a function of the effective temperature.
Despite inherent difficulties, long-term simulation modeling is one of the few approaches available for understanding fire regime sensitivities to different environmental factors. This paper is the second in a series that documents a new raster-based model of fire growth, HFire, which incorporates the physical principles of fire spread (Rothermel, 1972) and is also capable of extended (e.g., multi-century) simulations of repeated wildfires and vegetation recovery. Here we give a basic description of long-term HFire implementation for a shrubland-dominated landscape in southern California, a study area surrounded by urban development and prone to large, intense wildfires. We examined fire regime sensitivities to different input parameters, namely ignition frequency, fire suppression effectiveness (as measured by a stopping rule based on fire rate of spread), and extreme fire weather event frequency. Modeled outputs consisted of 500-yr series of spatially explicit fire patterns, and we analyzed changes in fire size distributions, landscape patterns, and several other descriptive measures to characterize a fire regime (e.g., fire cycle and rates of ignition success). Our findings, which are generally consistent with other analyses of fire regime dynamics, include a relative insensitivity to ignition rates and a strong influence of extreme fire weather events. Although there are several key areas for improvement, HFire is capable of efficiently simulating realistic fire regimes over very long time scales, allowing for physically based investigations of fire regime dynamics in the future.