Theoretical physics tends to place emphasis on isolated systems. In the instances when it doesn't, the internal driving involved usually arises from thermal or quantum fluctuations. Thermal fluctuations may intuitively be understood, mathematically, as the fluctuations (uncertainty) that arise because we only ever observe coarse grained variables (i.e. we don't track individual trajectories of the underlying deterministic, microscopic dynamics). This separation between the coarse grained variables and the remaining variables that cause the fluctuations may be viewed as a manifestation of an approximate system-environment split. Physical phenomena that exemplify this approximate system-environment split are the existence of thermodynamic behavior (i.e. the existence of the thermodynamic limit -- e.g. see work by Ruelle) and the quantum-classical transition, where decoherence causes quantum probability amplitudes to become classical probabilities (e.g. see work by Hartle and Gell-Mann or W.H. Zurek). In each example, there is a microscopic description with an enormous number of degrees of freedom and a reduced order description that accounts for the macroscopic behavior. Thus, central to a theoretical understanding of such phenomena is knowing how to appropriately construct reduced order models. Two methods that are predominantly used within the physics community are projection operator techniques and renormalization group techniques.

Some of the founding work on projection operator techniques came out of the study of nonequilibrium phenomena (e.g. see work by Mori or Zwanzig). The founding idea of this approach is to project out the degrees of freedom that are considered to be the "environment". Since the theory is statistical in nature, this is realized by integrating the environmental degrees of freedom out of the probability density. The theory of Optimal Prediction (e.g. see the work of Chorin) seems to use and extend some of these ideas.

The renormalization group (RG) was initially developed by people investigating problems arising in condensed matter and many-body physics (e.g. see work by Kadanoff, Wilson, M.E. Fisher, etc.). The main idea of the RG is that one coarse grains the microscopic system in a particular way (usually by just locally rescaling) and then determines the new effective field theory for the coarse grained variables. For instance, within a Lagrangian framework, after coarse graining the system the objective is to discern the form of the new effective coarse grained Lagrangian. In this case, the coarse graining generates a map from the coupling constants of the original Lagrangian to the coupling constants of the coarse grained Lagrangian; this map is what is termed the renormalization group transformation. Since an RG transformation may not be invertible, the evolution of the couplings is governed by a semi-group. The aspect of this approach that makes it particularly useful is that often, after successive RG transformations, the coupling constants get mapped into a finite (small) dimensional subspace.

As we see above, physical methods of constructing reduced order models effectively coarse grain the underlying microscopic theory. However, the great deficiency of these methods leads us to the million dollar question: what is the most appropriate or best way to coarse grain our system?
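To make these two routes to a reduced description slightly more concrete, here is a minimal, textbook-style illustration of each (standard examples of my own choosing, not drawn from the works cited above): the projection approach integrates the environment variables out of the full probability density, while a real-space RG decimation of the zero-field 1D Ising chain maps the nearest-neighbour coupling constant to a renormalized one.

```latex
% (i) Projection: integrate the environment variables y out of the full
%     probability density, leaving a reduced density for the coarse
%     grained variables x
\rho_S(x,t) \;=\; \int \rho(x,y,t)\,\mathrm{d}y
% (ii) Real-space RG: for the 1D Ising chain with reduced Hamiltonian
%     -\beta H = K \sum_i s_i s_{i+1}, summing over every other spin yields
%     a chain of the same form with a renormalized coupling
K' \;=\; \tfrac{1}{2}\ln\cosh(2K)
\qquad\Longleftrightarrow\qquad
\tanh K' \;=\; \tanh^{2} K
```

Iterating the second map drives K toward the fixed point K = 0, which is the RG statement that the 1D Ising chain has no finite-temperature phase transition.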
Both of the methods mentioned above take a method of coarse graining and run with it. The RG doesn't tell us ahead of time how to coarse grain -- it just provides us, after the fact, with a binary answer as to whether the coarse graining was appropriate or not. As a result of this deficiency, there are a multitude of implementations of the RG. Some implementations that especially deserve mention are the Wilsonian RG and the density matrix renormalization group (DMRG -- e.g. see work by S. White). A primary drawback of how the RG is usually used is that it tends only to be applied to homogeneous systems. Consequently, the RG transformations of choice are local rescalings. For homogeneous systems, this choice makes perfect sense by symmetry considerations. However, it provides one with little intuition when investigating inhomogeneous systems (especially since local rescalings are essentially a set of measure zero in the set of all possible RG transformations). Is progress in theoretical physics now stuck due to this obstacle? (Forgive me for such a wildly dramatic exaggeration.) Fortunately, what comes to the rescue is the apparatus developed by a discipline where the investigation of open systems is unavoidable: control theory. (A particularly delightful exposition on control theory is the text by Dullerud and Paganini.)

By necessity, control theory has to judiciously account for where uncertainty comes from. For control systems, uncertainty may arise from internal fluctuations (as physicists deal with), external error and noise, and model uncertainty. One may envision a control system to be roughly made up of an input, a plant, an output, sensors, a controller, and actuators. The plant is a physical system that one wishes to endow with some set of desired features or behaviors. The input might be a signal that one wishes to track; alternatively, it might be some disturbance that we want our combined plant/controller system to reject. The output is some set of data that is measured from the plant (usually detected by the sensors). The sensors measure particular physical quantities of the plant and pass that information on to the controller. The controller is then responsible for ensuring that the plant maintains its desired features or behaviors. It does this by sending instructions to the actuators, which directly interface with the system and elicit the desired response. For instance, the controller may be a computer (with some particular performance objective) and the actuator a current injector. Though the above is a poor caricature, it at least provides a flavor of what constitutes a control system. It is quite remarkable that a controller may be designed to achieve particular performance objectives despite the existence of plant uncertainty and the real engineering problem of imperfect sensors and actuators.

An important part of control theory is system identification and realization. That is, given only the input into and the output out of the plant, how do you model the system so that you can approximate it well enough to control it? Furthermore, can you identify the relevant parameters for your model "online"? Along these lines, it is often useful to be able to characterize which quantities are most observable and controllable. Furthermore, from such data, can you construct reduced order models that will allow you to (robustly) control those state variables?
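As a toy illustration of the system identification problem, the following sketch fits a simple second-order ARX model to input/output records by least squares. Everything here is a made-up example of my own (the "true" plant coefficients, the noise level, and the model order are arbitrary illustration values, not taken from any reference), but it shows the basic move: posit a model structure, then estimate its parameters from nothing more than what went into and came out of the plant.

```python
# Minimal system identification sketch: fit the ARX model
#   y[k] = a1*y[k-1] + a2*y[k-2] + b1*u[k-1] + b2*u[k-2]
# to measured input/output data by least squares.
import numpy as np

rng = np.random.default_rng(1)

# Simulate a plant (unknown to the identifier) driven by a random input.
a_true, b_true = [1.5, -0.7], [0.1, 0.05]   # arbitrary stable example
N = 500
u = rng.standard_normal(N)
y = np.zeros(N)
for k in range(2, N):
    y[k] = (a_true[0] * y[k - 1] + a_true[1] * y[k - 2]
            + b_true[0] * u[k - 1] + b_true[1] * u[k - 2]
            + 0.01 * rng.standard_normal())  # small measurement noise

# Build the regression from the input/output records alone and solve it.
Phi = np.column_stack([y[1:-1], y[:-2], u[1:-1], u[:-2]])
theta, *_ = np.linalg.lstsq(Phi, y[2:], rcond=None)
print("identified [a1, a2, b1, b2]:", theta)  # should be close to the true values
```

A recursive variant of the same fit (e.g. recursive least squares) is what one would use to identify the parameters "online".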
As it turns out, there are actually preferred "coordinates" (state variables) that one may use in order to construct reduced order models of the control system (at least for linear systems). These "coordinates" are essentially just the "right" coarse grained variables that one should be looking at. In other words, control theory provides one with a partial answer to the million dollar question.
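A standard construction of this kind for stable linear systems -- and presumably the sort of thing alluded to here -- is the balanced realization, in which the controllability and observability Gramians are made equal and diagonal; states that are simultaneously hard to reach and hard to observe can then be truncated. Below is a minimal numpy/scipy sketch of balanced truncation; the 6-state plant is a made-up toy system and the function is my own illustration, not something from the references above.

```python
# Minimal sketch of balanced truncation for a stable LTI system
#   x' = A x + B u,   y = C x
# The balanced coordinates are the "preferred coordinates": each state's
# Hankel singular value measures how controllable *and* observable it is.
import numpy as np
from scipy.linalg import cholesky, solve_continuous_lyapunov

def balanced_truncation(A, B, C, r):
    # Gramians: A Wc + Wc A^T + B B^T = 0 and A^T Wo + Wo A + C^T C = 0
    Wc = solve_continuous_lyapunov(A, -B @ B.T)
    Wo = solve_continuous_lyapunov(A.T, -C.T @ C)
    L = cholesky(Wc, lower=True)
    # Hankel singular values and the balancing transformation T
    sig2, U = np.linalg.eigh(L.T @ Wo @ L)
    order = np.argsort(sig2)[::-1]
    sig, U = np.sqrt(sig2[order]), U[:, order]
    T = L @ U / np.sqrt(sig)                      # T = L U Sigma^{-1/2}
    Tinv = np.sqrt(sig)[:, None] * (U.T @ np.linalg.inv(L))
    Ab, Bb, Cb = Tinv @ A @ T, Tinv @ B, C @ T    # balanced realization
    return Ab[:r, :r], Bb[:r, :], Cb[:, :r], sig  # keep the top r states

# Toy example: reduce a random (crudely stabilized) 6-state system to 2 states.
rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6)) - 7.0 * np.eye(6)
B = rng.standard_normal((6, 1))
C = rng.standard_normal((1, 6))
Ar, Br, Cr, hsv = balanced_truncation(A, B, C, r=2)
print("Hankel singular values:", hsv)
```

The Hankel singular values returned above quantify exactly the "most observable and controllable" question raised earlier, and the rate at which they decay tells you how small a reduced order model you can get away with.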