photo by Gino Santa Maria/123RF.com

Reimagining Physics — Part 7

Harrison Crecraft

--

The Reality of Time and Change — Part 1
Physics’ Timeless Universe — Part 2
A Thermocontextual Perspective — Part 3
What is Time? — Part 4
Wavefunction Collapse and Symmetry-Breaking — Part 5
Entanglement and Nonlocality — Part 6
The Arrow of Functional Complexity — Part 7

The Arrow of Functional Complexity

The universe evolved from an initially homogeneous and featureless state to the rich complexity of molecules, planets, and galaxies that we see today. In our own corner of the universe, simple chemicals spontaneously organized themselves into self-replicating forms within as little as 0.1 billion years after the formation of Earth’s oceans, and life took off. We see the arrow of evolution in the fossil record and in the advances of civilization, technology, and economies. Despite its overwhelming presence, however, there is no fundamental explanation for the well-documented tendency of open systems, sustained by external energy sources, to spontaneously organize into complex structures.

In previous parts, I focused on states and their transitions. In Part 7, I shift from states to nonequilibrium systems and their dissipative processes. The TCI establishes a physical foundation for nonequilibrium systems and for their self-organization into increasingly complex dissipative structures.

Dissipative Processes

The TCI [1] establishes the Second Law of thermodynamics and dissipative processes as fundamental. The Second Law says that nonequilibrium systems dissipate exergy, but it says nothing about the process of dissipation. Dissipation can be much more than the direct degradation of exergy to ambient heat; it is the driver of all change, constructive as well as destructive. Nature is replete with examples of systems that spontaneously organize and evolve toward higher organization.

Figure 7–1 shows the conceptual model for a stationary dissipative system. The model assumes a stationary but nonequilibrium environment, which includes one or more sources of exergy or high-exergy components. A system with a stationary environment and stationary sources converges over time to a stationary process of dissipation. The system is stationary, but it is not static, and it is not an actual state: its components constantly flow through the system and irreversibly change. We refer to a stationary dissipative system as a homeostate. A homeostate’s instantaneous properties can fluctuate or cycle, but their time-averaged values are constant.

Figure 7–1. Homeostate Model. A system’s surroundings include exergy source(s), either direct (e.g. sunlight) or indirect, as high-exergy inputs. The system dissipates its exergy supply and outputs ambient heat, Q (squiggly lines). Ambient components in the surroundings are also freely available to the system (smooth lines). At homeostasis, time-averaged inputs of materials and energy equal outputs.

Near equilibrium, energy flows are proportional to gradients. Thermal conductivity, for example, is the constant of proportionality relating conductive heat flow to the temperature gradient. Linearity defines the near-equilibrium regime. A near-equilibrium system converges to a unique steady state.
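
Fourier’s law of heat conduction is the standard expression of this linearity (a textbook relation, not specific to the TCI):

$$\mathbf{q} = -k\,\nabla T$$

Here q is the conductive heat flux and k, the thermal conductivity, is the fixed constant of proportionality: doubling the temperature gradient doubles the heat flow, which is exactly what linearity means.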

Far from equilibrium, linearity breaks down. At a critical temperature gradient, for example, heat flow increases dramatically as a fluid spontaneously reorganizes its heat transport from conduction to convection. Nonlinearity can allow multiple dissipative solutions and multiple homeostates, all consistent with the system’s boundary constraints and conservation laws.

We can express a homeostate as a network of links, pods, and dissipative nodes. Figure 7–2 shows the dissipative network for the Brusselator reaction. The Brusselator is a theoretical model for an oscillating chemical system. It has two external sources, one for component 1 in state A and one for component 2 in state B. Circles denote dissipative nodes, in which components transition from one state to another. Component 1 has two intermediate internal states, X and Y (horizontal lines). Component 2 transitions directly from the source state (B) to the surroundings (state D) with no intermediate internal state.

Figure 7–2. The Brusselator Network Model. Links are the pathways for flows of exergy (wavy line) and components (straight lines) from external sources, through the system, and back to the surroundings. Pods (cylinders) provide storage capacity to accommodate fluctuations. Nodes (circles) represent elementary dissipative transitions. States of components are designated by large letters. States higher on the diagram have higher exergy density. The Brusselator comprises four reaction steps: R1: External source A → X; R2: Y + 2X → 3X; R3: External source B + X → Y + D (discharge); and R4: X → E (discharge). The arrowhead expressions show the reaction rates.

The Brusselator has four dissipative nodes (circles in Figure 7–2). Reactions R1, R2, and R4 (described in the caption) are exergonic transitions of component 1. An exergonic transition is a spontaneous transition to lower exergy (downward arrows in the figure). R3 is a coupled reaction, involving two separate transitions. Node 3B is an exergonic transition for component 2 from state B to state D. The component’s exergy is only partially dissipated, however. Some of the exergy is transferred to node 3A via an exergy link (wavy arrow). Node 3A is an endergonic transition. An endergonic node utilizes exergy to do the work of lifting a low-exergy input “uphill” to a high-exergy output. The utilization of sunlight by plants to convert low-exergy water+carbon dioxide to high-exergy sugar+oxygen is a familiar example.

The Brusselator’s reaction rates, based on simple reaction rate theory and simplifying assumptions, are shown by the arrowhead expressions in Figure 7–2. At steady state, the concentrations of X and Y are fixed (triangle in Figure 7–3). For B greater than 1+A², the steady-state homeostate is unstable. Any perturbation sends the system on a transient path that converges to a periodic homeostate, in which the concentrations of X and Y cycle around a stationary trajectory (Figure 7–3). The Brusselator is an example of a far-from-equilibrium system having two homeostates consistent with its boundary constraints: a steady-state homeostate and a periodic homeostate.

Figure 7–3. Brusselator Homeostates
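
For readers who want to see the two homeostates emerge, the sketch below integrates the standard Brusselator rate equations, dX/dt = A − (B+1)X + X²Y and dY/dt = BX − X²Y. The parameter values are my own illustrative choices; only the equations and the B > 1 + A² instability criterion come from the model described above.

```python
# Minimal sketch: integrate the Brusselator rate equations and confirm that
# for B > 1 + A**2 the trajectory settles onto a limit cycle.
# Parameter values (A=1, B=3) are illustrative choices, not from the article.
import numpy as np
from scipy.integrate import solve_ivp

A, B = 1.0, 3.0    # B > 1 + A**2, so the fixed point (X=A, Y=B/A) is unstable

def brusselator(t, z):
    X, Y = z
    dX = A - (B + 1.0) * X + X**2 * Y   # contributions of R1, R3, R4, and R2
    dY = B * X - X**2 * Y               # contributions of R3 and R2
    return [dX, dY]

# Start near the unstable steady state and integrate long enough to reach the cycle
sol = solve_ivp(brusselator, (0.0, 50.0), [A + 0.01, B / A],
                dense_output=True, max_step=0.01)

# Late in the run the concentrations cycle around a closed trajectory
print(sol.y[:, -5:])
```

With A = 1 and B = 3, the trajectory spirals away from the fixed point (X = A, Y = B/A) and settles onto the closed loop corresponding to the periodic homeostate in Figure 7–3; lowering B below 1 + A² makes the same code converge to the steady-state homeostate instead.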

The Constructive Power of Dissipation

Lord Kelvin recognized the constructive power of dissipation in an article he wrote in 1862 [2]. He began by describing heat death, when all directed activity ceases, as the inevitable end-result of dissipation within a finite universe. He then expressed a much deeper and largely overlooked idea. Backing off from the inevitability of heat death, he continued that the universe is in a state of “endless progress…involving the transformation of potential energy into palpable motion and hence [then] into heat.” In essence, he asserted that a source of exergy tends to defer dissipation by first utilizing the exergy for palpable work before dissipating it to heat.

When Lord Kelvin stated this idea, classical mechanics was deeply entrenched in physical thought. Kelvin’s idea was incompatible with classical mechanics, and it was largely ignored. His idea is fully compatible with the TCI, however, and we formalize his insight:

Kelvin Selection Principle: Of the multiple paths by which a system can take exergy from a source and dissipate it to the surroundings, the path that maximizes the system’s rate of internal work is the most stable.

A system’s internal work is contextually defined by measurable increases in exergy. We can, for example, measure the increase of a component’s exergy as an endergonic node pumps the component to a higher specific exergy.

The Kelvin Selection Principle (KSP) is analogous to the Second Law of thermodynamics, but whereas the Second Law describes the selection and relative stability of states based on a lower level of exergy, the KSP describes the selection and relative stability of dissipative processes based on a higher rate of internal work. The selection of lower-exergy states defines the thermodynamic arrow of time. The selection of higher rates of internal work defines the evolutionary arrow of time.

A simple illustration of the KSP is the stability of convection over conduction. Heat added to a convecting fluid does the work of thermal expansion needed to maintain buoyancy gradients. This is the internal work necessary to create and sustain convective flow. Heat added to a static fluid, in contrast, is dissipated by conductive heat flow, without doing any measurable work. The KSP therefore says that if boundary constraints allow both convection and conduction, the fluid will spontaneously transition from random molecular motions to organized convective flow. This is invariably observed.

A more revealing illustration of the KSP is the Miller-Urey experiment [3]. Stanley Miller and Harold Urey took a mixture of simple gases and stimulated it with electrical sparks. When they analyzed the gas mixture afterward, they found that the gas molecules had reorganized themselves into a variety of amino acids. The gas mixture started in a near-equilibrium state of low exergy. The sparks added exergy to the mixture, but instead of dissipating that exergy directly to heat, the system deferred dissipation and utilized the exergy to do the work of creating high-exergy amino acids.

We can verify that the stability of the Brusselator’s periodic homeostate (the limit cycle in Figure 7–3) is also related to its higher rate of internal work. The steady-state Brusselator has fixed concentrations of X and Y, whereas in the oscillating mode they fluctuate. The work of increasing the concentrations, and hence the exergy, of the system’s component states (X and Y) during their accumulation phases is a measure of the oscillating homeostate’s positive rate of internal work. The KSP asserts that the oscillating mode, with its positive rate of internal work, is more stable, in agreement with perturbation analysis.

We can generalize this conclusion and assert that an oscillating homeostate is more stable than a steady-state homeostate, other differences being relatively small. The spontaneous emergence of resonance, commonly observed in mechanical or fluid mechanical systems, empirically illustrates the higher stability of oscillations over steady state.

Systems of linked oscillators are often observed to synchronize in a process known as entrainment. Christiaan Huygens, the inventor of the pendulum clock, first documented this in 1665, when he observed that clocks mounted on a common support spontaneously synchronize. A system of linked but unsynchronized oscillators can leak exergy between oscillators. This dissipates exergy and subtracts from the system’s internal work of oscillation. If the oscillators are synchronized, however, there are no exergy gradients or associated dissipation, and the internal work rate increases. The KSP thus explains both oscillation and synchronization in terms of a single general principle, independent of a system’s specific mechanics.
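
A minimal sketch of entrainment, using the textbook model of two coupled phase oscillators (the model, frequencies, and coupling strength are my own illustrative choices, not part of the TCI), shows the locking described above:

```python
# Minimal sketch of entrainment: two phase oscillators with slightly different
# natural frequencies lock to a common frequency once they are weakly coupled.
# Frequencies and coupling strength are illustrative assumptions.
import numpy as np

w1, w2 = 1.00, 1.05      # natural frequencies (rad/s)
K = 0.1                  # coupling strength; locking requires 2*K > |w2 - w1|
dt, steps = 0.001, 200_000
th1, th2 = 0.0, np.pi / 2

for _ in range(steps):
    d1 = w1 + K * np.sin(th2 - th1)   # each oscillator is nudged toward the other
    d2 = w2 + K * np.sin(th1 - th2)
    th1 += d1 * dt
    th2 += d2 * dt

# The phase difference settles to a constant value: the oscillators are entrained
print("locked phase difference:", (th2 - th1) % (2 * np.pi))
```

Run as-is, the printed phase difference converges to a constant (about 0.25 radians), indicating that the two oscillators have locked to a common frequency; in the article’s terms, there is no longer a gradient between them for exergy to leak across.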

As a final illustration of the KSP, we consider whirlpools. We commonly observe whirlpools, indicating they are more stable than the less organized and lower-complexity radial flow of water directly toward the drain. A simple model comparing the two homeostates shows that accelerating the water circulating in the whirlpool’s vortex requires a rate of internal work 4,000 times higher than that of the much slower radial flow [4].

It is commonly observed that transitioning from one dissipative process to another increases the rates of exergy supply, dissipation, and entropy production. The transition from conduction to convection clearly exemplifies this. This has led to the idea that “faster is better,” as expressed by the maximum entropy production principle and similar hypotheses. The centrifugal force of a whirlpool’s circulation, however, lowers the water level and pressure over the drain, and this actually lowers its rates of exergy supply and entropy production. The stability of whirlpools disproves maximum entropy production as a general principle. The high stability of a whirlpool results from its higher rate of internal work compared to radial flow.

The Arrows of Evolution

The simplest path to increase a system’s rate of internal work is to expand, which proportionally increases both dissipation and internal work rate. The KSP therefore motivates a dissipative system, or agent, to increase in size or number. This defines the arrow of expansion.

Once an agent reaches its environment’s carrying capacity, it cannot expand any further. It can still increase its internal work, however, by expanding its network of agent-nodes and increasing their interconnectedness and functional complexity. Complexification defers dissipation to do the work of sustaining the agent’s function, thereby increasing its rate of internal work. This defines the arrow of functional complexity. Expansion and complexification are the two paths of evolution.

We formally define functional complexity as the ratio of the internal work rate to the net exergy supply rate:

C_F = Ẇᵢₙₜ / Ẋₛᵤₚ

Functional complexity is a measurable and well-defined property of an agent’s dissipative process. Convection, the oscillating Brusselator, and the whirlpool all have higher functional complexity than their less organized counterparts: conduction, the steady-state Brusselator, and radial drain flow.

For a single pass of exergy and a single endergonic node, an agent’s functional complexity can approach unity in the limit of zero dissipation. However, an agent can increase its functional complexity well beyond unity by reprocessing and recycling exergy via feedback loops or by sustaining a network of endergonic nodes. Figure 7–4 illustrates a simple feedback loop resulting in functional complexity greater than unity. Feedback loops are ubiquitous within biological systems, from cells to ecosystems, leading to higher functional complexity. An agent with a fixed rate of exergy supply can increase its functional complexity beyond unity, with no theoretical limit.

Figure 7–4. Simple Feedback Loop. A component (straight vectors) flows through the system at a rate (J) of one mass unit/sec. Its specific exergy (x̅) equals two units at input and zero at output. The net exergy input rate (J×x̅) and the total dissipation rate (q̇) both equal two energy units/sec. Mass and energy inputs and outputs are balanced.
Exergonic node 2 takes the component with 12 units of specific exergy, dissipates and outputs one unit of energy, and supplies exergy (wavy vector) to endergonic node 1 at a rate of 11 units/sec. Endergonic node 1 uses this exergy to do 10 units/sec of internal work, lifting the component’s specific exergy from 2 units to 12 units, and in the process it dissipates one unit of exergy. The system’s internal work of transition equals 10 units/sec. With 2 units/sec of exergy input, its functional complexity C_F equals 5.
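
The caption’s bookkeeping can be verified in a few lines; the values below simply restate Figure 7–4’s flows and apply the definition of C_F given above.

```python
# Minimal check of Figure 7-4's bookkeeping: functional complexity C_F is the
# ratio of the internal work rate to the net exergy supply rate.
# All values restate the caption; nothing here is newly assumed.
J = 1.0                      # component flow rate (mass units/sec)
x_in, x_out = 2.0, 0.0       # specific exergy at input and at output

exergy_supply_rate = J * (x_in - x_out)   # = 2 energy units/sec
dissipation_rate = 1.0 + 1.0              # one unit/sec at each of the two nodes
internal_work_rate = 10.0                 # node 1 lifts the component from 2 to 12 units

C_F = internal_work_rate / exergy_supply_rate
print(C_F)  # -> 5.0, matching the caption

# At homeostasis every unit of exergy supplied is ultimately dissipated,
# even though ten units/sec of work are recycled internally along the way.
assert exergy_supply_rate == dissipation_rate
```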

If ample resources exist, multiple agents can proliferate and expand, increasing their rates of dissipation and internal work. With finite resources, however, continued growth eventually becomes constrained by other agents’ resource consumption, and agents engage in a zero-sum competition with one another for the resource. At some point an agent reaches its environment’s carrying capacity, and further expansion ceases. Complexification and increasing functional complexity then dominate its continued evolution.
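
As a purely illustrative aside (a standard population model, not part of the TCI), logistic growth captures this saturation:

$$\frac{dN}{dt} = rN\left(1 - \frac{N}{K}\right)$$

Here N is the number or size of agents, r the expansion rate when resources are ample, and K the carrying capacity; growth is nearly exponential while N ≪ K and stalls as N approaches K, which is the point at which complexification takes over as the dominant path.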

Summary

The TCI shifts physics’ focus from reversible states and transitions to stationary dissipative processes. The Second Law and the KSP define opposing goals for a dissipative system. The Second Law guides closed nonequilibrium systems toward states of lower exergy and higher stability. This defines the thermodynamic arrow of dissipation. The KSP guides open dissipative systems toward processes of higher internal work rate and stability. This defines the arrow of evolution. As long as an open system has sufficient resources, evolution predominates, but if the system is cut off from its resources, dissipation takes over.

The KSP has guided the evolution of life through an interplay between Darwinian competition, which increases population sizes and the internal work of reproduction, and cooperation, which increases functional complexity. The arrow of expansion drives a population of agents to its environment’s carrying capacity. Once that capacity is reached, the arrow of functional complexity works at scales ranging from pre-biotic chemicals to ecosystems to increase agents’ self-organization and complexity.

--

Harrison Crecraft

PhD Geoscientist. Exploring physics’ foundations to reveal the realities of time and evolving complexity.