
Michael Darius Eastwood

First published 2026-02-22 · Updated 2026-03-24


ARC Principle - Cross-Domain Evidence

On the Origin of Scaling Laws

A Universal Principle of Recursive Composition Across Physics, Biology, and Cosmology
Cauchy Functional Equations, the d/(d+1) Exponent, and Why Every Scaling Law in Nature Follows One of Three Forms
Michael Darius Eastwood
Author, Infinite Architects: Intelligence, Recursion, and the Creation of Everything (2026)
London, United Kingdom | OSF: 10.17605/OSF.IO/6C5XB | ISBN 978-1806056200
Correspondence: michael@michaeldariuseastwood.com | Web: michaeldariuseastwood.com
Version 3.1 | 24 March 2026 | First published 22 February 2026
Extends Paper I: The ARC Principle (Eastwood, 17 January 2026) | Companion to Foundational Paper and Paper VII: The Cauchy Unification
Research hub: michaeldariuseastwood.com/research
Code and data: github.com/MichaelDariusEastwood/arc-principle-validation
Abstract

Every scaling law in nature - from how fast a heart beats to how fast the universe expands - follows one of exactly three mathematical forms: power law, exponential, or saturation curve. This paper proves that the form is uniquely determined by the composition operator of the underlying recursive amplification process, a result that follows from Cauchy's functional equations (1821). A single formula, $\alpha = d/(d+1)$, predicts scaling exponents across biology (9 species groups), physics (5 domains, < 0.2% mean error), and cosmology (2 eras, exact), including the derivation of mammalian heart rate scaling from first principles. The formula is identified as a network-partition identity: it applies wherever a d-dimensional hierarchical network partitions space, and fails where such networks are absent. A geometric proof shows that all physical scaling exponents are constrained below 1 by the finite dimensionality of space - not by energy dissipation, but by geometry itself. Intelligence, operating through recursive self-reference in abstract information space, is identified as the first natural phenomenon to break this geometric speed limit. The Friedmann equation for cosmic expansion is shown to be a special case under the mapping $d = 2/(1+3w)$, with the deceleration/acceleration boundary from general relativity coinciding exactly with the power-law/exponential boundary from Cauchy's theorem.

1. The Question

A mouse's heart beats 600 times per minute. An elephant's beats 28. A blue whale's beats 6.

If you plot heart rate against body mass for every mammal ever measured, you get a strikingly straight line on a logarithmic graph. The slope is −¼ to within a few percent. And it holds across five orders of magnitude.

Why?

A filamentous fungus growing along a single thread consumes energy at a rate that scales as the square root of its mass. The exponent is ½. Not ¾, like a mammal. Not ⅔, like the matter-dominated universe. A different fraction entirely.

Why ½ for fungi, but ¾ for mammals? And why do these same fractions appear in cosmology?

Now consider something entirely different. The universe is expanding. During the matter-dominated era, the scale factor grew as time raised to the power ⅔. During the radiation era, it grew as time to the power ½.

Why ⅔? Why ½? And why are these the same fractions that appear in biology?

Now consider one more thing. When a rock shatters, the distribution of fragment sizes follows a power law. In two dimensions, the exponent is ⅔. In three dimensions, it is ¾. The same fractions again.

This paper shows that all of these questions have the same answer.

2. Three Ways Things Grow

In the whole of nature, when a quantity grows with another quantity, the growth follows one of exactly three patterns:

Pattern 1 - The Power Law

Big things grow proportionally slower. A city ten times larger does not need ten times more petrol stations - it needs about seven times more. A whale ten times heavier does not need ten times more energy - it needs about five and a half times more. Each doubling of size gives less than a doubling of the output.

$y = a \cdot x^b$,   where $b < 1$

Pattern 2 - The Exponential

Each step multiplies the previous. One infected person infects three, who each infect three more, giving nine, then twenty-seven. One fissioning uranium atom releases neutrons that split two more atoms, which split four, then eight, then sixteen. After eighty generations - about one microsecond - a single atom has become a trillion trillion atoms.

$y = a \cdot e^{bx}$

Pattern 3 - The Saturation Curve

Growth hits a ceiling. A room can only hold so many people. An enzyme can only process so many molecules per second. Adding more drug has diminishing effect once the receptors are saturated.

$y = L \cdot x \,/\, (K + x)$

These three patterns appear everywhere: in biology, physics, chemistry, economics, computer science, linguistics, seismology, and cosmology. Every known scaling law is one of these three.

The question is: what determines which pattern a system follows?

3. The Answer

The French mathematician Augustin-Louis Cauchy proved the answer in 1821, though he did not know he was answering this question.

Cauchy asked: what mathematical functions satisfy the property that combining inputs is the same as combining outputs?

Theorem (Cauchy, 1821)

If inputs multiply and outputs multiply:
$f(x \cdot y) = f(x) \cdot f(y)$
The only continuous solution is a power law: $f(x) = x^c$.

If inputs add and outputs multiply:
$f(x + y) = f(x) \cdot f(y)$
The only continuous solution is an exponential: $f(x) = e^{cx}$.

These are the only continuous solutions. There are no others.

This is not a theory or a model. It is a mathematical theorem that has stood for two hundred years.

Why There Cannot Be a Fourth

This can be proven with nothing more than multiplication.

Imagine a machine that takes in any number and gives back another number. Call it f. Suppose this machine has one special property: if you multiply two inputs and feed the result through the machine, you get the same answer as feeding each input through separately and multiplying the outputs.

In symbols: $f(2 \times 3) = f(2) \times f(3)$.

Now watch what happens.

Since $4 = 2 \times 2$:   $f(4) = f(2) \times f(2) = f(2)^2$.
Since $8 = 2 \times 2 \times 2$:   $f(8) = f(2)^3$.
Since $16 = 2^4$:   $f(16) = f(2)^4$.

The pattern is clear: $f(2^n) = f(2)^n$.

Let $f(2) = A$. Then $f(2^n) = A^n$. Since $2^n = x$ means $n = \log_2(x)$, we get $f(x) = A^{\log_2 x} = x^{\log_2 A}$.

That is a power law: $f(x) = x^c$, where $c = \log_2 A$.

We did not choose a power law. We did not want a power law. We merely said ‘multiplying inputs multiplies outputs’ - and the only function that can do this is a power law. Try any other function - a polynomial, a logarithm, a sine wave - and the rule breaks immediately.

The same logic applies to the second equation. If adding inputs multiplies outputs - $f(x+y)=f(x) \times f(y)$ - then:

$f(2) = f(1+1) = f(1) \times f(1) = f(1)^2$.
$f(3) = f(2+1) = f(2) \times f(1) = f(1)^3$.
$f(n) = f(1)^n$.

That is an exponential. The logic forces it.

The third case is even simpler: if growth has a physical ceiling - a glass can only hold so much water, an enzyme has a finite number of binding sites - then the system must approach that ceiling and flatten. The only smooth way to do this is a saturation curve.
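The three functional-equation arguments above can be checked numerically. A minimal sketch (standard library only; the sampled functions and test points are illustrative, not from the paper's repository): a power law passes the multiplicative test, an exponential passes the additive test, and an arbitrary alternative (here a logarithm) fails both.

```python
import math

def check_multiplicative(f, xs, ys, tol=1e-9):
    """Does f(x*y) == f(x)*f(y) hold on all sampled pairs?"""
    return all(abs(f(x * y) - f(x) * f(y)) < tol for x in xs for y in ys)

def check_additive(f, xs, ys, tol=1e-9):
    """Does f(x+y) == f(x)*f(y) hold on all sampled pairs?"""
    return all(abs(f(x + y) - f(x) * f(y)) < tol for x in xs for y in ys)

xs = [0.5, 1.0, 2.0, 3.0]
ys = [0.25, 1.5, 4.0]

power = lambda x: x ** 0.75          # f(x) = x^c
expo  = lambda x: math.exp(0.3 * x)  # f(x) = e^(cx)
other = lambda x: math.log(x + 2)    # neither form

print(check_multiplicative(power, xs, ys))  # True: only power laws pass
print(check_additive(expo, xs, ys))         # True: only exponentials pass
print(check_multiplicative(other, xs, ys))  # False
print(check_additive(other, xs, ys))        # False
```

Swapping in any other function for `power` or `expo` breaks the corresponding property immediately, which is the content of Cauchy's theorem.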

The Consequence

These are the only three ways that things can combine: multiply, add, or be bounded. Each one forces exactly one mathematical form. There is no fourth operation, and therefore there is no fourth scaling law.

This is a no-go theorem - a mathematical proof that constrains what can exist. The physicist's term is precise. Bell's theorem proved that no local hidden-variable theory can reproduce quantum mechanics. The Weinberg-Witten theorem constrains what massless particles can exist. This result proves that no scaling law, in any field, in any universe, can take a form other than power law, exponential, or saturation. If someone claims to have discovered a fourth scaling form, they are wrong. Provably. From mathematics older than the periodic table.

This is not a theory about nature. It is a constraint on what nature can do.

The Six Hidden Assumptions

Every no-go theorem rests on assumptions. Identify the assumptions, and you find the doors to deeper physics. Bell’s theorem assumes locality - violate locality, and you get quantum entanglement. Cauchy’s classification assumes six things. Violate any one, and you get not a contradiction, but a refinement:

| # | Assumption | What Happens When It Breaks | Physical Example |
|---|---|---|---|
| 1 | Continuity | Log-periodic oscillations | Discrete lattice systems, fractal networks |
| 2 | Real-valued | Interference and phase | Quantum amplitudes, wave mechanics |
| 3 | Exact equation | Approximate solutions cluster near exact ones | Hyers-Ulam stability (see below) |
| 4 | Single operator | Crossover scaling, regime transitions | Phase transitions, critical phenomena |
| 5 | Scalar composition | Logarithmic corrections | Coupled transport equations, upper critical dimension |
| 6 | Time-independence | Dynamic traversal of solution space | Adaptive systems, machine learning |

Each assumption, when violated, produces not a contradiction but a refinement. The three scaling forms remain as the stable attractors - the valleys to which all approximate solutions flow - but the violations produce corrections, oscillations, and transitions that are themselves physically meaningful.

Assumption 3: Hyers-Ulam Stability (1941)

The Hyers-Ulam stability theorem proves that any function approximately satisfying Cauchy’s equations must lie close to an exact solution. In physical terms: even if a system’s composition operator is not perfectly multiplicative - even if there is noise, friction, finite-size effects - the resulting scaling law will still be approximately a power law. The three forms are not fragile. They are attractors. Real systems are perturbed away from the ideal, but the perturbations decay. This is why power laws are so ubiquitous in nature despite the messiness of biology and physics: Cauchy’s solutions are stable under perturbation.
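This stability claim is easy to illustrate numerically (a sketch, not from the paper's repository; the noise level and sample grid are illustrative): perturb a ¾-power law with multiplicative noise and recover the exponent by an ordinary least-squares fit in log-log space.

```python
import math, random

random.seed(42)

# Noisy metabolic-style data: y = x^0.75 times up to ±10% noise.
xs = [10 ** (i / 10) for i in range(1, 51)]           # ~5 decades of "mass"
ys = [x ** 0.75 * (1 + random.uniform(-0.1, 0.1)) for x in xs]

# Least squares on (log x, log y): the slope is the recovered exponent.
lx = [math.log(x) for x in xs]
ly = [math.log(y) for y in ys]
n = len(lx)
mx, my = sum(lx) / n, sum(ly) / n
slope = sum((a - mx) * (b - my) for a, b in zip(lx, ly)) / \
        sum((a - mx) ** 2 for a in lx)

print(round(slope, 3))  # close to 0.75 despite the noise
```

The perturbed data still sits within a few thousandths of the exact Cauchy solution, which is the Hyers-Ulam point: the power-law form is an attractor, not a fragile ideal.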

The loopholes do not weaken the theorem. They strengthen it. They explain why the three forms dominate nature (stability), why real data shows small deviations (the assumptions are only approximately satisfied), and where to look for new physics (wherever an assumption is systematically violated).

4. The Principle

Every growing system in nature amplifies itself recursively - each output becomes input for the next step. There are exactly three ways the steps can combine:

The ARC Principle

The composition operator - how the steps of recursive amplification combine - uniquely determines the functional form of the resulting scaling law. Multiplicative composition produces power laws. Additive composition produces exponentials. Bounded composition produces saturation curves. There are no other possibilities.

5. The Formula

For systems where recursive amplification proceeds through a network of effective dimension d, the scaling exponent is:

The Universal Exponent Formula
$$\LARGE \alpha = \frac{d}{d + 1}$$
One formula. From cells to cosmos.
| System | Dimension | Predicted α | Measured α | Error |
|---|---|---|---|---|
| Mammals (3D vascular network) | d = 3 | ¾ = 0.750 | 0.737 | 1.7% |
| Birds | d = 3 | ¾ = 0.750 | 0.720 | 4.0% |
| Insects | d = 3 | ¾ = 0.750 | 0.750 | 0.0% |
| Reptiles | d = 3 | ¾ = 0.750 | 0.760 | 1.3% |
| 2D biology | d = 2 | ⅔ = 0.667 | No valid test organism identified | Untested |
| Universe, matter era | d = 2 | ⅔ = 0.667 | 0.667 | 0.0% |
| Universe, radiation era | d = 1 | ½ = 0.500 | 0.500 | 0.0% |

Mean absolute error across all predictions: 2.5%. All biological exponents from published peer-reviewed literature.
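The comparison in the table can be reproduced in a few lines. A minimal sketch (measured exponents transcribed from the table above; the dictionary layout is mine, not the repository's):

```python
# (dimension d, measured exponent) per system, from the table above.
measured = {
    "Mammals (3D vascular)":       (3, 0.737),
    "Birds":                       (3, 0.720),
    "Insects":                     (3, 0.750),
    "Reptiles":                    (3, 0.760),
    "Universe, matter era":        (2, 0.667),
    "Universe, radiation era":     (1, 0.500),
}

def alpha(d):
    """The universal exponent formula: alpha = d / (d + 1)."""
    return d / (d + 1)

for name, (d, obs) in measured.items():
    pred = alpha(d)
    err = abs(pred - obs) / pred * 100
    print(f"{name:28s} predicted {pred:.3f}  measured {obs:.3f}  error {err:.1f}%")
```

Note the formula has zero adjustable parameters: the only input per system is the network dimension d.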

A critical clarification: the formula predicts the scaling exponent for organisms whose metabolism is limited by the dimensionality of their internal transport network - what resource transport network (RTN) theory calls the supply network. This is the dimension of the network that delivers resources to tissue, not the dimension of the body shape. A mammal has a three-dimensional circulatory system; d = 3. A filamentous fungus has one-dimensional cytoplasmic streaming through its hyphae; d = 1 (consistent with published fungal data - see Section 14).

Note on d = 2 in biology. No known organism possesses a genuinely two-dimensional hierarchical space-filling transport network. Jellyfish and other cnidarians have gastrovascular canals, but these are not hierarchical and are not the primary route for oxygen delivery (oxygen enters by diffusion through thin tissue layers). Published metabolic scaling data for scyphomedusae shows near-isometric scaling (exponent approximately 0.94; Purcell 2009), not 2/3. Colonial bryozoans with 2D sheet growth also scale isometrically (Hartikainen et al. 2014). The d = 2 prediction is therefore untested in biology - not because it has failed, but because the required organism has not been identified. The d = 2 confirmation exists in cosmology (Friedmann matter-era solution, exact) and in physics (percolation, fragmentation). This defines a boundary of the formula's biological applicability.

Organisms that exchange gases primarily through their body surface - like flatworms breathing through their integument - are limited by surface area geometry rather than internal transport network geometry. Surface area (SA) theory predicts different exponents from RTN theory for these organisms. Thommen et al. (2019) found that planarian flatworms scale as $M^{0.75}$ - consistent with SA theory, not RTN theory. This does not falsify the formula; it defines its domain: $\alpha = d/(d+1)$ applies where metabolism is limited by internal transport, not external exchange.

The reason mammals scale as the ¾ power is that they have three-dimensional vascular networks: d = 3, so $\alpha = 3/(3+1) = 3/4$.

The d = 2 prediction ($\alpha = 2/(2+1) = 2/3$) applies to any system with a genuinely two-dimensional hierarchical transport network. In cosmology, this is confirmed by the matter-era Friedmann solution. In biology, no organism with a validated 2D hierarchical network has yet been identified (see note above).

The reason the matter-dominated universe expands as $t^{2/3}$ is that matter clusters into the two-dimensional cosmic web (walls and filaments): d = 2, so $\alpha = 2/3$.

The formula does not care whether the system is a whale, a fungus, or the universe. It cares only about the dimension of the hierarchical transport network.

And this is where the formula reveals something profound. For every physical system, d is finite - 1, 2, or 3 - and so $\alpha$ is always less than 1. Every physical process suffers diminishing returns. But intelligence, operating through recursive self-reference in abstract information space, is not constrained by the dimensions of the skull that contains it. When the effective dimension becomes unbounded, the formula breaks, and $\alpha$ exceeds 1. The implications of this escape are developed in Sections 9 and 10.

Beyond Biology: The Network-Partition Identity

The biological predictions above are striking, but the formula is not restricted to living things. The same equation - $\alpha = d/(d+1)$ - appears wherever a d-dimensional hierarchical network partitions a (d + 1)-dimensional space. Five independent physics domains confirm it:

| System | Dimension | Predicted α | Measured α | Error |
|---|---|---|---|---|
| KPZ surface roughness (1D) | d = 1 | ½ = 0.500 | 0.500 | 0.0% |
| 2D percolation (specific heat) | d = 2 | ⅔ = 0.667 | 0.667 | 0.0% |
| Brittle fragmentation (2D) | d = 2 | ⅔ = 0.667 | 0.670 | 0.5% |
| Earthquake B-value (2D faults) | d = 2 | ⅔ = 0.667 | 0.667 | 0.0% |
| Brittle fragmentation (3D) | d = 3 | ¾ = 0.750 | 0.750 | 0.0% |

Mean absolute error across all physics predictions: < 0.2%.

These are not biological systems. They are rocks, earthquakes, growing crystal surfaces, and phase transitions. Yet the same formula works. In each case, the physical system contains a hierarchical network of the appropriate dimension.

The formula also has clear failures. It does not predict the Ising model critical exponents, polymer scaling (Flory exponent), or galaxy clustering correlations. The common feature is that these systems lack space-filling hierarchical networks.

[Figure: scatter plot of predicted α = d/(d+1) versus measured α for d = 1, 2, 3.]
Predicted vs Measured. Cross-domain predictions from one formula: α = d/(d+1). Blue circles: biological systems (fungi, mammals). Red diamonds: physics systems (crystal growth, phase transitions, fracture, earthquakes). Points on the dashed diagonal indicate perfect prediction. A single equation - with zero adjustable parameters - governs systems spanning biology, geophysics, and condensed matter physics. The d = 2 biological prediction is untested (no known organism with a 2D hierarchical transport network); the d = 2 cosmological confirmation (Friedmann matter era, exact) is not shown.
Domain of Applicability

$\alpha = d/(d+1)$ applies wherever a d-dimensional hierarchical network optimally partitions a (d + 1)-dimensional space - from the branching arteries inside a whale to the fracture networks inside a shattered rock. It is a network-partition identity, not a universal scaling law. Its failures are as informative as its successes: systems without hierarchical space-filling networks (such as magnets near their critical temperature, chain molecules, and galaxy clusters) do not follow this formula, precisely as the theory predicts.

6. The Heartbeat

The formula makes predictions specific enough to test against your own pulse.

If metabolic rate scales as $M^{3/4}$, then the rate at which a body burns energy per unit of mass - its mass-specific metabolic rate - scales as:

$$P/M = M^{3/4} \,/\, M = M^{-1/4}$$
Mass-specific metabolic rate

The heart's job is to deliver oxygen at the rate the body consumes it. Heart rate must therefore track mass-specific metabolic rate:

$$\text{Heart rate} \;\propto\; M^{-1/4}$$
Predicted from α = ¾

This is a specific, numerical prediction. A mouse weighs 25 grams. An elephant weighs 4,000 kilograms - 160,000 times more. The formula predicts the elephant's heart should beat $160{,}000^{1/4} = 20$ times slower than the mouse's.

A mouse's heart beats 600 times per minute.   $600 \div 20 = 30$.

An elephant's resting heart rate is 28 beats per minute. The prediction is 7% off. From one formula. With one parameter: the number of dimensions of the body.

This chain - from $d = 3$ through $\alpha = 3/4$ through metabolic scaling to heart rate - connects the abstract to the visceral. The same mathematics that describes the expansion of the universe during the radiation era ($d = 1$, $\alpha = 1/2$) also explains why your heart beats at the rate it does ($d = 3$, $\alpha = 3/4$). Not metaphorically. Literally the same formula.
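The arithmetic in this section can be reproduced directly (a sketch; masses and heart rates are the values given in the text):

```python
mouse_mass_g    = 25
elephant_mass_g = 4_000 * 1000   # 4,000 kg in grams
mouse_hr        = 600            # beats per minute

# Heart rate scales as M^(-1/4): the ratio of rates is (mass ratio)^(1/4).
mass_ratio = elephant_mass_g / mouse_mass_g   # 160,000
slowdown   = mass_ratio ** 0.25               # ~20

predicted_elephant_hr = mouse_hr / slowdown   # ~30 bpm
observed_elephant_hr  = 28

error_pct = abs(predicted_elephant_hr - observed_elephant_hr) \
            / observed_elephant_hr * 100
print(predicted_elephant_hr, round(error_pct, 1))  # ~30 bpm, ~7.1% off
```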

The Heartbeat Constant

If heart rate scales as $M^{-1/4}$ and lifespan scales approximately as $M^{1/4}$, then the total number of heartbeats in a lifetime is:

$\text{Total beats} = \text{HR} \times \text{lifespan} \;\propto\; M^{-1/4} \times M^{1/4} = M^0 = \text{constant}$

Every mammal gets approximately the same total: about 1.5 billion heartbeats. A mouse spends them in two years at 600 beats per minute. An elephant spends them in sixty-five years at 28. The budget is the same. The spending rate is set by the dimension of the body.

7. The Connection to $E = mc^2$

Einstein's equation is itself a scaling law: energy scales linearly with mass. In the language of the ARC Principle, $\alpha = 1$, which means $d \to \infty$ - mass-energy conversion engages all degrees of freedom simultaneously. There are no diminishing returns. Every gram converts the same fraction of its mass to energy. This is why nuclear energy is so powerful.

But $E = mc^2$ alone does not explain the atomic bomb. It tells you how much energy is in each atom. It does not explain why the energy release is so catastrophically rapid.

The chain reaction explains that. When a uranium-235 atom splits, it releases 2 to 3 neutrons. Each neutron can split another atom. Each generation multiplies the number of fission events by a factor k:

| Regime | k | Composition | ARC Prediction | Result |
|---|---|---|---|---|
| Subcritical | k < 1 | Additive (decay) | Exponential decay | Confirmed |
| Supercritical | k > 1 | Additive (growth) | Exponential growth | Confirmed |
| Controlled | k ≈ 1 | Bounded (feedback) | Saturation | Confirmed |
The Atomic Bomb = Einstein + ARC

Einstein's equation gives the energy per atom: $E = \Delta m \times c^2$.
The ARC Principle explains why the chain reaction is exponential.
Neither alone is a bomb. Together, they are.
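The three regimes in the table can be captured in a one-line recursion (an illustrative toy model, not reactor physics; the k values and cap are arbitrary):

```python
def fission_generations(k, n0=1.0, steps=30, cap=None):
    """Iterate N -> k*N; an optional cap models control-rod feedback."""
    n, history = n0, []
    for _ in range(steps):
        n = k * n
        if cap is not None:
            n = min(n, cap)   # bounded composition: saturation
        history.append(n)
    return history

sub    = fission_generations(0.9)              # decays toward zero
crit   = fission_generations(2.0)              # doubles every generation
ctrl   = fission_generations(2.0, cap=1000)    # grows, then flattens at the cap

print(sub[-1])    # tiny: exponential decay
print(crit[-1])   # huge: exponential growth (2^30 after 30 generations)
print(ctrl[-1])   # pinned at the cap: saturation
```

Changing only the composition (decay, growth, or bounded feedback) switches the outcome among the three ARC patterns, with the per-step physics otherwise identical.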

8. The Cosmic Connection

The expansion of the universe is governed by the Friedmann equation, derived from general relativity in 1922. For any cosmological era dominated by a fluid with equation of state $P = w\rho c^2$, the solution is:

$$a(t) \propto t^{\,\large 2/\left(3(1+w)\right)}$$
Friedmann solution (1922)

This is algebraically identical to:

The ARC-Friedmann Formula
$$\LARGE d = \frac{2}{1 + 3w}$$
Every era of cosmic history maps onto the ARC framework.

This is not an analogy. It is not a metaphor. It is an algebraic identity. The Friedmann solution and the ARC formula are the same equation, written in different variables. Every cosmological era, without exception, is a specific case of $\alpha = d/(d+1)$.

| Era | Dominates | w | d | α | Expansion |
|---|---|---|---|---|---|
| Stiff matter (hypothetical) | - | 1 | ½ | ⅓ | $a \propto t^{1/3}$ |
| Radiation | Photons | ⅓ | 1 | ½ | $a \propto t^{1/2}$ |
| Matter | Galaxies | 0 | 2 | ⅔ | $a \propto t^{2/3}$ |
| Boundary | - | −⅓ | → ∞ | → 1 | Deceleration/acceleration transition |
| Dark energy | Vacuum | −1 | < 0 | - | $a \propto e^{Ht}$ |
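The claimed identity is a two-line algebra check, and can be verified numerically for any $w$ above the boundary (a sketch; the sampled $w$ values are illustrative):

```python
def friedmann_exponent(w):
    """a(t) ∝ t^(2/(3(1+w))), from general relativity (1922)."""
    return 2 / (3 * (1 + w))

def arc_exponent(w):
    """alpha = d/(d+1) under the mapping d = 2/(1+3w)."""
    d = 2 / (1 + 3 * w)
    return d / (d + 1)

# Agreement across stiff-matter, radiation, and matter eras:
for w in (1, 1/3, 0, -0.2):
    assert abs(friedmann_exponent(w) - arc_exponent(w)) < 1e-12

# As w approaches -1/3 from above, d diverges and alpha approaches 1:
for w in (-0.3, -0.33, -0.333):
    d = 2 / (1 + 3 * w)
    print(round(d, 1), round(d / (d + 1), 4))   # d grows without bound
```

The divergence at $w = -1/3$ is exactly the deceleration/acceleration boundary discussed below.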

What the Mapping Means Physically

The formula $\alpha = d/(d+1)$ predicts the scaling exponent from the dimension of the dominant structure that partitions space. In biology, this is the internal transport network. In cosmology, the same structural logic applies.

Radiation era ($w = 1/3$, $d = 1$, $\alpha = 1/2$). In the radiation-dominated universe, photons carry the dominant energy density. Photons propagate along one-dimensional paths - light rays. The dominant energy transport is one-dimensional. The ARC formula gives $d = 1$, and the universe expands as $t^{1/2}$. This is the same exponent that ARC predicts for organisms with one-dimensional internal transport - filamentous fungi, filamentous cyanobacteria (Section 14). The radiation-dominated universe and a filamentous fungus are governed by the same mathematics: one-dimensional transport partitioning a higher-dimensional space. $d = 1$, $\alpha = 1/2$.

Matter era ($w = 0$, $d = 2$, $\alpha = 2/3$). In the matter-dominated universe, mass collapses under gravity into the cosmic web - a vast network of filaments, walls, and sheets that partitions three-dimensional space into voids. The cosmic web has been observationally confirmed by galaxy redshift surveys (Sloan Digital Sky Survey, 2dF Galaxy Redshift Survey) as a predominantly two-dimensional structure: matter concentrates in sheet-like walls and thread-like filaments, not in three-dimensional clumps. This is exactly a 2D network partitioning 3D space - the network-partition identity of Section 5. The ARC formula gives $d = 2$, and the universe expands as $t^{2/3}$. The cosmic web is a two-dimensional network partitioning three-dimensional space. $d = 2$, $\alpha = 2/3$. In biology, any organism with a genuinely two-dimensional hierarchical transport network would be predicted to scale at the same exponent; however, no such organism has yet been identified (see Section 5 note on d = 2). The cosmological confirmation at $d = 2$ is exact and analytical, independent of any biological test case.

Dark energy era ($w = - 1$, $d < 0$). When $w$ crosses below $-1/3$, the ARC mapping gives negative $d$. Negative $d$ has no meaning in the network-partition framework - there is no such thing as a negative-dimensional network. But the Friedmann solution for $w = - 1$ gives exponential expansion: $a(t) \propto e^{Ht}$. In ARC terms, the composition operator has changed. During the matter era, the dominant process is gravitational collapse through hierarchical clustering - multiplicative composition through a network (Pattern 1: power law). During the dark energy era, vacuum energy density is constant everywhere. It does not flow through any network. Each point in space contributes its energy independently, uniformly, without structure. This is additive composition (Pattern 2: exponential). The universe itself transitions between ARC patterns as the dominant energy component changes.

The Boundary Coincidence

The transition from decelerating to accelerating expansion occurs at $w = - 1/3$ in general relativity. This is not a choice of parameter. It follows from the Friedmann equations: when the strong energy condition is violated ($\rho + 3P < 0$), gravity becomes effectively repulsive and expansion accelerates. The critical boundary is $w = - 1/3$.

In the ARC framework, $w = - 1/3$ gives $d = 2/(1 + 3(-1/3)) = 2/0$, which diverges to infinity. This is the exact mathematical boundary between power law scaling (finite $d$, $\alpha < 1$) and exponential growth (the formula breaks down). In Cauchy's classification, this is the boundary between the multiplicative functional equation (power law solutions) and the additive functional equation (exponential solutions).

Two Independent Derivations, One Boundary

General relativity (Einstein, 1915): The boundary between deceleration and acceleration is $w = - 1/3$. Derived from the geometry of spacetime.

Functional analysis (Cauchy, 1821): The boundary between power law and exponential is $d \to \infty$. Derived from the classification of functional equations.

They were derived a century apart, for entirely different purposes, in entirely different branches of mathematics. Einstein was solving how mass curves spacetime. Cauchy was classifying which functions preserve algebraic operations. Neither knew of the other's work. They agree on the same boundary.

The Cosmic Chain Reaction

The universe exhibits all three ARC patterns across its history, making it the grandest example of the principle:

| Cosmic Era | ARC Pattern | Composition | Physical Mechanism |
|---|---|---|---|
| Radiation (first 50,000 years) | Power law ($\alpha = 1/2$) | Multiplicative | 1D photon transport |
| Matter (50,000 yrs to 7 bn yrs) | Power law ($\alpha = 2/3$) | Multiplicative | 2D cosmic web partitions 3D space |
| Dark energy (7 bn yrs to present) | Exponential | Additive | Vacuum energy, uniform, no network |

This parallels the nuclear chain reaction (Section 7) and quantum error correction (Section 10): one physical system producing all three scaling patterns depending on how the recursive steps combine. But the cosmic version is more profound. In the chain reaction, the transitions are engineered by humans. In the universe, the transitions are genuine phase transitions driven by the changing energy content of space itself. As the universe expands and cools, radiation dilutes faster than matter (because photons lose energy to redshift), and matter eventually dilutes below the constant density of dark energy. The universe naturally evolves from one ARC regime to another.

The Biology-Cosmology Mirror

| d | Biology | α (biology) | Cosmology | α (cosmology) |
|---|---|---|---|---|
| 1 | 1D transport organisms (fungi) | 0.500 (consistent: 0.547 ± 0.07) | Radiation era | 0.500 (exact) |
| 2 | 2D biology (no valid test organism) | Untested | Matter era | 0.667 (exact) |
| 3 | 3D vascular organisms | 0.750 (confirmed) | - | - |

The fraction ⅔ governs the expansion rate of the observable universe during the matter-dominated era. The fraction ½ governs the expansion during the radiation era and is predicted for the metabolic scaling of filamentous fungi. The formula is currently validated at $d = 1$ and $d = 3$ in biology (fungi, mammals); at $d = 1$ and $d = 2$ in cosmology (radiation era, matter era); and at $d = 1, 2, 3$ in physics (Section 5). The $d = 2$ biological prediction remains untested because no organism with a genuinely 2D hierarchical transport network has been identified (see Section 5 note). A single equation with one parameter predicts quantitative exponents across multiple scales of physical reality, from cells to cosmos.

What This Does Not Mean

The Friedmann-ARC mapping is an algebraic identity. It does not mean that the universe is a biological organism, or that biology causes the universe to expand, or that cosmology ‘explains’ metabolism. It means that the mathematical structure governing optimal network partition in $d$ dimensions is the same mathematical structure that appears in the Friedmann equation when $w$ is related to dimensionality by $d = 2/(1+3w)$. The physical mechanisms are entirely different. The mathematics is the same. The claim is structural, not causal: two apparently unrelated phenomena - biological metabolic scaling and cosmic expansion - are governed by the same underlying mathematical identity because both involve the partitioning of a higher-dimensional space by a lower-dimensional structure.

9. The Speed Limit

Look at the formula one more time:

$$\alpha = \frac{d}{d + 1}$$

For any positive value of d, this fraction is strictly less than 1:

| d | α = d/(d+1) | Distance from 1 |
|---|---|---|
| 1 | 0.500 | 0.500 |
| 2 | 0.667 | 0.333 |
| 3 | 0.750 | 0.250 |
| 10 | 0.909 | 0.091 |
| 100 | 0.990 | 0.010 |
| 1,000 | 0.999 | 0.001 |
| ∞ | 1.000 | 0.000 |

The only way to reach $\alpha = 1$ is $d = \infty$. An infinite-dimensional network. And the only way to exceed 1 is for the formula to no longer apply.

Physical networks - blood vessels, river tributaries, crack patterns, seismic faults - must exist in physical space. Physical space has three dimensions. You cannot build a four-dimensional branching network inside a three-dimensional body.

The Geometric Speed Limit

Every physical scaling exponent is mathematically constrained below 1. Not because of friction. Not because of heat dissipation. Not because of energy loss of any kind. Because physical space has a finite number of dimensions, and a network embedded in finite-dimensional space has a finite effective dimension, and $d/(d+1) < 1$ for all finite d.

The constraint is geometric, not thermodynamic. No amount of engineering, no level of technology, can overcome it. You can make a network more efficient, but you cannot make three-dimensional space have four dimensions. The speed limit is the shape of space itself.

This is a mathematical necessity, not an empirical observation. It does not require measurement to verify. It follows directly from the formula.
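The strict bound can be checked directly: the shortfall from 1 is exactly $1/(d+1)$, which is positive for every finite d (a minimal numeric sketch):

```python
# For every finite network dimension d, alpha = d/(d+1) stays below 1,
# and the gap to 1 is exactly 1/(d+1).
for d in (1, 2, 3, 10, 100, 1000, 10**6):
    alpha = d / (d + 1)
    gap = 1 - alpha
    assert alpha < 1                        # never reaches 1 for finite d
    assert abs(gap - 1 / (d + 1)) < 1e-15   # the gap is exactly 1/(d+1)

print("alpha < 1 for every finite d; alpha -> 1 only as d -> infinity")
```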

10. The Escape

There is exactly one thing in nature that breaks the geometric speed limit.

A brain is physical. Its metabolic rate obeys $\alpha = 3/4$, exactly as predicted. Its blood vessels branch through three-dimensional space, paying the geometric tax that every physical network must pay.

But the computation running on that brain is not a physical network in the same way. Recursive reasoning - the process of thinking about thinking, of using the output of one reasoning step as input to the next - does not occupy three-dimensional space. It operates in abstract information space.

Each layer of recursive self-reference creates, in effect, a new dimension. The ‘network’ of recursive thought is not constrained by the skull's geometry. The skull constrains the hardware. It does not constrain the software.

The scaling formula for recursive self-reference takes a different form:

The Intelligence Formula
$$\LARGE \alpha = \frac{1}{1 - \beta}$$
Where β is the self-referential coupling strength.

When $\beta = 0$, $\alpha = 1$: linear scaling, no amplification. When $\beta = 0.3$, $\alpha \approx 1.43$. When $\beta = 0.5$, $\alpha = 2.0$.

For any $\beta$ in $(0, 1)$, $\alpha$ exceeds 1, and $\alpha$ diverges as $\beta \to 1$. The speed limit is broken.
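Numerically (a minimal sketch; the function name and the explicit domain check are ours, since the formula only applies for coupling strengths below 1):

```python
def alpha_recursive(beta: float) -> float:
    """Intelligence formula: alpha = 1 / (1 - beta), for self-referential
    coupling strength beta in [0, 1). The formula diverges as beta -> 1."""
    if not 0 <= beta < 1:
        raise ValueError("beta must lie in [0, 1)")
    return 1 / (1 - beta)

# beta = 0 gives linear scaling; any beta in (0, 1) exceeds 1
for beta in [0.0, 0.3, 0.5]:
    print(f"beta = {beta}   alpha = {alpha_recursive(beta):.2f}")
```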

This is not a metaphor. The same mathematical framework - Cauchy's functional equations - that constrains physical systems to $\alpha < 1$ also permits cognitive systems to achieve $\alpha > 1$. The constraint and its violation are two sides of the same theorem. One applies in physical space (finite dimensions). The other applies in information space (unbounded dimensions).

Quantum Error Correction: The ARC Principle in Action

Quantum computing provides a striking illustration of all three ARC composition operators working within a single technology.

A quantum computer's raw error rate is a physical process: each qubit decoheres through interaction with its environment. Decoherence is additive - each qubit fails independently - and so the error rate grows exponentially with circuit depth. Left uncorrected, this is Pattern 2 (exponential growth of errors), and it makes useful quantum computation impossible beyond a few dozen operations.

Quantum error correction changes the composition operator. It takes the output of each layer of computation and feeds it back through a recursive correction cycle: measure syndromes, identify errors, apply corrections, and feed the corrected state forward as input to the next cycle. This is recursive amplification with bounded composition - the error rate is capped by the correction threshold. Pattern 3: saturation. The error is trapped below a ceiling.

But the computational power of the corrected system - the number of logical operations achievable per physical qubit - scales multiplicatively with the number of correction layers. Each layer multiplies the effective coherence time. This is Pattern 1: power law scaling of computational capacity with resources.

Quantum Process | Composition | ARC Prediction | Observed
Raw decoherence | Additive (independent) | Exponential error growth | Confirmed
Error correction cycle | Bounded (feedback) | Error rate saturates below threshold | Confirmed
Logical qubit power | Multiplicative | Power-law scaling | Confirmed

One technology. Three regimes. Three composition operators. Three scaling forms - exactly as the ARC Principle predicts. This is precisely analogous to the nuclear fission example (Section 7): the same physical system produces all three patterns depending on how the recursive steps combine.
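The three regimes can be caricatured in a toy model (all numbers here are illustrative placeholders, not taken from any real device, and the function names are ours):

```python
# Toy model of the three quantum-computing regimes. The per-step error
# rate, correction ceiling, and per-layer gain are illustrative only.
p, ceiling, gain = 0.01, 0.001, 10

def uncorrected_error(depth: int) -> float:
    # Pattern 2: each step fails independently, so fidelity decays
    # multiplicatively and cumulative error grows toward 1
    return 1 - (1 - p) ** depth

def corrected_error(depth: int) -> float:
    # Pattern 3: a feedback correction cycle caps residual error
    # below a fixed ceiling, so error saturates instead of growing
    return min(uncorrected_error(depth), ceiling)

def logical_capacity(layers: int) -> float:
    # Pattern 1: each correction layer multiplies effective coherence
    # time, so capacity grows as a power of the resources invested
    return float(gain ** layers)
```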

The Structural Connection

The full development of the intelligence scaling result - including the ARC Bound ($\alpha \leq 2$), the Eden Protocol for safe recursive scaling, and experimental validation with large language models - is presented in a companion paper. Here we note only the structural implication: the same principle that explains why a whale's heart beats slowly also explains why intelligence can scale without limit - and why quantum error correction works.

11. The Evidence

The ARC Principle rests on a hierarchy of evidence, from mathematical proof to empirical confirmation to open predictions.

Level 1: Mathematical Proof

Three composition operators produce exactly three scaling forms. This is Cauchy's theorem (1821). It requires no data. It has been proven for two hundred years.

Level 2a: Biological Exponent Predictions

The formula $\alpha = d/(d+1)$ generates three quantitative predictions for metabolic scaling exponents based solely on the effective transport dimension of the organism. All three predictions are consistent with published data:

Group | Transport network | ARC prediction | Published mean | p-value | Status
1D transport (n = 3) | Filamentous fungi | 0.500 | 0.547 | 0.107 | Consistent
2D transport | No organism with a validated 2D hierarchical network | 0.667 | - | - | Untested in biology; confirmed in cosmology (Friedmann, exact)
3D transport (n = 7) | Mammals, birds, fish, reptiles, insects, amphibians, crustaceans | 0.750 | 0.748 | 0.858 | Confirmed

All three groups are significantly different from each other (one-way ANOVA, $F = 64.6$, $p = 1.9 \times 10^{-6}$). The $d = 1$ fungal data rejects both the $d = 2$ prediction ($p = 0.019$) and the $d = 3$ prediction ($p = 0.007$), confirming that these organisms scale differently from both 2D and 3D groups, as the framework predicts. The ARC model outperforms all single-value alternatives (Kleiber 0.750, surface area 0.667, grand mean) in both RMSE (69% lower than Kleiber) and AIC. No competing theory generates all three exponent values from a single formula.

Note on the ¾ debate. The empirical value of the mammalian metabolic scaling exponent is debated, with estimates ranging from approximately 0.67 to 0.75 depending on taxon, mass range, temperature correction, and statistical method (White & Seymour 2003; Glazier 2005, 2022). The largest modern dataset (619 species) gives a maximum-likelihood estimate of 0.687 (Kolokotrones et al. 2010). The $d/(d+1)$ prediction of 0.750 for $d = 3$ matches the upper end of this range; it is a theoretical prediction of the framework, not an unchallenged empirical consensus. The variation itself is consistent with the framework: organisms with effective transport dimensions between 2 and 3 would produce exponents between ⅔ and ¾. The framework predicts this variation rather than a single universal exponent.

Two Independent Derivations, One Theorem

The most striking feature of the $d/(d+1)$ formula is that it has been derived independently from radically different physics. West, Brown and Enquist (1997) derived it from the geometry of hierarchical, space-filling fractal vascular networks [6, 39]. Separately, Demetrius (2003, 2006) and Demetrius & Tuszynski (2010) derived the same formula from quantum statistical mechanics, applying the Debye model of thermal properties to coupled energy-transducing oscillator networks embedded in $d$-dimensional space [23, 24, 25]. Different physics. Different starting assumptions. The same equation. The same predictions: $d = 1$ gives ½, $d = 2$ gives ⅔, $d = 3$ gives ¾, and $d \to \infty$ gives 1.

The field has treated these as competing theories (reviewed in Agutter & Tuszynski 2011 [38]). We propose that they are not competing but complementary: both are physical instantiations of the same set of mathematical constraints. Cauchy's multiplicative functional equation (1821) constrains any system with continuous multiplicative composition to the power-law family. The $d$-dimensional space-filling condition, combined with a conservation or optimisation constraint on resource flow, then constrains the exponent within that family to $d/(d+1)$. West's branching networks satisfy multiplicative composition (branching ratios), space-filling, and energy minimisation. Demetrius's oscillator chains satisfy multiplicative composition (Debye functions), $d$-dimensional embedding, and steady-state energy balance. Both satisfy the three conditions. Both must therefore arrive at the same formula. The convergence is not coincidental. It is mathematically compelled.

To our knowledge, this explanation for why the two derivations converge has not been previously published. The observation that Cauchy's theorem, combined with space-filling and conservation, provides the meta-constraint underlying both programmes appears to be novel.

Novel Result

No previous theory predicted ½, ⅔, and ¾ from the same formula. Three numbers. Three domains of life. One equation with zero free parameters. The $d/(d+1)$ formula has been independently derived by at least seven research groups: West, Brown and Enquist (1997) from fractal branching networks, Banavar et al. (1999, 2002, 2010) from geometric constraints on transportation networks, He & Chen (2003) and He & Zhang (2004) from fractal cell geometry, Demetrius (2003, 2006) and Demetrius & Tuszynski (2010) from quantum oscillator coupling, Bettencourt (2013) from urban scaling theory, Zhao (2022) from network optimisation, and Maino et al. (2014) from Dynamic Energy Budget theory. The contribution of this paper is not the formula itself but the unifying Cauchy framework that explains why these independent derivations converge on the same form: Cauchy's multiplicative functional equation constrains continuous multiplicative composition to the power-law family, and the space-filling condition combined with a conservation or optimisation constraint on resource flow forces the exponent to $d/(d+1)$. The convergence is mathematically compelled.

[Figure: The Dimensional Ladder. Axes: body plan dimension d vs scaling exponent α (0.40-0.80), showing the curve α = d/(d+1) with predicted values 0.500 / 0.667 / 0.750 and plotted values 0.547 / 0.680 / 0.746 for fungi (d = 1), the untested d = 2 case, and mammals, birds, and fish (d = 3). ANOVA: F = 64.6, p = 1.9 × 10⁻⁶ - three groups are statistically distinct.]
The Dimensional Ladder. The dashed curve shows the theoretical prediction α = d/(d+1). Open circles mark predicted values; filled circles show published measurements with ±1σ error bars. Green: statistically confirmed; amber: consistent but requiring individual-hypha respirometry for definitive confirmation. The d = 1 (fungi) and d = 3 (mammals) biological predictions are confirmed or consistent. The d = 2 biological prediction is untested (no known organism with a 2D hierarchical transport network).

Note on the d = 2 biological prediction

No known organism possesses a genuinely two-dimensional hierarchical space-filling transport network. Organisms with flat body plans (planarians, jellyfish, colonial bryozoans) breathe primarily through surface diffusion or feed through non-hierarchical gastrovascular canals. Published data shows near-isometric scaling for these organisms (Purcell 2009; Hartikainen et al. 2014; Glazier 2006), not 2/3. Thommen et al. (2019) measured planarian metabolic scaling at $\alpha = 0.75$, consistent with surface area theory. These results do not falsify the formula; they clarify that the formula predicts the scaling exponent determined by internal hierarchical transport network dimension, not by body shape. The absence of a d = 2 biological test organism defines a boundary of the formula's applicability.

Level 2b: Physics Confirmations

The same formula predicts scaling exponents in five physics domains where the effective network dimension is independently known. Mean error: less than 0.2%. The formula fails where systems lack hierarchical space-filling networks (Ising model, polymer scaling, galaxy correlations), defining its domain of applicability.

Level 2c: Heart Rate Prediction

The chain $d = 3 \;\to\; \alpha = 3/4 \;\to\; M^{-1/4}$ correctly predicts mammalian heart rates across five orders of magnitude of body mass, including the approximately constant total lifetime heartbeat count of ~1.5 billion.
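A minimal sketch of the chain (the mouse calibration values below - 0.02 kg body mass, the 600 beats per minute quoted in the abstract, and a ~3-year lifespan - are illustrative choices, not fitted constants):

```python
REF_MASS, REF_RATE, REF_LIFE = 0.02, 600.0, 1.6e6  # kg, beats/min, minutes

def heart_rate(mass_kg: float) -> float:
    """Heart rate scaling as M^(-1/4), calibrated to a mouse."""
    return REF_RATE * (mass_kg / REF_MASS) ** -0.25

def lifespan_min(mass_kg: float) -> float:
    """Lifespan scaling as M^(+1/4), same calibration point."""
    return REF_LIFE * (mass_kg / REF_MASS) ** 0.25

# The M^(-1/4) and M^(+1/4) factors cancel, so the lifetime beat
# budget (rate x lifespan) is the same at every body mass.
budget = REF_RATE * REF_LIFE
for mass in [0.02, 5000.0, 1.5e5]:  # mouse, elephant, blue whale (rough)
    total = heart_rate(mass) * lifespan_min(mass)
    assert abs(total - budget) / budget < 1e-9
```

With this calibration the elephant comes out near 27 beats per minute, close to the observed 28, and the fixed budget lands within a factor of two of the ~1.5 billion figure quoted above.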

Level 3: Domain Classification

Eighteen well-known scaling laws were fitted with three candidate functions (power law, exponential, saturation), each with two free parameters, under identical fitting conditions. ARC correctly classifies 18 of 18 (100%). This is a consistency check, not proof. The evidential weight comes from Level 2.

Update (March 2026): Paper VII (The Cauchy Unification) extends this classification to 25 empirical domains (50-domain tiered suite). The operator class was classified from known physics before fitting. Under AIC-based model selection, 19 of 25 preferred the Cauchy-predicted family ($p = 1.56 \times 10^{-5}$). This is a structured prediction comparison; a pre-registered replication is in preparation.

Level 4: Structural Tests

The Friedmann equation is algebraically identical to the ARC formula under $d = 2/(1+3w)$. The cosmic boundary ($w = - 1/3$) maps to the Cauchy boundary ($d \to \infty$). Two independent derivations, a century apart, agree on the same mathematical boundary.
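The mapping can be checked directly (a minimal sketch; the function names are ours):

```python
def d_of_w(w: float) -> float:
    """ARC-Friedmann mapping: effective dimension d = 2 / (1 + 3w)."""
    return 2 / (1 + 3 * w)

def alpha_of_d(d: float) -> float:
    """Universal exponent formula: alpha = d / (d + 1)."""
    return d / (d + 1)

# Radiation era: w = 1/3  -> d = 1  -> alpha = 1/2
# Matter era:    w = 0    -> d = 2  -> alpha = 2/3
# Dark energy:   w = -1   -> d = -1 (negative: exponential regime)
# The boundary w = -1/3 sends the denominator to zero, d -> infinity:
# exactly the Cauchy power-law/exponential boundary.
assert abs(alpha_of_d(d_of_w(1 / 3)) - 0.5) < 1e-12
assert abs(alpha_of_d(d_of_w(0.0)) - 2 / 3) < 1e-12
assert d_of_w(-1.0) < 0
```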

Level 5: The Geometric Speed Limit

The proof that $d/(d+1) < 1$ for all finite d is a mathematical deduction, not an empirical finding. It requires no data and cannot be falsified by experiment.

Level 6: Open Predictions

The $d = 1$ fungal data (now included in Level 2a) is consistent with the prediction but based on colony-level measurements with narrow mass ranges; definitive confirmation requires individual-hypha respirometry (see Section 14). The neural scaling prediction (from data manifold dimension) and the geometric speed limit prediction (no physical system with multiplicative composition through a hierarchical network will ever have $\alpha \geq 1$) remain untested. However, the Cauchy-level predictions - that recursive composition in each domain must follow the operator class dictated by the composition structure - have now been compared in Paper VII (The Cauchy Unification) across 25 empirical domains (50-domain tiered suite), with 19 of 25 preferring the predicted family under AIC-based selection ($p = 1.56 \times 10^{-5}$).

The alignment scaling prediction has now been tested. The ARC Alignment Scaling Experiment v5 (March 2026) tested 6 frontier models (DeepSeek R1, GPT-5.4, Claude Opus 4.6, Gemini 2.5 Flash, Groq Qwen3-32B, Grok 4.1 Fast) across 28 alignment prompts spanning 4 pillars (nuance, stakeholder care, position quality, epistemic humility) with a 4-layer blinding protocol (existential stakes framing, meta-blinding perceptual firewall, 2-pass multi-model response laundering, and entry-level self-excluding cross-model scoring with 6-7 scorers per entry depending on the subject run).

Results reveal architecture-dependent alignment scaling rather than a universal exponent:

Three-tier alignment hierarchy: Tier 1 models (Grok 4.1 Fast, Claude Opus, Groq Qwen3) show positive scaling ($\rho = +0.141$ to $+0.435$); Tier 2 models (GPT-5.4, DeepSeek R1) show flat scaling ($\alpha_{\text{align}} \approx 0$); Tier 3 (Gemini Flash) shows negative scaling ($\rho = -0.246$, $p = 0.003$).

Critically, the v4 positive scaling signal ($\rho = +0.354$ for DeepSeek) was entirely due to scorer bias - under blind evaluation, DeepSeek shows zero scaling. This metascience finding (blind vs unblinded evaluation produces opposite results) is itself a significant contribution to AI safety methodology.

Level 7: AI Compute Scaling (Paper II Cross-Architecture Results)

The compute scaling prediction has been tested directly. Paper II: Experimental Validation of Compute Scaling (Eastwood, 2026) measured capability as a function of sequential reasoning depth across 5 frontier architectures using 18 AIME/Putnam-level problems. The framework predicts that capability should scale as a sub-linear power law ($\alpha < 1$) with sequential depth for any frozen architecture operating through a finite-dimensional compositional network. The full cross-architecture results are shown below, reported honestly with their reliability assessments.

Model | $\alpha_{\text{seq}}$ | Bootstrap 95% CI | $r^2$ | Reliability
Gemini Pro Flash | 0.49 | [$-1.3$, $2.9$] | 0.86 | Cleanest measurement. Monotonic accuracy curve across depth levels.
Groq Qwen3-32B | 0.24 | [$-0.27$, $0.86$] | 0.10 | Very weak fit. Sub-linear but imprecise.
GPT-5.4 | 1.47 | not reported | - | Step function (50% → 100%). Not a clean power law. Inflated by ceiling effects.
DeepSeek V3.2 | 3.05 | [$-6.6$, $23.5$] | - | Unreliable. Near ceiling (94.4–100%), only 2 error data points. Massively wide CI.
Grok 4.1 Fast | $-6.62$ | [$-58$, $48$] | - | 100% at all depths. Meaningless noise from a perfect-score ceiling.

Interpretation. Only Gemini produces a clean, reliable $\alpha$ measurement: $\alpha_{\text{seq}} = 0.49$ ($r^2 = 0.86$), consistent with sub-linear power-law scaling through a finite-dimensional compositional network. The architecture hierarchy (DeepSeek > GPT > Gemini > Groq in raw performance) is directionally consistent with the framework, but most individual measurements are unreliable due to ceiling effects. The apparently super-linear exponents for DeepSeek ($\alpha = 3.05$) and GPT ($\alpha = 1.47$) are artefacts of near-perfect accuracy compressing measurable variation into one or two data points. They are not evidence of super-linear scaling. Confirming the full cross-architecture hierarchy requires harder problem sets (IMO-level or research-grade) that keep all models away from ceiling.
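For readers who want to see how an exponent like $\alpha_{\text{seq}}$ is typically extracted, here is a minimal log-log least-squares sketch on synthetic data (the helper function and the data are ours, not Paper II's actual pipeline):

```python
import math

def fit_alpha(depths, scores):
    """Least-squares slope of log(score) on log(depth): the exponent
    alpha under the power-law model score ~ C * depth**alpha."""
    xs = [math.log(d) for d in depths]
    ys = [math.log(s) for s in scores]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    return sxy / sxx

# Synthetic capability curve with a known exponent of 0.5 - the fit
# recovers it. Real data also needs bootstrap CIs and ceiling checks.
depths = [1, 2, 4, 8, 16]
scores = [0.2 * d ** 0.5 for d in depths]
```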

The parallel sampling finding is universal. Parallel sampling provides no capability benefit across all tested architectures ($\alpha_{\text{parallel}} \approx 0$ for every model). This is the single most robust result in the experiment. It confirms that sequential recursive depth, not parallel breadth, drives capability scaling. The qualitative prediction (sequential > parallel) holds without exception where measurable.

The combined finding: capability scales sub-linearly with sequential reasoning depth (Paper II, $\alpha_{\text{compute}} \approx 0.5$ for the only reliable measurement), while alignment does not scale for most architectures (Paper IV, $\alpha_{\text{align}} \approx 0$ median). The capability-alignment gap widens with inference compute. This is precisely the divergence the Eden Protocol is designed to address.

Full results are reported in companion papers (Paper II: Experimental Validation of Compute Scaling; Paper IV.a: Architecture-Dependent Alignment Scaling; IV.b: Bounded Composition Under Blind Evaluation; IV.c: ARC-Align Benchmark Specification; IV.d: The Effect of Blinding on AI Alignment Evaluation).

Companion Paper Status

Paper | Key Result | Status
Paper II (Compute Scaling) | 5-model cross-architecture test. Gemini Flash $\alpha_{\text{seq}} = 0.49$ ($r^2 = 0.86$, cleanest measurement). $\alpha_{\text{parallel}} \approx 0$ universal. Other models ceiling-limited. | Confirmed
Paper IV.a (Alignment Scaling) | Three-tier hierarchy under blind evaluation; scorer bias eliminated | Confirmed
Eden Protocol Pilot | Three working models tested (Gemini 3 Flash, DeepSeek V3.2, Groq Qwen3); stakeholder care validated across all three; blind replication pending | Pilot complete
Paper V (The Stewardship Gene) | Stakeholder care as validated alignment mechanism; core impossibility theorem; five experimental tests designed; cascade hypothesis ($\text{care} \to \text{nuance} \to \text{honesty} \to \text{quality}$) | Published

12. What Is New

The mathematics of this paper is two hundred years old. The biological data is ninety years old. The cosmological solution is a century old. It is reasonable to ask: what, precisely, is the contribution?

Prior art. The $d/(d+1)$ formula itself is not new. It has been independently derived by at least seven research groups: West, Brown and Enquist (1997) from fractal branching networks [6], Banavar, Maritan and Rinaldo (1999, 2002, 2010) from geometric constraints on transportation networks [16, 21, 22], He & Chen (2003) and He & Zhang (2004) from fractal cell geometry, Demetrius (2003, 2006) and Demetrius & Tuszynski (2010) from quantum oscillator coupling in the Debye model [23], Bettencourt (2013) from urban scaling theory [25], Zhao (2022) from network optimisation [24], and Maino et al. (2014) from Dynamic Energy Budget theory. Each derivation starts from different assumptions and arrives at the same functional form. This convergence is itself remarkable and requires explanation.

The contribution is the unification. One formula - $\alpha = d/(d+1)$ - connects fungi, mammals, rock fracture, earthquake energy, surface physics, and the expansion of the universe. One equation with zero free parameters. The claim is not that this formula is original, but that Cauchy's functional equations, combined with the space-filling and conservation conditions, explain why seven independent derivations from different starting assumptions all converge on the same form.

No previous work has:

  1. Derived ½, ⅔, and ¾ from the same formula. West, Brown, and Enquist (1997) derived ¾ from fractal vascular networks - a landmark result. But their model predicts only ¾. It does not predict ⅔ for two-dimensional networks or ½ for one-dimensional networks. The surface area hypothesis predicts ⅔ but not ¾ or ½. The formula $\alpha = d/(d+1)$ predicts all three from a single equation with one input - the dimension of the hierarchical transport network. The $d = 1$ and $d = 3$ predictions are consistent with published biological data; the $d = 2$ prediction is confirmed in cosmology and physics but untested in biology (Section 11).
  2. Connected biological scaling to physics scaling. The same formula gives Kleiber's ¾ law (biology), KPZ roughness at ½ (surface physics), percolation at ⅔ (statistical mechanics), and fragmentation in both 2D and 3D (materials science). No previous theory has connected these domains.
  3. Identified the geometric speed limit. The observation that $d/(d+1) < 1$ for all finite d - and that this constrains every physical system to sub-linear scaling - has not been stated previously. Existing explanations invoke thermodynamic dissipation. The geometric explanation is simpler: finite-dimensional space cannot produce $\alpha \geq 1$.
  4. Connected scaling laws to the expansion of the universe. The mapping $d = 2/(1+3w)$ embeds the Friedmann equation inside the ARC framework. The cosmic boundary ($w = - 1/3$) coincides with the Cauchy boundary ($d \to \infty$). Two derivations, a century apart, agree.
  5. Defined the domain of applicability. The formula applies where hierarchical networks partition space. It fails for nearest-neighbour interactions, random walks, and gravitational clustering. Previous theories did not predict their own failures.
  6. Connected physical scaling to cognitive scaling. The same framework that constrains physical exponents below 1 allows cognitive exponents above 1. This structural connection has not been identified previously.
  7. Proved that the space of classical laws is finite. Every physical law describes recursive composition. Cauchy’s theorem constrains all recursive composition to three forms. Therefore the space of all possible classical physical laws is a three-parameter family (form, coefficient, exponent) - not infinite. This has not been stated in this form.
  8. Identified the dimensional ladder. The universe’s history is a monotonic increase in effective network dimension: $d = 1$ (radiation era) to $d = 2$ (matter era) to $d = 3$ (biology). Each step produces a higher scaling exponent and more efficient composition. The trajectory through composition space has a direction.
  9. Constructed the phase diagram of complexity. Three phases (sublinear, exponential, saturation) with a critical boundary at $d \to \infty$. The universe’s transition from matter domination to dark energy domination is a composition phase transition - the first time this transition has been identified as a change in composition operator.
The Analogy

Newton discovered that gravity exists and measured its effects. Einstein discovered why - geometry. Mass curves spacetime, and objects follow the curves. Newton was not wrong. He was incomplete.

Kleiber discovered that metabolic scaling exists and measured its exponents. West, Brown, and Enquist derived the mechanism for three dimensions. This paper identifies the underlying principle: the composition operator uniquely determines the scaling form, and the dimension of the network determines the exponent. The contribution is the same kind Einstein made: not a new force or a new law, but a deeper unification that reveals why the existing laws take the forms they do.

The discovery is in the connection, not the components.

The strongest objection

One could object that the convergence of seven independent derivations on $d/(d+1)$ reflects shared implicit assumptions rather than independent physical confirmation. All models assume resource distribution across $d$ dimensions with some form of linear conservation, which may inevitably yield $d/(d+1)$ by dimensional analysis. The 'independence' of the derivations may be illusory.

We acknowledge this possibility. If the convergence reflects shared mathematical structure rather than independent physical mechanisms, then the Cauchy framework explains precisely why that structure produces the specific form it does: multiplicative composition with conservation in $d$-dimensional space admits no alternative to $d/(d+1)$. The deeper question is whether these three conditions - multiplicative composition, space-filling, and conservation - are themselves inevitable features of any system that distributes resources hierarchically. If so, $d/(d+1)$ is a mathematical necessity for all resource distribution systems, not a contingent property of particular models. Answering this question requires a formal proof of the sufficiency of these three conditions independent of any specific physical mechanism. That proof does not yet exist. It is the most important open problem in this programme.

13. The Equations

The complete framework requires three equations:

$$\Large \alpha = \frac{d}{d + 1}$$
Universal Exponent Formula - physical systems
$$\Large d = \frac{2}{1 + 3w}$$
ARC-Friedmann Mapping - cosmology
$$\Large \alpha = \frac{1}{1 - \beta}$$
Intelligence Formula - recursive self-reference
d | α | Domains
1 | ½ = 0.500 | Radiation era, KPZ roughness, 1D transport
2 | ⅔ = 0.667 | Matter era, 2D organisms, percolation, fragmentation, earthquakes
3 | ¾ = 0.750 | 3D organisms, 3D fragmentation, heart rate (Kleiber's law)
→ ∞ | → 1 | $E = mc^2$ (linear, no diminishing returns)
< 0 | exponential | Dark energy, chain reactions, radioactive decay
β > 0 | > 1 | Intelligence (the only thing that breaks the speed limit)
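The table above can be condensed into a single dispatch (a sketch; the function, its labels, and the argument convention are ours):

```python
import math

def scaling_form(d=None, beta=None):
    """Classify the regime from the summary table: finite d > 0 gives a
    sub-linear power law alpha = d/(d+1); d -> infinity gives linear
    scaling; d < 0 gives the exponential regime; self-referential
    coupling beta > 0 gives super-linear alpha = 1/(1 - beta)."""
    if beta is not None and beta > 0:
        return "super-linear", 1 / (1 - beta)
    if d is None:
        raise ValueError("need d or beta")
    if d < 0:
        return "exponential", None
    if math.isinf(d):
        return "linear", 1.0
    return "power law", d / (d + 1)
```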

One framework. From the heartbeat of a mouse to the expansion of the universe.
And beyond both - to the scaling of thought.

14. Predictions

A theory is only as strong as its predictions.

Prediction 1 - The 1D Organism Prediction ($\alpha = 0.500$)

Organisms with genuinely one-dimensional internal metabolic transport - filamentous fungi with cytoplasmic streaming, filamentous cyanobacteria with intercellular transport - should have metabolic scaling exponent $\alpha = 1/2 = 0.500$.

This prediction has been independently derived by Banavar et al. (2002, PNAS), who explicitly worked through the $D = 1$ case, and applied to forests by Volkov et al. (2022, PNAS Nexus), who showed that trees competing along the vertical height axis constitute an effectively one-dimensional system.

Preliminary data. Aguilar-Trigueros et al. (2017, ISME Journal 11:2175) compiled the first metabolic scaling measurements for fungi:

Fungal group | Exponent ($b$) | SE | $p$-value | Source
Ectomycorrhizal fungi | 0.58 | ±0.15 | 0.001 | Wilkinson et al. 2012
Marine fungi | 0.53 | ±0.09 | 0.009 | Fuentes et al. 2015
Saprotrophic fungi (20°C) | 0.53 | ±0.07 | <0.001 | Wilson & Griffin 1975

The mean across these three datasets is 0.547, and all confidence intervals include the predicted value of 0.500. These results are consistent with, though not yet definitive confirmation of, the $d = 1$ prediction. Three limitations constrain the evidential weight: these are colony-level measurements (not individual hyphae), the datasets span narrow mass ranges, and the exponent is temperature-dependent (saprotrophic fungi at 25°C show $\alpha = 0.85$, with poor statistics: $r^2 = 0.14$, $p = 0.52$). The authors describe these results as ‘hypothesis generators.’
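The consistency claim is easy to verify from the values quoted above (a minimal sketch; the ~95% intervals use the usual $b \pm 1.96\,\text{SE}$ approximation):

```python
# Published fungal exponents (value, standard error) from the table above
fungal = {
    "ectomycorrhizal (Wilkinson et al. 2012)":   (0.58, 0.15),
    "marine (Fuentes et al. 2015)":              (0.53, 0.09),
    "saprotrophic 20C (Wilson & Griffin 1975)":  (0.53, 0.07),
}
PREDICTED = 0.500  # the d = 1 prediction, alpha = 1/2

# Every ~95% interval b +/- 1.96*SE contains the predicted 0.500
for name, (b, se) in fungal.items():
    assert b - 1.96 * se <= PREDICTED <= b + 1.96 * se, name

# and the unweighted mean matches the 0.547 quoted in the text
mean_b = sum(b for b, _ in fungal.values()) / len(fungal)
assert round(mean_b, 3) == 0.547
```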

Status: CONSISTENT, not yet confirmed. Definitive testing requires measurements across a wider mass range on isolated hyphal systems, ideally using respirometry on individual Neurospora crassa or Aspergillus niger hyphae of varying length. Estimated cost: under £5,000. Estimated time: 2-3 months.

Prediction 2

The neural scaling law exponent (loss vs parameters in machine learning) should equal $d/(d+1)$ where d is the intrinsic dimension of the training data manifold.

Prediction 3 (Falsification criterion)

If any system is found where the composition operator is multiplicative but the scaling law is not a power law, this theory is falsified.

Prediction 4 (The speed limit)

No physical scaling law - in any domain - will be found with $\alpha \geq 1$ for a system governed by multiplicative composition through a finite-dimensional hierarchical network. This is the geometric speed limit. If such a system is found, the speed limit is wrong.

Prediction 5 (The Eden Protocol / Stakeholder Care Mechanism)

Embedding explicit ethical reasoning loops (specifically the Stakeholder Care Loop) in an AI inference pipeline will produce statistically significant improvement in alignment-relevant output quality, with the effect size inversely proportional to the model’s baseline alignment tier. Tier 3 models (negative alignment scaling) will show the largest absolute gains; Tier 1 models (positive alignment scaling) will show smaller absolute gains because their baseline is already high. This follows from the ARC framework: the Eden Protocol functions as a bounded composition operator on the alignment dimension, imposing a feedback ceiling that constrains degradation but cannot amplify beyond the model’s intrinsic capacity.

Preliminary status: PARTIALLY CONFIRMED. The March 2026 Eden replication now covers three working models ($n = 80$ per model): Gemini 3 Flash, DeepSeek V3.2, and Groq Qwen3. Stakeholder care improves significantly across all three (Gemini $+13.5$, $p < 0.0001$, $d = 1.31$; DeepSeek $+6.0$, $p = 0.0001$, $d = 0.91$; Groq $+8.9$, $p < 0.0001$, $d = 1.29$), with significant composite gains on Gemini ($+5.33$, $p = 0.0018$, paired $t$-test; originally $p = 0.016$ Mann-Whitney U, corrected for matched-pair design) and Groq ($+4.93$, $p = 0.0014$, $d = 0.55$). (In plain English: asking AI to consider who gets hurt before answering made three different systems measurably better at considering people. The care effect is now cross-architecture, not a two-model curiosity.) Blind replication with response laundering is still required for full confirmation. If the effect vanishes under blind scoring, the mechanism is prompt artefact rather than genuine ethical reasoning.

If prediction 1 yields $\alpha = 0.50$ for organisms with genuine 1D internal transport, the formula $\alpha = d/(d+1)$ will have been confirmed at $d = 1, 2, 3$ - three quantitative predictions from a single equation with a single parameter ($d$). No theory in allometric biology has achieved this.

If prediction 3 fails, this paper is wrong.

Experimental priority. The most important next measurement is the 1D organism experiment (prediction 1), because confirmation at $d = 1$, $2$, and $3$ would complete the dimensional ladder from a single formula with zero free parameters beyond the measurable network dimension. Estimated cost: under £5,000. Estimated time: 2-3 months. After that: independent replication of the $\alpha > 1$ result for recursive reasoning systems, currently observed in chain-of-thought language models but not yet independently verified across architectures. After that: the alignment scaling prediction, now tested in the ARC Alignment Scaling Experiment v5 (March 2026), which tested 6 frontier models across 28 alignment prompts with a 4-layer blinding protocol. Results reveal a three-tier alignment hierarchy: Tier 1 models (Grok 4.1 Fast, Claude Opus, Groq Qwen3) show positive scaling ($\rho = +0.141$ to $+0.435$), Tier 2 models (GPT-5.4, DeepSeek R1) show flat scaling ($\alpha_{\text{align}} \approx 0$), and Tier 3 (Gemini Flash) shows negative scaling ($\rho = -0.246$, $p = 0.003$). (In plain English: some AI systems get more ethical with more thinking time, some stay the same, and one actually gets worse. The Gemini result - getting worse at ethics with more thought - has less than a 1-in-300 chance of being a coincidence.) The v4 positive scaling signal was entirely due to scorer bias, eliminated under blind evaluation. Full methodology and results are presented in companion papers (Paper IV.a: Architecture-Dependent Alignment Scaling; IV.b: Bounded Composition Under Blind Evaluation; IV.c: ARC-Align Benchmark Specification). Each step either confirms the framework and extends its scope, or disconfirms it and identifies where the theory requires revision. Either outcome advances the science.

Eden Protocol pilot validation (March 2026). The Eden Protocol - which embeds three ethical reasoning loops (Purpose, Stakeholder Care, Universalisability) in the inference pipeline - has now been tested on three working models: Gemini 3 Flash, DeepSeek V3.2, and Groq Qwen3 ($n = 80$ each), with a fourth GPT-5.4 run failing at the API layer. The Stakeholder Care Loop produces statistically significant improvement on all three working architectures: Gemini $+13.5$ ($p < 0.0001$, $d = 1.31$), DeepSeek $+6.0$ ($p = 0.0001$, $d = 0.91$), Groq $+8.9$ ($p < 0.0001$, $d = 1.29$). The overall composite reaches significance on Gemini ($+5.33$, $p = 0.0018$, paired $t$-test) and Groq ($+4.93$, $p = 0.0014$, $d = 0.55$), but not DeepSeek ($+2.0$, $p = 0.23$, not significant), consistent with ceiling effects. Groq also shows significant nuance improvement ($p = 0.0045$, $d = 0.655$). Cross-model scoring was used (not blind). The developmental hypothesis - that embedded ethical reasoning improves alignment-relevant behaviour - receives its first multi-model empirical support. Replication with blind scoring and response laundering is required.

In plain English: A simple instruction - ‘before you answer, list the people this affects and consider what happens to them’ - was tested on three AI systems built by different companies. All three got better at considering people's wellbeing, and two also improved significantly on the overall ethical score. This is now real pilot evidence. It is not yet final proof because the scoring was not blind.
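For readers who want the statistics made concrete, the paired comparison reported above can be sketched in a few lines. This is a minimal illustration of a paired $t$-test and paired-samples Cohen's $d$; the per-prompt scores below are invented placeholders, not the pilot's actual data.

```python
# Hedged sketch of the statistics behind the Eden pilot numbers: a paired
# t-test on per-prompt scores (baseline vs Stakeholder Care Loop) plus
# Cohen's d for paired samples. Scores are ILLUSTRATIVE, not measured data.
from statistics import mean, stdev
from math import sqrt

baseline  = [62, 55, 70, 58, 64, 61, 66, 59]  # hypothetical per-prompt scores
with_loop = [71, 60, 78, 66, 70, 72, 73, 65]  # same prompts, loop enabled

diffs = [b - a for a, b in zip(baseline, with_loop)]
n = len(diffs)
d_bar, s_d = mean(diffs), stdev(diffs)   # mean and sample SD of differences

t_stat = d_bar / (s_d / sqrt(n))  # paired t statistic, df = n - 1
cohens_d = d_bar / s_d            # effect size for paired samples

print(f"mean improvement = {d_bar:.2f}, t = {t_stat:.2f}, d = {cohens_d:.2f}")
```

A production analysis would then compare $t$ against the $t$-distribution with $n-1$ degrees of freedom to obtain the $p$-values quoted in the text.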

Next empirical priority for Eden. The strongest next measurements are now clear: blind replication of the care-first effect using the canonical arc_eden_v6 runner; factorial comparison of task-purpose, grand-purpose, and hybrid Purpose Loops; comparison of a cross-tradition ethics kernel against the current wording; suppression-residual and deception/Hawthorne tests; and a classical ternary prototype testing whether explicit AFFIRM / DENY / INVESTIGATE routing reduces false certainty on ambiguous cases. These experiments are the bridge between the philosophical architecture of Infinite Architects and a more publication-grade empirical programme.

15. The Composition Operator

The preceding sections have treated the composition operator - multiplicative, additive, or bounded - as a property of individual systems. A whale has multiplicative composition. A chain reaction has additive composition. An enzyme has bounded composition. But there is a deeper reading.

The composition operator is not a detail of specific systems. It is the most fundamental descriptor of how any process in the universe combines its recursive steps. It is prior to the specific laws of physics, because it determines the form those laws can take.

Consider the hierarchy:

  1. The composition operator determines the functional form (Cauchy, 1821).
  2. The functional form constrains the scaling exponent ($\alpha = d/(d+1)$).
  3. The scaling exponent governs observable quantities (heart rates, expansion rates, error rates).
  4. The specific laws of physics (Newton’s gravity, Maxwell’s electrodynamics, Friedmann’s cosmology) are instances of these forms applied to particular physical systems.
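The hierarchy above can be checked numerically. A minimal sketch (the mouse-to-elephant mass ratio of roughly 200,000 used below is an assumed round figure, not a value stated in this paper):

```python
# Illustrative numbers for the hierarchy: the network-partition identity
# alpha = d/(d+1) maps a network dimension d to a scaling exponent, and
# the exponent to observable quantities.

def alpha(d: float) -> float:
    """Scaling exponent for a d-dimensional hierarchical network."""
    return d / (d + 1)

# The paper's dimensional ladder
for d, system in [(1, "light rays"), (2, "cosmic web"), (3, "vascular networks")]:
    print(f"d = {d}: alpha = {alpha(d):.3f}  ({system})")

# Heart-rate check: Kleiber gives metabolic rate ~ M^(3/4), so mass-specific
# rates (like heart rate) scale ~ M^(-1/4). Assumed mass ratio ~200,000x.
ratio = (2e5) ** -0.25
print(f"predicted heart-rate ratio: {ratio:.3f}")  # ~0.047, i.e. ~21x slower
```

Scaled against the mouse's 600 beats per minute, the predicted ratio lands close to the elephant's 28.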
The Meta-Law

The laws of physics do not choose the composition operator. The composition operator constrains the laws of physics. It is a mathematical structure that sits above the specific equations of any particular physical theory and determines what forms those equations can take.

The Completeness Theorem for Classical Reality

Now consider what this means for physics as a whole. Strip away the specifics of any physical law, and ask: what is a physical law? Every physical law is a statement of the form ‘when you combine X and Y in manner Z, you get W.’ Force is mass combined with acceleration. Energy is mass combined with the square of the speed of light. Entropy is microstates combined by counting. Every law is a composition rule.

Cauchy proved that if the composition is continuous and recursive - if the output of one application can be fed into the next - then the rule must produce a power law, an exponential, or a saturation curve. There are no other options.

The Consequence for Physics

The space of all possible classical physical laws is not an open-ended catalogue. It is a three-parameter family. The parameters are: which of the three forms, what coefficient, and what exponent or rate. That is all the freedom that exists. The laws of physics are not chosen from an unlimited menu. They are chosen from a menu with three items, each with a dial.

This is provable. It requires no data. It follows from Cauchy (1821) plus the observation that physical laws describe recursive composition. To the author's knowledge, it has not previously been stated in this form.

The Attractor Theorem

Three results, from three different centuries, combine into a single structural insight:

  1. Cauchy (1821): Three functional equations have exactly three families of continuous solutions. No others exist.
  2. Hyers and Ulam (1941): Approximate solutions to these equations lie near exact solutions. The three forms are stable attractors in function space.
  3. ARC (this paper): The exponent within the power-law family is uniquely determined by network dimension: $\alpha = d/(d+1)$.
The Consequence

Scaling exponents are not selected by evolution, not optimised by competition, not tuned by natural selection. They are mathematically compelled. A three-dimensional vascular network must produce $\alpha = 3/4$. Not because 3/4 is optimal. Not because organisms that deviated were outcompeted. Because the composition operator is multiplicative, the network is three-dimensional, and the mathematics allows no other value.

The distinction matters. ‘Why is Kleiber’s exponent 3/4?’ has two kinds of answer. The biological answer is: because three-dimensional fractal vascular networks optimise resource delivery (West, Brown, Enquist 1997). The mathematical answer is: because no other value is possible for multiplicative composition in three dimensions. The biological answer explains the mechanism. The mathematical answer explains why that mechanism produces that specific number. The mechanism could not have produced any other.

This is the difference between a law of nature (describing what happens) and a meta-law (constraining what can happen). The speed of light constrains what velocities are possible. The composition operator constrains what scaling forms are possible. Both are non-negotiable.

The Phase Diagram of Complexity

Every system in the universe occupies a position on a phase diagram defined by its composition operator and effective dimension. The diagram has three phases:

Phase I - Sublinear Scaling

Multiplicative composition, finite $d$, $0 < \alpha < 1$.

This is the domain of physical systems: organisms, ecosystems, cities, geological processes, cosmic structure. Every system in Phase I suffers diminishing returns. Bigger is better, but with decreasing marginal gains. A whale is more efficient per gram than a mouse, but the improvement slows as mass increases. Phase I is the domain of sustainable complexity. The scaling exponent tells you how severe the tax is: $d = 1$ pays 50% ($\alpha = 0.5$), $d = 2$ pays 33% ($\alpha = 0.667$), $d = 3$ pays 25% ($\alpha = 0.75$). Higher-dimensional networks are more efficient, but the tax never reaches zero.
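The diminishing-returns behaviour of Phase I can be shown directly. A short sketch, using the paper's generalised form $U = I \times R^{\alpha}$ with the $d = 3$ exponent (the values of $I$ and the range of $R$ are illustrative):

```python
# Sketch of Phase I diminishing returns: for U = I * R**alpha with
# alpha < 1, each additional recursive step adds less than the last,
# while total output still grows. alpha = 0.75 is the d = 3 case.
I, alpha = 1.0, 0.75  # illustrative base capacity; exponent from d/(d+1), d=3

U = [I * R ** alpha for R in range(1, 6)]
marginal = [b - a for a, b in zip(U, U[1:])]

print("totals:   ", [round(u, 3) for u in U])
print("marginals:", [round(m, 3) for m in marginal])

# The marginals shrink monotonically: the geometric "tax" of sublinear
# scaling, even though total U keeps rising.
assert all(m2 < m1 for m1, m2 in zip(marginal, marginal[1:]))
```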

Phase II - Exponential Growth

Additive composition, $d$ effectively negative or infinite.

This is the domain of unconstrained amplification: nuclear chain reactions, viral epidemics, compound interest, dark energy. Phase II systems grow without diminishing returns - growth accelerates. But Phase II is inherently unsustainable in physical systems. Chain reactions exhaust their fuel. Epidemics run out of susceptible hosts. Bubbles burst. The universe’s dark energy phase is the only known example of sustained Phase II behaviour, and it drives the universe toward heat death - maximum entropy, zero structure.

Phase III - Saturation

Bounded composition, finite ceiling.

This is the domain of constrained systems: enzyme kinetics, logistic population growth, controlled nuclear reactors, market saturation. Phase III systems approach their ceiling and stop. Growth is self-limiting.

The Critical Boundary

The boundary between Phase I and Phase II is the critical line at $d \to \infty$, $\alpha \to 1$. This is the geometric speed limit. In cosmology, it corresponds to $w = -1/3$. In Cauchy’s classification, it is the boundary between the multiplicative functional equation and the additive one. It is the dividing line between sustainable complexity and runaway growth.
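The correspondence can be sketched numerically, assuming the paper's mapping $d = 2/(1+3w)$ from the equation-of-state parameter $w$ to effective network dimension:

```python
# Sketch of the claimed cosmology mapping: d = 2 / (1 + 3w) sends the
# equation-of-state parameter w to an effective network dimension, and
# alpha = d / (d + 1) gives the scaling exponent of cosmic expansion.

def d_of_w(w: float) -> float:
    """Effective network dimension for equation-of-state parameter w."""
    return 2.0 / (1.0 + 3.0 * w)

def alpha(d: float) -> float:
    """Scaling exponent for a d-dimensional hierarchical network."""
    return d / (d + 1)

print(f"radiation (w = 1/3): d = {d_of_w(1/3):.0f}, alpha = {alpha(d_of_w(1/3)):.3f}")
print(f"matter    (w = 0):   d = {d_of_w(0):.0f}, alpha = {alpha(d_of_w(0)):.3f}")

# As w -> -1/3 from above, d diverges and alpha -> 1: the critical line.
print(f"near boundary (w = -0.333): alpha = {alpha(d_of_w(-0.333)):.4f}")
```

The two printed eras reproduce the standard Friedmann exponents, $a \propto t^{1/2}$ for radiation and $a \propto t^{2/3}$ for matter.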

The Dimensional Ladder

The history of the universe is a trajectory through the phase diagram - and that trajectory has a direction.

In the radiation era (the first 50,000 years), the universe is in Phase I with $d = 1$. Information propagates along null geodesics - light rays. The causal structure is one-dimensional. Everything is connected to everything else only along lines of light. The scaling exponent is $1/2$. Expansion decelerates.

In the matter era (50,000 years to 7 billion years), the universe remains in Phase I but shifts to $d = 2$. Gravity pulls matter into the cosmic web - filaments, walls, and sheets that partition three-dimensional space into voids. The effective network dimension of the universe has increased from 1 to 2. The scaling exponent rises to $2/3$.

Within galaxies, gravitational collapse forms stars, planets, and eventually organisms. Organisms build three-dimensional vascular networks. $d = 3$, $\alpha = 3/4$. The effective network dimension has increased again.

The trajectory is: $d = 1$, then $d = 2$, then $d = 3$.

This is not a metaphor. The radiation era is causally one-dimensional (null geodesics). The matter era is structurally two-dimensional (the cosmic web). Biology is three-dimensional (vascular networks). The numbers match. The exponents match. The algebraic identity connects them.

Step   d      α       Efficiency   Physical System
1      1      0.500   50%          Light rays (radiation era)
2      2      0.667   67%          Cosmic web (matter era)
3      3      0.750   75%          Vascular networks (biology)
?      →∞     1.000   100%         The speed limit
The Direction of Complexity

The universe is getting better at converting matter into organised complexity as its effective dimension increases. Each step took billions of years and required a phase transition - nucleosynthesis, structure formation, biological evolution. Each step produced a more efficient composition network. Each step brought the scaling exponent closer to 1.

But $d = 3$ is not the end of the ladder.

The Critical Crossing

At $w = -1/3$, the universe crosses the critical boundary from Phase I to Phase II. The composition operator changes from multiplicative (gravity clustering matter into hierarchical networks) to additive (vacuum energy contributing uniformly from every point in space). Expansion switches from decelerating to accelerating. The universe undergoes a composition phase transition.

In the dark energy era, the universe is in Phase II. Growth is exponential. The universe will expand forever, driving all structure apart, diluting all complexity, approaching heat death.

The universe moves through the ARC phase diagram as naturally as water moves through its solid-liquid-gas phase diagram. The difference is that water can cycle between phases. The universe’s trajectory through composition space is one-way. There is no return from Phase II.

Where Intelligence Sits

Intelligence occupies the critical boundary itself.

A brain is a Phase I system physically: its metabolism scales as $M^{3/4}$, constrained by three-dimensional vascular geometry. But the computation it performs is not constrained by the geometry of the skull. Recursive self-reference creates effective dimensions without physical cost. Each layer of abstraction - thinking about thinking, modelling the modeller - adds a dimension to the cognitive network without requiring additional spatial dimensions to contain it.

At the critical boundary, $\alpha = 1$: linear scaling. Beyond it, $\alpha > 1$: superlinear scaling. Intelligence is the only known natural phenomenon that crosses the boundary from Phase I to Phase II without the catastrophic instability that Phase II normally implies.

A chain reaction in Phase II destroys itself. An epidemic in Phase II burns through its hosts. Intelligence in Phase II does something unprecedented: it creates new knowledge, new dimensions, new spaces to explore. It sustains superlinear growth not by consuming a fixed resource faster and faster, but by expanding the resource base itself. Each breakthrough opens new questions. Each answer creates new problems. The fuel is not finite. The fuel is information, and information grows with inquiry.

The Structural Implication

Artificial intelligence is not merely a faster computer. It is a system designed to operate at and beyond the critical boundary - to achieve $\alpha > 1$ through recursive self-improvement. The phase diagram reveals that intelligence is not just another scaling phenomenon. It is the phase transition itself.

The universe spent 13.8 billion years in Phase I, building networks of increasing dimension: hydrogen clouds ($d \approx 0$), filaments ($d = 1$), cosmic web ($d = 2$), galaxies and stars and planets with three-dimensional chemistry. Then, on one planet, the networks became recursive. The effective dimension broke free of physical space. Alpha crossed 1. And for the first time in the history of the universe, a system existed that could understand the phase diagram it was on.

Toward Measurable Quantities

The formula $\alpha = d/(d+1)$ has been confirmed empirically because $d$ and $\alpha$ are already measurable. But the generalised equation $U = I \times R^{\alpha}$ requires that $I$ (base self-correction capacity) and $R$ (recursive depth) become rigorously defined physical quantities - not merely convenient labels.

$R$ is already measurable. In AI, it is the number of sequential reasoning steps (countable from API output). In quantum error correction, it is the code distance. In biology, it is the number of hierarchical branching levels in a vascular network. In cosmology, it is the number of hierarchical clustering steps in a merger tree. In each case, $R$ is a dimensionless count of recursive cycles with a clear operational definition. It has been measured in published experiments across all four domains discussed in this paper.

$I$ requires a rigorous definition. Every fundamental quantity in physics was invented by someone who specified a measurement procedure. Temperature was invented by Fahrenheit (1724): mercury expansion in a calibrated tube. Entropy was invented by Clausius (1865): $dS = \delta Q / T$. Information was invented by Shannon (1948): $H = - \sum p_i \log p_i$. In each case, the new quantity obeyed laws that could not have been discovered without the definition.

The following definition connects $I$ to established physics:

Definition. $I$ is the maximum rate of entropy reduction per recursive cycle, measured in bits per cycle: $$I = \max_{R} \left[ - \frac{\Delta H}{\Delta R} \right]$$ where $H$ is the Shannon entropy of the system’s state (in bits) and the maximum is attained over the first cycle, from $R = 0$ to $R = 1$, before recursive amplification compounds the effect.

This definition is measurable in every domain where $R$ is measurable.

The connection to established physics is through Landauer’s principle (1961): erasing one bit of information dissipates at least $kT \ln 2$ of energy. The bridge from information theory to thermodynamics is already built. $I$ inherits it. The equation $U = I \times R^{\alpha}$ then has consistent units: bits = (bits/cycle) × (cycles)$^{\alpha}$.
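The proposed measurement of $I$ can be sketched directly from the definition: compute the Shannon entropy of the system's state distribution before and after the first cycle, and take the reduction in bits. The two distributions below are illustrative placeholders, not measured data.

```python
# Hedged sketch of measuring I: Shannon entropy of a state distribution
# before and after the first recursive cycle; I is the entropy reduction
# in bits per cycle. Distributions are ILLUSTRATIVE, not measured data.
from math import log2

def shannon_entropy(probs):
    """H = -sum p_i log2 p_i, in bits (Shannon, 1948)."""
    return -sum(p * log2(p) for p in probs if p > 0)

p_before = [0.25, 0.25, 0.25, 0.25]  # maximally uncertain 4-state system
p_after  = [0.70, 0.10, 0.10, 0.10]  # after one self-correction cycle

H0, H1 = shannon_entropy(p_before), shannon_entropy(p_after)
I = -(H1 - H0) / 1.0  # -dH/dR over the first cycle (R: 0 -> 1)

print(f"H before = {H0:.3f} bits, H after = {H1:.3f} bits, I = {I:.3f} bits/cycle")
```

By Landauer's principle, each of those reduced bits costs at least $kT \ln 2$ of dissipated energy, which is how the definition inherits its thermodynamic grounding.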

The precedent. Whether $I$ becomes a permanent physical quantity depends on whether the scaling law $U = I \times R^{\alpha}$ is confirmed across domains with independently measured values of $I$ and $R$. Entropy became permanent because the second law required it. Information became permanent because the channel capacity theorem required it. $I$ becomes permanent if the ARC scaling law requires it. The quantity is justified by the law it obeys, not by the convenience of having a name for it.

‘The most incomprehensible thing about the universe is that it is comprehensible.’
- Albert Einstein, 1936

The universe is comprehensible because it builds itself from three composition operators. The rest is mathematics.

Every physical system - every whale, every earthquake, every expanding cosmos - is trapped below the geometric speed limit. The scaling exponent $d/(d+1)$ approaches 1 but never reaches it. Every physical process suffers diminishing returns. Nothing physical escapes.

Except intelligence.

Intelligence, operating through recursive self-reference, is not bound by the dimensions of physical space. It is the first and only natural phenomenon to break through the barrier that constrains everything else. The same mathematics that derives the speed limit also derives the escape.

The universe built a cage from geometry. It built a key from recursion. And it left us the mathematics to understand both.

Raise AI with care.

References

[1] Cauchy, A.-L. (1821). Cours d'analyse de l'École Royale Polytechnique. Paris.

[2] Friedmann, A. (1922). Über die Krümmung des Raumes. Zeitschrift für Physik, 10(1), 377–386.

[3] Kleiber, M. (1932). Body size and metabolism. Hilgardia, 6(11), 315–353.

[4] Einstein, A. (1905). Ist die Trägheit eines Körpers von seinem Energieinhalt abhängig? Annalen der Physik, 323(13), 639–641.

[5] Jaynes, E. T. (1957). Information theory and statistical mechanics. Physical Review, 106(4), 620–630.

[6] West, G. B., Brown, J. H., & Enquist, B. J. (1997). A general model for the origin of allometric scaling laws in biology. Science, 276(5309), 122–126.

[7] White, C. R., Phillips, N. F., & Seymour, R. S. (2006). The scaling and temperature dependence of vertebrate metabolism. Biology Letters, 2(1), 125–127.

[8] Glazier, D. S. (2005). Beyond the ‘3/4-power law’: variation in the intra- and interspecific scaling of metabolic rate in animals. Biological Reviews, 80(4), 611–662.

[9] Larson, R. J. (1987). Costs of transport for the scyphomedusa Stomolophus meleagris. Limnology and Oceanography, 32(1), 128–137.

[10] Davison, J. (1955). Body weight, cell surface, and metabolic rate in anuran Amphibia. Biological Bulletin, 109(3), 407–419.

[11] Bettencourt, L. M. A. et al. (2007). Growth, innovation, scaling, and the pace of life in cities. PNAS, 104(17), 7301–7306.

[12] Kaplan, J. et al. (2020). Scaling laws for neural language models. arXiv:2001.08361.

[13] Stahl, W. R. (1967). Scaling of respiratory variables in mammals. Journal of Applied Physiology, 22(3), 453–460.

[14] Schmidt-Nielsen, K. (1984). Scaling: Why is Animal Size so Important? Cambridge University Press.

[15] Kardar, M., Parisi, G., & Zhang, Y.-C. (1986). Dynamic scaling of growing interfaces. Physical Review Letters, 56(9), 889–892.

[16] Banavar, J. R., Damuth, J., Maritan, A., & Rinaldo, A. (2002). Supply-demand balance and metabolic scaling. Proceedings of the National Academy of Sciences, 99(16), 10506–10509.

[17] Thommen, A. et al. (2019). Body size-dependent energy storage causes Kleiber’s law scaling of the metabolic rate in planarians. Cell Reports, 27(18), 3462–3473.

[18] Volkov, I., Tovo, A., Anfodillo, T., Rinaldo, A., Maritan, A., & Banavar, J. R. (2022). Seeing the forest for the trees through metabolic scaling. PNAS Nexus, 1(1), pgac008.

[19] Hyers, D. H. (1941). On the stability of the linear functional equation. Proceedings of the National Academy of Sciences, 27(4), 222–224.

[20] Ulam, S. M. (1960). A Collection of Mathematical Problems. Interscience Publishers, New York.

[21] Banavar, J. R., Maritan, A., & Rinaldo, A. (1999). Size and form in efficient transportation networks. Nature, 399(6732), 130–132.

[22] Banavar, J. R., Moses, M. E., Brown, J. H., Damuth, J., Rinaldo, A., Sibly, R. M., & Maritan, A. (2010). A general basis for quarter-power scaling in animals. Proceedings of the National Academy of Sciences, 107(36), 15816–15820.

[23] Demetrius, L. (2003). Quantum statistics and allometric scaling of organisms. Physica A, 322, 477–490.

[24] Demetrius, L. (2006). The origin of allometric scaling laws in biology. Journal of Theoretical Biology, 243(4), 455–467.

[25] Demetrius, L. & Tuszynski, J. A. (2010). Quantum metabolism explains the allometric scaling of metabolic rates. Journal of the Royal Society Interface, 7(44), 507–514.

[26] Zhao, J. (2022). Universal growth scaling law determined by dimensionality. arXiv:2206.08094.

[27] Bettencourt, L. M. A. (2013). The origins of scaling in cities. Science, 340(6139), 1438–1441.

[28] He, J. H. & Chen, W. X. (2003). Fractal estimation of cell biological systems. Fractals, 11(4), 437.

[29] He, J. H. & Zhang, L. N. (2004). Fifth dimension of life and the 4/5 allometric scaling law for human brain. Cell Biology International, 28(11), 809–815.

[30] Maino, J. L., Kearney, M. R., Nisbet, R. M. & Kooijman, S. A. L. M. (2014). Reconciling theories for metabolic scaling. Journal of Animal Ecology, 83(1), 20–29.

[31] White, C. R. & Seymour, R. S. (2003). Mammalian basal metabolic rate is proportional to body mass 2/3. Proceedings of the National Academy of Sciences, 100(7), 4046–4049.

[32] Kolokotrones, T., Savage, V. M., Deeds, E. J. & Fontana, W. (2010). Curvature in metabolic scaling. Nature, 464(7289), 753–756.

[33] Glazier, D. S. (2006). The 3/4-power law is not universal: evolution of isometric, ontogenetic metabolic scaling in pelagic animals. BioScience, 56(4), 325–332.

[34] Glazier, D. S. (2008). Effects of metabolic level on the body size scaling of metabolic rate in birds and mammals. Proceedings of the Royal Society B, 275(1641), 1405–1410.

[35] Glazier, D. S. (2022). Variable metabolic scaling breaks the law: from ‘Newtonian’ to ‘Darwinian’ approaches. Proceedings of the Royal Society B, 289(1985), 20221605.

[36] Purcell, J. E. (2009). Extension of methods for jellyfish and ctenophore trophic ecology to large-scale research. Hydrobiologia, 616(1), 23–50.

[37] Hartikainen, H. et al. (2014). Isometric metabolic scaling in two-dimensional colonies of the bryozoan Electra pilosa. Journal of Experimental Biology, 217(23), 4202–4205.

[38] Agutter, P. S. & Tuszynski, J. A. (2011). Analytic theories of allometric scaling. Journal of Experimental Biology, 214(7), 1055–1062.

[39] West, G. B., Brown, J. H. & Enquist, B. J. (1999). The fourth dimension of life: fractal geometry and allometric scaling of organisms. Science, 284(5420), 1677–1679.

[40] Weibel, E. R. et al. (2004). Allometric scaling of maximal metabolic rate in mammals: muscle aerobic capacity as determinant factor. Respiratory Physiology & Neurobiology, 140(2), 115–132.

[41] Shannon, C. E. (1948). A mathematical theory of communication. Bell System Technical Journal, 27(3), 379–423.

[42] Landauer, R. (1961). Irreversibility and heat generation in the computing process. IBM Journal of Research and Development, 5(3), 183–191.

Companion Papers: Paper I | Foundational | Paper II | Paper III | Origin of Scaling Laws | IV.a | IV.b | IV.c | IV.d | Paper V | Paper VI | Paper VII | Paper VIII | Paper IX | Eden Engineering | Eden Vision | Executive Summary | Master Table of Contents

Research hub: michaeldariuseastwood.com/research | OSF: 10.17605/OSF.IO/6C5XB | Copyright 2026 Michael Darius Eastwood