10a. Replicator Dynamics Simulator

Strategies that perform above the population average grow in proportion; those below average shrink. The simulator applies these dynamics to classic games -- Hawk-Dove, Rock-Paper-Scissors, and Coordination -- each of which produces fundamentally different long-run behavior.

[Interactive controls: game selection, payoff matrix editor, initial population mix; live readouts for time, dominant strategy, equilibrium status, and average fitness]

How It Works

Replicator dynamics: dx_i/dt = x_i * (f_i - f_avg), where x_i is strategy proportion and f_i is its fitness against the current population mix.

  • Hawk-Dove: Hawks fight for resources, Doves share. Too many Hawks means costly fights -- equilibrium emerges at a mixed population.
  • Rock-Paper-Scissors: Cyclic dominance with no stable equilibrium. Populations orbit forever (or spiral inward/outward).
  • Coordination: Two equilibria (all A or all B). Which one depends on initial conditions -- path dependence matters.
  • Key: The payoff structure of the game determines whether we get stable mixing, cycling, or winner-take-all.
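The Hawk-Dove case above can be sketched numerically. This is a minimal Euler integration of the replicator equation, with illustrative payoff parameters (resource value V=2, fight cost C=4, chosen here for the example); with V < C, the population converges to the mixed equilibrium where the Hawk fraction equals V/C:

```python
import numpy as np

def replicator_step(x, A, dt=0.01):
    """One Euler step of dx_i/dt = x_i * (f_i - f_avg)."""
    f = A @ x            # fitness of each strategy against the current mix
    f_avg = x @ f        # population-average fitness
    x = x + dt * x * (f - f_avg)
    return x / x.sum()   # guard against numerical drift

# Hawk-Dove payoffs: rows/columns are (Hawk, Dove)
V, C = 2.0, 4.0
A = np.array([[(V - C) / 2, V],
              [0.0,         V / 2]])

x = np.array([0.9, 0.1])             # start Hawk-heavy
for _ in range(20000):
    x = replicator_step(x, A)

print(x)  # Hawk fraction settles near V/C = 0.5
```

Starting Dove-heavy instead reaches the same mix, which is what distinguishes Hawk-Dove's stable interior equilibrium from Coordination's path-dependent lock-in.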

Real World Examples

  • 🦅 Hawk-Dove in animal territorial contests (escalation vs retreat)
  • 🏢 Aggressive vs cooperative firms competing in markets
  • 🎓 Cultural evolution of social norms over generations
  • 💊 Antibiotic resistance cycling through bacterial populations
  • 👔 Fashion cycles -- styles go in and out as they become too common
  • 🌍 Evolutionary biology -- frequency-dependent selection

Key Insight

"The structure of the game determines everything. Hawk-Dove reaches a stable mix, Rock-Paper-Scissors cycles forever, and Coordination locks in -- the same evolutionary dynamics, but three completely different outcomes depending on the payoff matrix."

10b. Fisher's Fundamental Theorem of Natural Selection

The rate of increase in mean fitness of a population equals the genetic variance in fitness. More variation means faster adaptation -- but this is not the same as Six Sigma's "reduce variation to improve quality."

Controls

[Simulator displays Fisher's theorem, d(mean fitness)/dt = Var(fitness), with live readouts for generation, mean fitness, variance, and adaptation rate]

How It Works

Fisher's Fundamental Theorem (1930): the rate of increase in mean fitness of a population at any time equals its additive genetic variance in fitness at that time.

  • High variance = fast adaptation: A diverse population adapts quickly because selection has more material to work with.
  • Low variance = slow adaptation: An inbred, homogeneous population improves slowly -- even if current average fitness is high.
  • Selection reduces variance: Paradox -- adaptation consumes the very variation that drives it, slowing itself down over time.
  • vs Six Sigma: Six Sigma reduces variation in output (errors). Fisher says variation in approaches (exploration) speeds improvement. Both are correct -- vary strategies, standardize execution.
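A minimal sketch of the theorem, under the simplifying assumption of fixed type fitnesses evolving by replicator dynamics (fitness values drawn at random purely for illustration). With constant fitnesses the identity d(mean fitness)/dt = Var(fitness) holds exactly, and running selection longer shows the paradox in the third bullet: variance shrinks as it is consumed:

```python
import numpy as np

rng = np.random.default_rng(0)
f = rng.uniform(0.5, 1.5, size=50)   # fixed fitness of 50 types
x = np.full(50, 1 / 50)              # start with equal frequencies
dt = 1e-3

mean_before = x @ f
var_before = x @ (f - mean_before) ** 2

x = x + dt * x * (f - mean_before)   # one replicator step
x /= x.sum()
rate = (x @ f - mean_before) / dt
print(rate, var_before)              # rate of mean-fitness gain equals variance

# Run longer: selection raises mean fitness while eating the variance
for _ in range(20000):
    m = x @ f
    x = x + dt * x * (f - m)
    x /= x.sum()
var_later = x @ (f - x @ f) ** 2
print(var_later < var_before)        # adaptation consumed its own fuel
```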

Real World Examples

  • 🧬 Diverse vs inbred populations -- cheetahs (low genetic diversity) adapt slowly to new diseases
  • 🚀 Startup ecosystems -- many diverse companies means faster innovation overall
  • 💡 Brainstorming -- the divergent thinking phase generates variation for selection
  • 🎯 A/B testing -- more variants tested means faster optimization
  • 🦠 Immune system diversity -- broader antibody repertoire fights novel pathogens better
  • 🏭 City economies -- diverse industry mix weathers shocks better (Jane Jacobs)

Key Insight

"Variation is the fuel of adaptation. Fisher's theorem says: more diversity = faster improvement. Six Sigma says: less variation = higher quality. Both are right -- you want variation in STRATEGIES (exploration) but consistency in EXECUTION (exploitation)."

10c. Many-Model Thinking / Diversity Prediction Theorem

The Diversity Prediction Theorem proves mathematically: Crowd Error = Average Individual Error - Prediction Diversity. A diverse crowd always does at least as well as its average member, and any diversity at all makes it strictly better. The key is diversity of thought, not just quantity.

Controls

[Interactive controls: number of models; live readouts for crowd error, average individual error, diversity, and improvement, illustrating Crowd Error = Avg Individual Error - Prediction Diversity]

How It Works

The Diversity Prediction Theorem is an algebraic identity (not a statistical approximation -- it is always exactly true):

  • Crowd Error = squared error of the crowd's average prediction from truth
  • Avg Individual Error = average of each model's squared error from truth
  • Prediction Diversity = average squared distance of individual predictions from the crowd average
  • Since diversity is always >= 0, the crowd ALWAYS does at least as well as the average individual.
  • Key insight: Adding a model that thinks DIFFERENTLY helps more than adding a copy of the best model.
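Because the theorem is an algebraic identity, it can be checked on any numbers at all. A small sketch with arbitrary made-up predictions (the truth value and noise scale here are purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
truth = 10.0
predictions = truth + rng.normal(0, 3, size=7)  # 7 "models" with diverse errors

crowd = predictions.mean()
crowd_error = (crowd - truth) ** 2                       # squared error of the average prediction
avg_indiv_error = np.mean((predictions - truth) ** 2)    # average squared individual error
diversity = np.mean((predictions - crowd) ** 2)          # spread around the crowd average

# Identity: Crowd Error = Avg Individual Error - Prediction Diversity
print(crowd_error, avg_indiv_error - diversity)
```

Swapping in any other predictions, even wildly biased ones, leaves the two printed numbers equal; that is what makes it an identity rather than a statistical tendency.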

Real World Examples

  • 🌤 Weather forecasting -- ensemble of different atmospheric models beats any single one
  • 🎬 Netflix Prize -- winning team combined 800+ different models
  • 📈 Prediction markets -- aggregate diverse opinions into surprisingly accurate forecasts
  • 🩹 Medical diagnosis -- panels of specialists with different training outperform lone experts
  • 🗳 Election forecasting -- 538/Economist aggregate many polls with different methodologies
  • 👥 Jury decisions -- diverse juries make better decisions than homogeneous ones

Key Insight

"Diversity trumps ability. The 10th-best model that thinks differently contributes more to the crowd's accuracy than a second copy of the best model. This is mathematically guaranteed by the Diversity Prediction Theorem -- and it's why diverse teams, ensemble methods, and prediction markets consistently outperform individual experts."