![A Layman’s Guide to Complex Systems: Introducing MLM & Mantic](https://i3.wp.com/miro.medium.com/v2/resize:fit:612/1*8lS8hgc--T9RwVmseBCnSA.jpeg?w=1920&resize=1920,1267&ssl=1)
A Layman’s Guide to Complex Systems: Introducing MLM & Mantic
Last Updated on February 11, 2025 by Editorial Team
Author(s): Cole Williams
Originally published on Towards AI.
Fitting MLM & Mantic Into the Pattern Architecture
Season 1, Episode 3
Introduced a framework for understanding complex systems through the following lenses:
- Linear Progression represents a sequential, cause-and-effect pathway (e.g., data to reward or past to future) — A → B → C → D → E → F
- Network Dynamics highlight interrelated parts and feedback loops, showing complex interactions in systems — A ↗ ↘ F | B ↑ ↓ E ← C ↖ ↙ D
- Paired Relationships show mutual influence and reciprocity between key components — A ⇄ D | B ⇄ E | C ⇄ F
- Data-Behavioral Systems that model how information, actions, and outcomes interact in a dynamic, interconnected process
- Temporal-Spatial Systems that model how time and space interact to shape experiences, transitions, and connections
- Bio-Cognitive Systems that model how knowledge, evolution, and adaptation interact to create growth, stasis, and emergence within cognitive and biological processes
The core argument is that understanding emerges not from rigid categorization, but from recognizing dynamic interactions, feedback loops, and reciprocal relationships.
Central to the framework is the concept of emergence: how these interactions create properties that transcend their individual components. Just as data and incentives can create value that wasn’t present in either alone, the interaction of past and present can create possibilities that neither timeframe contained independently.
“The interaction of these elements creates properties that transcend their individual components, whether in the form of value creation or spatiotemporal experience.”
Attempted to definitively break the Pattern Framework by finding a system that cannot be mapped using its three core principles.
Key Methodology:
- Rigorous testing and validation (formal proofs)
- Exploring the intellectual heritage from Ancient Greek philosophical reasoning
- Challenging the framework’s universality through systematic break attempts
Paradoxical Findings:
- Every attempt to break the framework actually reinforced it
- The act of describing a system without patterns inevitably introduces patterns
- The framework appears to capture something fundamental about reality and cognition
Insights:
- Patterns exist at every scale of reality
- Human understanding is inherently pattern-seeking
- The framework is more adaptive and flexible than initially conceived
Expansion Models: Introduced eight alternative models that extend the original framework:
- Divergent-Convergent Network
- Cyclic Feedback Model
- Hierarchical Cascade Model
- Asymmetric Influence Model
- Starburst Interaction Model
- Recursive Growth Model
- Interdependent Loops Model
- Temporal-Spatial Grid
Philosophical Implications:
- The framework may not just model systems but might be intrinsic to how reality and cognition function
- Breaking the framework is potentially impossible due to the fundamental nature of pattern recognition
“Every attempt so far has reinforced rather than broken the framework. It demonstrates the framework’s flexibility, adaptability, and relevance across diverse domains.”
The episode transforms from a challenge to break a conceptual framework into a profound exploration of how humans understand complexity, suggesting that patterns are not just a tool of analysis but potentially a fundamental characteristic of existence itself.
E3: The Why, For Layman
I quit my job at Google about 8 months ago because I wanted to travel and work on my own projects. Essentially an attempt to make sense of the world and what I perceive as nothingness.
I eventually came to the realization that nothing is actually impossible; rather, we fit everything into some sort of box in order to understand it. That box is typically too complex for others to understand, and it gets run through a litany of unnecessary systems to prove its worth or “rigor”.
Exploring further, I realized that there was a common theme, or pattern, that seemed to hold true throughout time, all the way back to ancient civilizations.
Our drive to understand, to realize, to make sense of this world created a form of value for every civilization before us, whether Ancient Egypt, Rome, or Mesopotamia.
Mesopotamia, for example, was fundamentally shaped by its development of astronomical observation and mathematical systems: it gave us the 60-minute hour, the 360-degree circle, and some of the earliest written laws. The population was motivated by a deep belief that understanding these patterns would help them align with divine will and achieve prosperity.
What made this particularly powerful was how they integrated this understanding into practical systems:
- Their priests/astronomers tracked celestial movements to predict floods and plan agriculture
- They developed complex writing systems (cuneiform) to record both practical and spiritual knowledge
- They built massive ziggurats as physical manifestations of their desire to bridge earth and heaven
- Their mathematical advances were driven by both practical needs (commerce) and spiritual beliefs (numerology)
But the reality is that all of these civilizations fell due to eventual bottlenecks in knowledge transfer. Whether in law, religion, or math, the drive to understand and build these systems eventually created a form of knowledge inequality that made these societies too rigid and incapable of critical thinking, and it ultimately became the main driver of their collapse. Every. Single. One.
Priesthoods and Specialized Classes
- In Egypt, the priestly class closely guarded hieroglyphic knowledge
- Mesopotamian astronomical knowledge was confined to temple complexes
- Roman legal knowledge became increasingly concentrated among a small elite class of jurists
The Institutionalization of Understanding
The process often followed a similar pattern:
- Initial insight/understanding leads to system creation
- System becomes codified and ritualized
- Gatekeepers emerge to “protect” the knowledge
- Original understanding gets lost in bureaucracy and ritual
- Society loses ability to adapt as understanding calcifies into dogma
The Self-Reinforcing Cycle
- Knowledge holders gain power
- Power is used to restrict knowledge further
- Society becomes increasingly stratified
- Innovation and adaptation become threatening to established order
- Critical thinking is discouraged or even punished
In the present day, we see the same patterns emerging:
The Technology Sector presents a perfect modern parallel. What started as a democratizing force (personal computers, open source, the internet) has evolved into increasingly complex, specialized knowledge systems:
- Cloud infrastructure so complex that entire companies depend on a handful of AWS architects
- AI systems that are becoming black boxes even to their creators
- Programming languages and frameworks that fragment knowledge into ever-more specialized niches
- Tech giants building walled gardens of proprietary knowledge and systems
The Financial System shows similar patterns:
- Complex derivatives and financial instruments that even experts struggle to fully understand
- High-frequency trading algorithms that operate beyond human comprehension
- Regulatory frameworks so intricate that compliance becomes its own specialized industry
- Cryptocurrency promising decentralization but creating new forms of technical gatekeeping
Academia and Research:
- Hyperspecialization making cross-disciplinary understanding increasingly rare
- Paywalled research preventing knowledge dissemination (RIP Aaron Swartz)
- Grant systems that reward incremental improvements over revolutionary thinking
- Peer review processes that can enforce orthodoxy over innovation
Even the Legal System:
- Laws so complex that justice becomes inaccessible without expensive specialists
- Patent systems that protect incumbents rather than innovation
- Regulatory compliance requiring armies of specialized consultants
- Legal language becoming increasingly divorced from common understanding
The parallel to ancient civilizations is striking — in each case, systems created to solve problems become so complex they begin creating new ones. The question becomes: in an age of unprecedented information access, are we paradoxically building new knowledge bottlenecks?
The answer is yes.
So, how do we fix it? It is impossible to break the systems of today outright, given their complexity, bureaucracy, and sheer hold on society. They are integrated at every level: Micro, Meso, Macro, and Meta. But it is possible to break such patterns at the Micro level and watch those pattern breaks ripple through to the Meta level, creating emergent value otherwise unseen, or never intended to be known.
Micro: Individual or localized effects.
Meso: Group-level or regional dynamics.
Macro: System-wide impacts.
Meta: Long-term evolution and paradigm shifts.
Multi-Layer Model
MLM = Σ(Wᵉ ⋅ Lᵉ ⋅ Iᵉ) ⋅ eⁿα
Where:
- Wᵉ (Weight): Represents the importance or significance of a specific layer or factor in the calculation. This suggests the formula is likely used in a multi-layered or multi-factor analysis.
- Lᵉ (Layer): Indicates the specific value or contribution of a particular layer. The formula is dealing with a complex system where different layers or components have distinct impacts. Micro, Meso, Macro, Meta.
- Iᵉ (Interaction): Captures the interaction or influence factor between different elements. The formula accounts for interdependencies or synergistic effects between layers.
- α (Growth/Scaling Factor): A parameter that controls the rate of growth or scaling of the overall calculation. This represents exponential growth, decay, or some other transformative process.
- t (Time/Evolution Stage): Represents the temporal dimension or stage of evolution in the system being analyzed.
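To make the structure concrete, here is a minimal sketch of the formula in code. The function name, the component objects, and the example values are my own illustration, not part of the model:

```javascript
// Minimal sketch of the MLM formula: sum the W·L·I products across
// components, then scale the total by the exponential term e^(n·α).
// Parameter names mirror the definitions above; the values are hypothetical.
function mlm(components, n, alpha) {
  const base = components.reduce((sum, c) => sum + c.W * c.L * c.I, 0);
  return base * Math.exp(n * alpha);
}

// A single component with moderate weight, layer value, and interaction:
console.log(mlm([{ W: 0.9, L: 0.7, I: 0.8 }], 2, 0.5).toFixed(2)); // "1.37"
```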
Layman’s Version:
The Summation Σ represents the overall framework trying to capture different types of patterns:
- Linear Progression
- Network Dynamics
- Paired Relationships
For each attempted pattern break:
- Wᵉ (Weight) could represent how fundamental the break is
- Lᵉ (Layer) could represent which aspect of the framework it breaks
- Iᵉ (Interaction) could show how the break affects other parts of the framework
The exponential term eⁿα might represent how attempted breaks often create meta-patterns:
- As we try harder to break patterns (increasing n)
- We often end up creating more complex patterns (exponential growth)
- The α could represent how readily our break attempts become new patterns
Irony: The formula itself demonstrates why breaking patterns is so hard — even our attempts to measure “pattern-breaking” require a patterned mathematical framework
Quantifying pattern break attempts using:
MLM = Σ(Wᵉ ⋅ Lᵉ ⋅ Iᵉ) ⋅ eⁿα
Let’s take a few example break attempts and assign values:
Quantum Entanglement Break Attempt:
- Wᵉ = 0.9 (high weight because it’s a fundamental physical phenomenon)
- Lᵉ = 0.7 (strong break of classical causality)
- Iᵉ = 0.8 (strongly affects multiple framework aspects)
- n = 2 (creates second-order patterns)
- α = 0.5 (moderate tendency to create new patterns)
MLM = (0.9 × 0.7 × 0.8) × e^(2×0.5) ≈ 1.37
Pure Randomness Break Attempt:
- Wᵉ = 0.3 (low weight because randomness often has hidden patterns)
- Lᵉ = 0.4 (weak break since randomness follows statistical patterns)
- Iᵉ = 0.5 (moderate interaction with framework)
- n = 1 (creates first-order patterns)
- α = 0.8 (high tendency to reveal statistical patterns)
MLM = (0.3 × 0.4 × 0.5) × e^(1×0.8) ≈ 0.13
Gödel’s Incompleteness Break Attempt:
- Wᵉ = 0.95 (very high weight as it’s a mathematical proof)
- Lᵉ = 0.8 (strong break of systematic completeness)
- Iᵉ = 0.9 (affects entire logical framework)
- n = 3 (creates multiple levels of meta-patterns)
- α = 0.3 (lower tendency to create new patterns due to logical rigor)
MLM = (0.95 × 0.8 × 0.9) × e^(3×0.3) ≈ 1.68
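These three arithmetic results can be double-checked with a few lines of code; this is a direct transcription of the values assigned above (the helper name is mine):

```javascript
// Recompute each break-attempt score as (W · L · I) · e^(n·α)
const score = (W, L, I, n, alpha) => W * L * I * Math.exp(n * alpha);

console.log(score(0.9, 0.7, 0.8, 2, 0.5).toFixed(2));  // quantum entanglement: "1.37"
console.log(score(0.3, 0.4, 0.5, 1, 0.8).toFixed(2));  // pure randomness: "0.13"
console.log(score(0.95, 0.8, 0.9, 3, 0.3).toFixed(2)); // Gödel's incompleteness: "1.68"
```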
What this quantification reveals:
- Higher MLM values (like Gödel’s attempt) suggest more rigorous breaks
- The exponential term often dominates, showing how break attempts create new patterns
- Even our strongest breaks (quantum, Gödel) still generate significant pattern values
“The paradox of innovation isn’t in breaking patterns, but in realizing that every escape attempt creates new patterns. True creativity isn’t about leaving the box — it’s about understanding how our attempts to leave it make it grow.”
Adding the time dimension (t) to the exponential term makes this even more fascinating:
MLM = Σ(Wᵉ ⋅ Lᵉ ⋅ Iᵉ) ⋅ eⁿαᵗ
The Time Effect:
- Now pattern creation compounds over time
- Early pattern breaks have more time to generate new patterns
- Small changes can have massive long-term effects due to eⁿαᵗ
Taking Gödel’s Example Again:
- Wᵉ = 0.95
- Lᵉ = 0.8
- Iᵉ = 0.9
- n = 3
- α = 0.3
- t = 100 years since publication
This would make the exponential term much larger due to the time factor, showing how this pattern-break attempt has generated ever more patterns over the decades in mathematics and logic.
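The dominance of the time factor is easy to see by evaluating the bare exponential term eⁿαᵗ with those values of n and α at a few horizons (the choice of horizons is mine, kept small so the numbers stay readable):

```javascript
// The exponential term e^(n·α·t) for n = 3, α = 0.3 at increasing t.
const term = (n, alpha, t) => Math.exp(n * alpha * t);

for (const t of [1, 5, 10]) {
  console.log(`t=${t}: ${term(3, 0.3, t).toFixed(1)}`);
}
// Grows from ≈2.5 at t=1 to ≈90 at t=5 to ≈8103 at t=10
```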
Thinking About Modern Examples:
- Bitcoin trying to “break” traditional finance:
  - Shorter t means less pattern accumulation so far
  - But high n and α suggest rapid pattern generation
  - The patterns are still emerging and multiplying
Implications:
- Pattern breaks don’t just create new patterns
- They create pattern-generating systems that evolve over time
- The longer a pattern break exists, the more patterns it spawns
- This might explain why ancient pattern breaks (like in philosophy or mathematics) have generated such rich pattern ecosystems
Let’s analyze the emergence of social media influence as a value system:
Early Stage (2008–2012):
- Wᵉ = 0.3 (limited initial impact)
- Lᵉ = 0.4 (affecting mainly young demographics)
- Iᵉ = 0.6 (moderate interaction with traditional marketing)
- Small t, but high α (rapid adoption rate)
Current Stage (2024):
- Wᵉ = 0.8 (major economic/social impact)
- Lᵉ = 0.9 (affects multiple societal layers)
- Iᵉ = 0.95 (deeply integrated with commerce, politics, culture)
- Larger t, sustained high α
The formula could help identify emerging value systems by:
- Tracking rapid increases in interaction terms (Iᵉ)
- Monitoring when weight (Wᵉ) begins to affect multiple layers
- Observing exponential growth patterns in adoption/impact
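As a rough illustration, the base term W·L·I alone captures the shift between the two stages. The exponential factor is left out because no numeric n, α, or t is assigned for this example:

```javascript
// Compare the base W·L·I product for the two stages of social media influence.
const base = (W, L, I) => W * L * I;

const early = base(0.3, 0.4, 0.6);    // 2008–2012
const current = base(0.8, 0.9, 0.95); // 2024

console.log(early.toFixed(3));             // "0.072"
console.log(current.toFixed(3));           // "0.684"
console.log((current / early).toFixed(1)); // "9.5", i.e. ~9.5× growth before any exponential scaling
```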
```javascript
// Let's explore how small micro-level changes compound into emergence
// We'll simulate multiple small pattern breaks interacting over time
function calculateEmergence(microPatterns, t) {
  const n = 2; // Second-order effects
  const alpha = 0.01; // Very small growth rate

  // Calculate base pattern without interactions
  let noInteraction = microPatterns.reduce((sum, p) => sum + p, 0) * Math.exp(n * alpha * t);

  // Calculate with interactions (using product to model multiplicative effects)
  let withInteraction = microPatterns.reduce((prod, p) => prod * (1 + p), 1) * Math.exp(n * alpha * t);

  return {
    linear: noInteraction,
    emergent: withInteraction
  };
}

// Test with very small pattern breaks
const microBreaks = [0.001, 0.002, 0.003, 0.001, 0.002]; // Tiny pattern disruptions

console.log("Short term (t=10):");
console.log(calculateEmergence(microBreaks, 10));
console.log("\nMedium term (t=100):");
console.log(calculateEmergence(microBreaks, 100));
console.log("\nLong term (t=1000):");
console.log(calculateEmergence(microBreaks, 1000));

// Now let's test with slightly different initial conditions
const microBreaks2 = microBreaks.map(x => x * 1.001); // 0.1% difference

console.log("\nButterfly effect test (t=1000):");
const result1 = calculateEmergence(microBreaks, 1000);
const result2 = calculateEmergence(microBreaks2, 1000);
console.log("Original:", result1.emergent);
console.log("Tiny change:", result2.emergent);
console.log("Difference ratio:", result2.emergent / result1.emergent);
```
The core of this code is the `calculateEmergence` function, which models how small changes can compound over time in two different ways.

The function takes two parameters:
- `microPatterns`: an array of small numbers representing tiny pattern disruptions
- `t`: the time period to simulate

Inside the function, there are two key constants:
- `n = 2`: represents second-order effects
- `alpha = 0.01`: a very small growth rate

The function calculates two different scenarios:
- Linear/non-interactive growth (`noInteraction`):

```javascript
let noInteraction = microPatterns.reduce((sum, p) => sum + p, 0) * Math.exp(n * alpha * t);
```

This simply adds up all the patterns and applies exponential growth over time.
- Emergent/interactive growth (`withInteraction`):

```javascript
let withInteraction = microPatterns.reduce((prod, p) => prod * (1 + p), 1) * Math.exp(n * alpha * t);
```

This multiplies the patterns together (allowing them to interact) before applying growth.

The testing part of the code:
- Creates an array of very small numbers: `[0.001, 0.002, 0.003, 0.001, 0.002]`
- Tests the system at different time scales (t = 10, 100, 1000)
- Creates a second array with just a 0.1% difference to test the butterfly effect
The key insight is that in the interactive/emergent model, these tiny differences can compound dramatically over time, while in the linear model they remain proportional. This demonstrates how small changes in initial conditions can lead to large differences in complex systems over time (the butterfly effect).
The code illustrates how emergent behavior arises from the interaction between components (multiplication) versus simple addition of independent effects, showing how micro-level changes can lead to macro-level effects that are much larger than the sum of their parts.
Episode 4, and the season finale of Season 1 coming soon.
Cole
Published via Towards AI