A Neuro-Symbolic Architecture for Industrial Cognition
Last Updated on December 9, 2025 by Editorial Team
Author(s): Carlos Eduardo Favini
Originally published on Towards AI.

1. The Semantic Ceiling: Why Industrial AI Stalls
After two decades of investment, roughly 70% of digital transformation initiatives fail to scale beyond pilot stages (McKinsey, 2023). Industry 4.0 delivered connectivity — sensors, networks, data lakes — but not cognition. The result: dashboards that monitor but don’t decide, models that predict but don’t understand, and automation that breaks when context shifts.
The core problem is architectural. Current systems process data types — predefined categories like “image,” “text,” or “sensor reading.” But operational reality doesn’t arrive in neat categories. A technical drawing encodes spatial intention. A gesture encodes operational command. A vibration pattern encodes mechanical state. These aren’t “data types” — they are signals carrying semantic potential.
To move from connectivity to cognition, we need an architecture that can perceive signals regardless of format — including formats that don’t yet exist. It must extract intention from structure, not just pattern from data. It must evaluate decisions through multiple cognitive lenses simultaneously. And it must learn and evolve operational knowledge over time.
This article presents such an architecture — a neuro-symbolic framework that bridges the gap between raw signals and intelligent action.
2. The Sensory Cortex: Carrier-Agnostic Perception
The first innovation is a perception layer that separates what carries a signal from what the signal means. We call this the Sensory Cortex.
Traditional systems ask: “What data type is this?” The Sensory Cortex asks: “Is there structure here? And if so, does that structure carry intention?”
This reframing enables processing of signals that weren’t anticipated at design time — a critical capability for industrial environments where new sensor types, protocols, and formats emerge continuously.
The Abstraction Hierarchy
The Sensory Cortex operates through five levels of abstraction:
Level 0 — Carrier: The physical or digital substrate transporting the signal. Electromagnetic (light, radio), mechanical (vibration, pressure), chemical (molecular), digital (bits), or unknown.
Level 1 — Pattern: Detectable regularities within the carrier. Spatial structures (2D, 3D, nD), temporal sequences (rhythm, frequency), relational networks (graphs, hierarchies), or hybrid combinations.
Level 2 — Structure: Non-random organization suggesting information content. Repetition, symmetry, compressibility — entropy below noise threshold indicating that something meaningful exists.
Level 2.5 — Proto-Agency: The critical bridge between structure and meaning. Does the structure suggest encoded agenda? This is not meaning itself, but the suspicion that meaning exists. Indicators include functional asymmetry (purposeful interruption of symmetry), oriented compression (patterns that “point” toward something), transform invariants (persistence across carrier changes), and apparent cost (structure too expensive to arise by chance).
Level 3 — Semantics: If proto-agency is detected, attempt meaning extraction. The key question is not “what is this?” but “what does this allow to be done?”
The concept of Proto-Agency (Level 2.5) is novel. Traditional systems jump directly from “pattern detected” to “meaning assigned.” The Sensory Cortex introduces an intermediate step: detecting the suspicion of intention before attempting interpretation. This prevents false semantic attribution to random structure while enabling recognition of genuinely purposeful signals.

Implementation: The SensoryCortex Class
from dataclasses import dataclass
from enum import Enum
from typing import Optional, Dict, Any

class CarrierType(Enum):
    ELECTROMAGNETIC = "electromagnetic"
    MECHANICAL = "mechanical"
    DIGITAL = "digital"
    UNKNOWN = "unknown"

@dataclass
class PerceptionResult:
    carrier: CarrierType
    pattern_type: str
    structure_score: float       # [0, 1] non-randomness
    proto_agency_score: float    # [0, 1] suspicion of intention
    semantic_potential: Optional[Dict[str, Any]] = None

class SensoryCortex:
    """Carrier-agnostic perception layer."""

    def perceive(
        self,
        signal: bytes,
        metadata: Optional[Dict] = None,
    ) -> PerceptionResult:
        carrier = self._detect_carrier(signal, metadata)
        pattern = self._extract_pattern(signal, carrier)
        structure_score = self._analyze_structure(pattern)
        proto_agency = self._detect_proto_agency(pattern, structure_score)

        semantics = None
        if proto_agency > 0.6:  # threshold for semantic extraction
            semantics = self._extract_semantics(pattern, carrier)

        return PerceptionResult(
            carrier,
            pattern.type,
            structure_score,
            proto_agency,
            semantics,
        )
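The helper methods referenced above (`_detect_carrier`, `_analyze_structure`, `_detect_proto_agency`, and so on) are left undefined. As one illustrative sketch, the Level 2 and Level 2.5 scores could be approximated with compressibility, written here as standalone functions; the heuristics are assumptions for demonstration, not the framework's actual metrics:

```python
import zlib

def analyze_structure(pattern: bytes) -> float:
    """Level 2: score non-randomness via compressibility.

    Repetition and symmetry make a byte stream compressible;
    noise does not compress.
    """
    if not pattern:
        return 0.0
    ratio = len(zlib.compress(pattern)) / len(pattern)
    # ratio near 1.0 -> noise-like; near 0.0 -> highly structured
    return max(0.0, min(1.0, 1.0 - ratio))

def detect_proto_agency(pattern: bytes, structure_score: float) -> float:
    """Level 2.5: a crude stand-in for 'suspicion of intention'.

    The real indicators (functional asymmetry, oriented compression,
    transform invariants, apparent cost) require domain models; this
    sketch merely damps the structure score for trivially repetitive
    patterns, since pure repetition carries little encoded agenda.
    """
    if structure_score == 0.0:
        return 0.0
    diversity = len(set(pattern)) / 256.0  # symbol diversity in [0, 1]
    return structure_score * min(1.0, 0.5 + diversity)
```

A highly repetitive stream scores high on structure but is discounted at the proto-agency step, mirroring the distinction between "organized" and "intentional."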
3. The Cognitive Core: Four Parallel Motors
Once signals are perceived and semantics extracted, decisions must be made. Traditional systems evaluate decisions sequentially: safety check → governance check → inference → selection. This creates bottlenecks and loses critical information about why a decision is good or bad from different perspectives.
The Cognitive Core takes a different approach: four specialized “motors” evaluate every input simultaneously, each providing a score from a distinct cognitive lens:
Praxeological Motor: Does this action realize its intention? This motor evaluates means-end coherence, asking whether the proposed action actually achieves the stated goal. It is rooted in the logic of purposeful human action — the science of what works.
Nash Motor: Does this produce equilibrium? In complex systems, multiple stakeholders have competing objectives: production versus safety, short-term efficiency versus long-term maintenance. This motor finds Nash equilibria — stable states where no party can improve their position unilaterally.
Chaotic Motor: Is this robust to perturbation? Small changes can cascade into catastrophic failures. This motor performs sensitivity analysis, identifies strange attractors, and maps failure modes before they manifest.
Meristic Meta-Motor: What patterns exist across scales? Operating simultaneously at micro, meso, and macro levels, this motor detects recurring structures, generates variant hypotheses, and imagines what should exist but doesn’t yet. It proposes but never decides — creativity under containment.
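Since all four motors are evaluated the same way, they can share a common interface. The following is a hedged sketch of that interface with a toy Praxeological check (the `MotorScore` fields and the comparison logic are illustrative assumptions, not the framework's implementation):

```python
from dataclasses import dataclass
from typing import Any, Protocol

@dataclass
class MotorScore:
    score: float       # [0, 1]
    explanation: str   # why the motor scored this way

class Motor(Protocol):
    """Common interface every cognitive motor implements."""
    def evaluate(self, intent: Any, context: dict) -> MotorScore: ...

class PraxeologicalMotor:
    """Toy means-end check: does the action's predicted effect
    match the intended target state?"""

    def evaluate(self, intent: Any, context: dict) -> MotorScore:
        achieves = context.get("predicted_state") == intent.get("target_state")
        return MotorScore(
            score=1.0 if achieves else 0.0,
            explanation="action realizes intention" if achieves
            else "predicted state diverges from target",
        )
```

Each motor returns a score plus an explanation, so a decision is never just a number: the system can always report which lens objected and why.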
4. Craft Performance: Product, Not Sum
The four motors produce scores in the interval [0,1]. How should these be combined into a single decision metric?
The intuitive approach is weighted averaging: CP = 0.3×P + 0.3×N + 0.2×C + 0.2×M. This approach is fundamentally wrong.
Consider this scenario: the Praxeological score is 0.95 (excellent intent alignment), the Nash score is 0.90 (good equilibrium), the Chaotic score is 0.85 (robust to perturbation), but the Meristic score is 0 (the Meta-Motor detects a fundamental pattern violation that the other motors missed). A simple average of these scores is about 0.68, and the weighted formula above yields roughly 0.73. Either way, the system would proceed with what appears to be a “moderately good” decision.
But any motor scoring zero represents a categorical rejection. No amount of excellence in three dimensions compensates for fundamental failure in one.
This is what I call the “yen example”: if you have 1 million yen and I have zero, our “average” wealth of 500,000 yen is a statistical lie. You dine; I starve. The average obscures the reality that one party has nothing.
Therefore, Craft Performance is calculated as a product:
CP = Score_P × Score_N × Score_C × Score_M
This creates an absolute veto property: any single zero collapses the entire score to zero. Excellence requires all motors to agree. There is no compensation, no averaging away of failure.
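The arithmetic behind the veto is easy to verify. This short sketch runs the scenario above through both aggregation rules:

```python
# Motor scores from the scenario: Meristic issues a categorical rejection
scores = {"P": 0.95, "N": 0.90, "C": 0.85, "M": 0.0}
weights = {"P": 0.3, "N": 0.3, "C": 0.2, "M": 0.2}

# Averaging: the Meristic veto is diluted away
weighted_avg = sum(weights[k] * scores[k] for k in scores)  # ~0.73
simple_avg = sum(scores.values()) / len(scores)             # ~0.68

# Product: any single zero collapses the whole score
craft_performance = 1.0
for s in scores.values():
    craft_performance *= s                                  # 0.0
```

Both averages report a plausible-looking decision; only the product preserves the categorical rejection.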

Implementation: Parallel Motor Evaluation
from concurrent.futures import ThreadPoolExecutor
from functools import reduce
import operator

class CognitiveCore:
    def __init__(self):
        self.motors = {
            'praxeological': PraxeologicalMotor(),
            'nash': NashMotor(),
            'chaotic': ChaoticMotor(),
            'meristic': MeristicMetaMotor(),
        }

    def evaluate(self, intent, context):
        # Parallel evaluation: all four motors run concurrently
        with ThreadPoolExecutor(max_workers=4) as executor:
            # Submit one evaluation task per motor
            futures = {
                name: executor.submit(motor.evaluate, intent, context)
                for name, motor in self.motors.items()
            }
            # Gather results once completed
            scores = [futures[name].result() for name in self.motors]

        # Craft Performance = PRODUCT (not sum):
        # any single zero collapses the total to zero (absolute veto)
        craft_performance = reduce(
            operator.mul, [s.score for s in scores], 1.0
        )
        return craft_performance, scores
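To see the `evaluate` mechanics end to end without the real motors, here is a self-contained sketch using a hypothetical `StubMotor` with fixed scores (for illustration only):

```python
from concurrent.futures import ThreadPoolExecutor
from dataclasses import dataclass
from functools import reduce
import operator

@dataclass
class Score:
    score: float

class StubMotor:
    """Hypothetical motor returning a fixed score, for illustration."""
    def __init__(self, score: float):
        self._score = score

    def evaluate(self, intent, context) -> Score:
        return Score(self._score)

motors = {
    'praxeological': StubMotor(0.95),
    'nash': StubMotor(0.90),
    'chaotic': StubMotor(0.85),
    'meristic': StubMotor(0.0),  # categorical rejection
}

# Same pattern as CognitiveCore.evaluate: concurrent submission,
# then a product over the gathered scores
with ThreadPoolExecutor(max_workers=4) as executor:
    futures = {n: executor.submit(m.evaluate, None, {})
               for n, m in motors.items()}
    scores = [futures[n].result() for n in motors]

craft_performance = reduce(operator.mul, [s.score for s in scores], 1.0)
# The Meristic zero vetoes the decision: craft_performance == 0.0
```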
5. The Operational Genome: Knowledge as Living Structure
The Cognitive Core reasons over a knowledge base we call the Operational Genome. The biological metaphor is intentional but strictly architectural: we use genomic terminology to describe inheritance and composition patterns, not to imply biological processes.
Codon: The atomic unit of operational intention. Structure: [Entity | Action | Target-State]. Example: [Valve-401 | Close | Isolated].
Gene: A sequence of codons forming a complete operational procedure. Contains preconditions, instructions, exceptions, and success criteria.
Genome: The complete library of genes for an operational domain. Not static documentation — a living structure that evolves through use.
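These three levels can be sketched as plain dataclasses. Field names beyond the codon triple follow the descriptions above but are illustrative; `Genome` persistence and evolution are omitted:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass(frozen=True)
class Codon:
    """Atomic unit of operational intention:
    [Entity | Action | Target-State]."""
    entity: str
    action: str
    target_state: str

@dataclass
class Gene:
    """A complete operational procedure: a sequence of codons plus
    the conditions under which it applies and succeeds."""
    name: str
    codons: List[Codon]
    preconditions: List[str] = field(default_factory=list)
    exceptions: List[str] = field(default_factory=list)
    success_criteria: List[str] = field(default_factory=list)

# The example from the text: isolate Valve-401
isolate = Codon(entity="Valve-401", action="Close", target_state="Isolated")
procedure = Gene(name="isolate-valve-401", codons=[isolate])
```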
Critically, the Genome encodes two distinct types of truth:
Registered Truth (blockchain): Immutable records of what actually happened. Contextual, historical, crystallized. Foucauldian truths — situated and particular.
Synthesized Truth (DNA patterns): Plastic approximations of ideal patterns. Universal, calculated, evolving. Platonic truths — aspirational forms we approach but never reach.
6. The Complete Decision Flow
Bringing together the Sensory Cortex, Cognitive Core, and Operational Genome, the complete architecture forms a closed cognitive loop. Signals from the real world are perceived, evaluated, acted upon, and the outcomes feed back to improve the system’s knowledge.

Implementation: The Closed Cognitive Loop
# Initialize the cognitive system
cortex = SensoryCortex()
core = CognitiveCore()

# Load the specific unit's genome
genome = Genome.load("./assets/refinery/unit-42.json")

def cognitive_loop(incoming_signal, metadata, context):
    # Step 1: Perceive the incoming signal
    perception = cortex.perceive(incoming_signal, metadata)

    # Step 2: Check the proto-agency threshold
    if perception.proto_agency_score < 0.6:
        genome.store_unresolved(perception)
        return

    # Steps 3-4: Match intent to candidate genes
    intent = perception.semantic_potential
    candidates = genome.match(intent, telemetry.current_state())

    # Step 5: Evaluate through PARALLEL motors, producing
    # a list of (gene, craft_performance, motor_scores)
    evaluated = [
        (gene, *core.evaluate(gene, context))
        for gene in candidates
    ]

    # Steps 6-7: Select the gene with the highest Craft Performance
    best_gene, best_cp, _ = max(evaluated, key=lambda x: x[1])

    if best_cp > 0.5:
        outcome = orchestrator.execute(best_gene)
        # Register truth (blockchain)
        genome.register_truth(best_gene, outcome)
        # Update fitness (evolution)
        genome.update_fitness(best_gene, outcome)
7. Implications for Industry 5.0
Industry 5.0 — as articulated by the European Commission — emphasizes three pillars: human-centricity, sustainability, and resilience. Each requires capabilities that current architectures cannot provide.
Human-centricity requires understanding human expression — gestures, glances, implicit intentions. The Sensory Cortex enables perception of embodied communication by separating carrier from meaning.
Sustainability requires balancing competing objectives across time horizons. The Nash Motor finds equilibria between immediate efficiency and long-term resource preservation.
Resilience requires detecting novel perturbations. The Chaotic Motor identifies sensitivity dependencies; the Meristic Meta-Motor imagines failure modes before they occur.
The architecture presented here is a foundation — a structural substrate upon which industrial cognition can be built. But the core insight stands: structure precedes meaning, and meaning emerges from potential action. Systems that understand this will define the next industrial era.
8. Open Research Questions
Federation: How can operational genomes be shared across organizations while preserving competitive advantage?
Proto-Agency Formalization: What are the mathematical foundations for distinguishing purposeful structure from complex randomness?
Motor Calibration: Is the product function universally appropriate, or are there contexts requiring alternative aggregation?
Safety Governance: What regulatory frameworks ensure that autonomous knowledge evolution improves rather than degrades safety?
These questions define the frontier. The architecture provides a foundation for exploring them.
About the Author: Carlos Eduardo Favini researches neuro-symbolic architectures for industrial cognition. His work spans three decades of operational experience, from offshore platforms to surgical centers. He is the author of “The Digital Genome.” Connect on LinkedIn or explore the framework on GitHub.