First-Principles Statistics for Cognitive Bias
Author(s): Shenggang Li Originally published on Towards AI. A practical, model-based way to stop getting fooled by “simple health rules” online. Why do “one simple habit” posts feel so convincing? This article explores the pitfalls of oversimplified …
The Future of Enterprise AGI: Why “Hybrid Brains” Will Win
Author(s): Shenggang Li Originally published on Towards AI. How companies will mix external frontier models with private in-house brains — without sacrificing control. Picture this. The article discusses the future of enterprise-level Artificial General Intelligence (AGI), introducing …
Zero-Sum vs Positive-Sum
Author(s): Shenggang Li Originally published on Towards AI. How Two-Agent Reinforcement Learning Explains Office Politics, Status Wars, and the One Move That Turns Conflict into Compounding. On Monday morning, two people walk into the same meeting with the same goal: “make progress”. …
The Two Things Every Reliable Agent Needs
Author(s): Shenggang Li Originally published on Towards AI. Memory-first design + anti-Goodhart scoreboards for systems that don’t optimize proxies. Let me guess how your “agent” demo went. This article discusses the essential components for creating reliable AI …
Copilot vs. “Private AGI”: When Human–LLM Collaboration Is Enough (and When It Isn’t)
Author(s): Shenggang Li Originally published on Towards AI. A practical framework — with data, a little math, and field-tested workflows — for experts deciding between interactive LLM work and autonomous agent/AGI-style systems. A quiet confusion sits under most “AI at work” debates: …
RAG Isn’t AGI
Author(s): Shenggang Li Originally published on Towards AI. Why “LLM + Retrieval + Private Data” collapses outside the demo — and what real AGI-like systems actually require. Over the last two years, a new corporate myth has spread faster than any technical …
Why Humans Are Not Reinforcement Learning Agents — And Why This Matters for AI
Author(s): Shenggang Li Originally published on Towards AI. Reward instability, shifting perspectives, and the hidden limits of classical reinforcement learning. Modern AI systems rely heavily on attention. It allows models to focus, reason over context, and scale to massive inputs. …
The AI Black Hole: Why the Bubble Won’t Burst
Author(s): Shenggang Li Originally published on Towards AI. Rethinking the “AI Hype” through Economics, Reinforcement Learning, and Game Theory. Every few decades, a new technology arrives that captures both our imagination and our capital. In the late 1990s, it was the internet. In …
The Multiplication Law of Wealth: From Compound Interest Mathematics to the Reinforcement Learning Essence of Human Behavior
Author(s): Shenggang Li Originally published on Towards AI. Wealth Inequality Isn’t an Accident — It Follows Natural Laws. This …
The Future of Innovation: How AI Turns Intuition and Cross-Domain Thinking into Scientific Discovery
Author(s): Shenggang Li Originally published on Towards AI. In every age, people have redefined what it means to create. The Industrial Revolution turned physical work into something machines could …
Stochastic Pathways of Long-Term Investing: A Control, Learning, and Search Perspective
Author(s): Shenggang Li Originally published on Towards AI. Recasting Incremental Portfolio Strategies through Reinforcement Learning, Inverse Reward Inference, and Monte Carlo Search. Investing has always been marked by fragmentation. Decisions are often made in pieces — one stock today, another tomorrow — …
Comparing Four Time Series Forecasting Methods: Prophet, DeepAR, TFP-STS, and Adaptive AR
Author(s): Shenggang Li Originally published on Towards AI. A practical evaluation of models from Meta, Amazon, Google, and a new adaptive AR approach. Time series forecasting is everywhere — in business, finance, retail, and even public policy. The challenge is simple to …
When the Fed Raises Rates: Why Markets Sometimes Cheer and Sometimes Panic
Author(s): Shenggang Li Originally published on Towards AI. Exploring the Nonlinear Dance Between Monetary Policy, Market Narratives, and AI-Powered Learning Models. Every time the Federal Reserve announces a rate hike, investors hold their breath. Will stocks plunge because borrowing costs rise and …
Proximal Policy Optimization in Action: Real-Time Pricing with Trust-Region Learning
Author(s): Shenggang Li Originally published on Towards AI. A Practical Guide to Actor–Critic Methods for Dynamic, Data-Driven Decisions. Every time a customer opens an app or website, the platform must set a surcharge in milliseconds to balance rider supply, demand spikes, and …
Hybrid Model-Based RL for Intelligent Marketing: Dyna-Q Meets Transformer Models and Bayesian Survival Priors
Author(s): Shenggang Li Originally published on Towards AI. A theory-to-practice study on profit-driven customer re-engagement in e-commerce using BG/NBD-augmented Attention and budget-aware roll-outs. We built a next-gen coupon engine fusing three techniques: a Bayesian survival model for repurchase chance, an attention-based Transformer …