
From ‘Chain of Thought’ to ‘Graph of Thought’: A Tour of How GenAI Got So Smart
Originally published on Towards AI.

Have you ever messed around with an AI and wondered what’s really going on under the hood? It can spit out a poem, a recipe, or a summary in seconds, which is amazing. But when you ask it a tricky question, you can almost feel the gears grinding. How does it go from just predicting the next word to actually reasoning its way through a problem?
It turns out, that’s the billion-dollar question! The journey from a simple text-completer to a sophisticated problem-solver is a wild ride. We’ve gone from teaching AI to “show its work” to having it build entire branching maps of possibilities. Let’s untangle this web and see how these digital brains are being taught to think.

The First Big Trick: Teaching AI to “Think Out Loud” with Chain-of-Thought (CoT)
At their core, Large Language Models (LLMs) are basically autocomplete on steroids. They’ve read a bajillion pages of the internet and gotten incredibly good at guessing the next word in a sentence. This is great for making text that sounds human, but it’s a terrible way to solve a math problem. Early models might get a question right but then fail completely if you just changed the numbers, proving they were just matching patterns, not understanding the logic.
This is where Chain-of-Thought (CoT) prompting came in and changed the game. It’s a ridiculously simple idea that works wonders: you just ask the model to explain its reasoning step-by-step before giving the answer. It’s the AI equivalent of your old math teacher saying, “Show your work!”
Suddenly, models could solve grade-school math problems that stumped them before, with success rates jumping from 18% to 58% in some tests!

There are a couple of flavors of CoT:
- Few-Shot CoT: This is like giving the AI a little cheat sheet. You show it a few examples of a question with the step-by-step thinking included, and then you give it the real problem. It learns the pattern of thinking from the examples you provide. The downside? You have to write all those examples by hand.
- Zero-Shot CoT: This discovery was a huge “Aha!” moment. Researchers found you could trigger the same step-by-step reasoning just by adding a simple magic phrase like, “Let’s think step-by-step” to the end of your question. No examples needed! The AI, having seen this phrase in its training data, knows it’s the cue to start explaining itself.
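To make this concrete, here’s a tiny sketch of what the two flavors look like as prompts. The real technique is nothing more than the prompt text itself; `complete()` is a hypothetical placeholder for whatever LLM API you happen to call.

```python
# Minimal sketch of few-shot vs. zero-shot CoT prompting.
# `complete()` is a hypothetical stand-in for your LLM API of choice.

FEW_SHOT_COT = """\
Q: Roger has 5 tennis balls. He buys 2 cans of 3 balls each. How many balls does he have now?
A: Roger starts with 5 balls. 2 cans of 3 balls is 6 more balls. 5 + 6 = 11. The answer is 11.

Q: {question}
A:"""

ZERO_SHOT_COT = "Q: {question}\nA: Let's think step by step."


def complete(prompt: str) -> str:
    """Placeholder: wire this up to whatever model you actually use."""
    raise NotImplementedError


def ask_with_cot(question: str, few_shot: bool = False) -> str:
    # Few-shot: show a worked example first. Zero-shot: just add the magic phrase.
    template = FEW_SHOT_COT if few_shot else ZERO_SHOT_COT
    return complete(template.format(question=question))
```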
But CoT has a fatal flaw: its thinking process is a one-way street. It’s like a train on a single track. If it makes a mistake in step one, there’s no going back. That error gets locked in and every single step after that will be corrupted, a problem known as error propagation.
Giving the AI Tools: ReAct Grounds Thinking in Reality
So, CoT taught the AI to have an internal monologue, but that monologue was happening in a locked room. The AI had no way to check if its “thoughts” were actually true, leading to what we call fact hallucination — where the model just makes up plausible-sounding nonsense to finish the job.
The solution? Give the AI some tools and let it out of the room! This is the genius behind the ReAct (Reason + Act) framework.

ReAct creates a simple but powerful loop:
Thought -> Action -> Observation.
- Thought: The AI thinks about what it needs to do. For example, if asked “Who was the U.S. president during the first moon landing?”, the initial thought is: “To answer this, I first need to know the date of the moon landing.”
- Action: Based on that thought, it performs an action, like `Search[first moon landing date]`.
- Observation: It gets the result of its action back from the tool (e.g., “The first moon landing was July 20, 1969”) and adds this new fact to its thinking process.
This is how humans work! We think, we do something, we see what happens, and then we think again. By giving the AI a search engine or a calculator, ReAct grounds its reasoning in the real world. Instead of just inventing a date, it can go look it up, which dramatically reduces hallucinations and makes the whole process more trustworthy.
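If you squint, the whole framework is just a loop around a model call and a tool call. Here’s a rough sketch of that loop, with `llm()` and `search()` as hypothetical stand-ins for your model and your search tool; it’s a sketch of the idea, not a faithful reproduction of the ReAct paper’s prompts.

```python
# Minimal sketch of the ReAct Thought -> Action -> Observation loop.
# `llm()` and `search()` are hypothetical stand-ins for a model call and a search tool.
import re


def llm(prompt: str) -> str:
    raise NotImplementedError("call your model here")


def search(query: str) -> str:
    raise NotImplementedError("call your search tool here")


def react(question: str, max_steps: int = 5) -> str:
    transcript = f"Question: {question}\n"
    for _ in range(max_steps):
        step = llm(transcript + "Thought:")  # the model writes a Thought and (maybe) an Action
        transcript += "Thought:" + step + "\n"

        match = re.search(r"Action: Search\[(.+?)\]", step)
        if match:
            # Ground the reasoning in reality: run the tool and feed the result back in.
            observation = search(match.group(1))
            transcript += f"Observation: {observation}\n"
        elif "Final Answer:" in step:
            # The model decided it has enough facts to answer.
            return step.split("Final Answer:")[-1].strip()
    return transcript
```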
Exploring Every Possibility with Tree-of-Thought (ToT)
Okay, so ReAct gave our AI hands and eyes, but it was still following a single path. What if that path leads to a dead end?
To solve truly complex problems, you can’t just follow the first idea you have; you need to explore multiple possibilities.
Enter Tree-of-Thought (ToT), a framework that upgrades CoT’s single-track mind into a full-blown branching tree of possibilities. Think of it like a detective who, instead of following just one lead, explores multiple leads in parallel.
ToT works through four key steps:

- Decomposition: Break the problem into smaller, manageable thoughts or steps.
- Generation: For each step, brainstorm several different ways to proceed. This is where the tree “branches out.”
- Evaluation: Here’s the cool part — the AI acts as its own critic! It looks at all the branches it just created and evaluates how promising each one is, giving them a score like “sure,” “likely,” or “impossible”.
- Search: An algorithm then decides which branches to explore further based on their scores, and which ones to prune (cut off). If a path looks like a dead end, the AI can backtrack to an earlier point and try a different, more promising branch.
This ability to explore, self-evaluate, and backtrack is a superpower. On puzzles like the Game of 24 (using four numbers to make 24), CoT barely lands a 4% success rate, while ToT nails it 74% of the time! It shows the incredible power of structured exploration for solving complex problems.
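Here’s a toy version of that search to make the four steps concrete. The `propose()` and `score()` helpers are hypothetical stand-ins for model calls (in the real framework the LLM does both the brainstorming and the critiquing), and the search here is a simple beam search rather than the paper’s full algorithm.

```python
# Toy Tree-of-Thought: generate candidate next thoughts, score them,
# keep the most promising few, and prune the rest.
# `propose()` and `score()` are hypothetical stand-ins for model calls.
from typing import List


def propose(partial_solution: str, k: int = 3) -> List[str]:
    raise NotImplementedError("ask the model for k candidate next steps")


def score(partial_solution: str) -> float:
    raise NotImplementedError("ask the model to rate this path: sure / likely / impossible")


def tree_of_thought(problem: str, depth: int = 3, beam_width: int = 2) -> str:
    frontier = [problem]                              # each entry is one partial line of reasoning
    for _ in range(depth):                            # Decomposition: solve the problem in `depth` steps
        candidates = []
        for path in frontier:
            for step in propose(path):                # Generation: branch out from every live path
                candidates.append(path + "\n" + step)
        candidates.sort(key=score, reverse=True)      # Evaluation: the model critiques its own branches
        frontier = candidates[:beam_width]            # Search: keep the best branches, prune the rest
    return frontier[0]
```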
The Ultimate Brainstorm: Graph-of-Thought (GoT)
Even a tree has its limits. It’s a strict hierarchy — you can’t just take an idea from one branch and merge it with a completely different one. But human creativity is often about connecting disparate ideas to create something new.
This is the frontier where Graph-of-Thought (GoT) comes in. It’s the most flexible framework yet, modeling the AI’s reasoning process not as a line or a tree, but as an interconnected web or graph. A social network is a graph; a family tree is a tree. GoT gives the AI a social network for its thoughts.
This unlocks two game-changing abilities:
- Aggregation: This is the killer feature. GoT allows the AI to merge multiple different lines of reasoning into a single, new idea. Imagine the AI writing a report by summarizing different sections in parallel and then combining them into a final, coherent document. It can take the best parts of several ideas and synthesize them.
- Refinement: GoT can create loops. This means the AI can pass a thought back to itself for improvement, essentially editing and refining its own work over and over until it’s just right.
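Stripped down to plain Python, those two operations look something like the sketch below. `llm()` is again a hypothetical stand-in for a model call, and this is just an illustration of aggregation and refinement as graph moves, not an implementation of the GoT paper.

```python
# Sketch of the two Graph-of-Thought operations described above:
# aggregation (merge several thoughts into one) and refinement (loop a
# thought back through the model). `llm()` is a hypothetical model call.
from typing import List


def llm(prompt: str) -> str:
    raise NotImplementedError("call your model here")


def aggregate(thoughts: List[str]) -> str:
    """Merge several parallel lines of reasoning into one new thought."""
    joined = "\n---\n".join(thoughts)
    return llm(f"Combine the best parts of these drafts into one coherent answer:\n{joined}")


def refine(thought: str, rounds: int = 3) -> str:
    """Pass a thought back to the model repeatedly: a self-improvement loop."""
    for _ in range(rounds):
        thought = llm(f"Improve this draft. Fix errors and tighten the writing:\n{thought}")
    return thought
```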
This makes GoT the most powerful and flexible structure for thinking. In fact, toolkits like LangGraph are now emerging that let developers build these incredibly sophisticated “graph brains” for real-world AI agents.
So, Which “Brain” Do You Use?
Choosing the right reasoning technique is all about matching the tool to the job, a principle we can call “Problem-Framework Fit”.

Here’s a simple cheat sheet:

| Framework | How it thinks | Reach for it when… |
|---|---|---|
| Chain-of-Thought (CoT) | A single line of step-by-step reasoning | The problem is straightforward, like a grade-school math word problem |
| ReAct | A Thought → Action → Observation loop with tools | The answer depends on facts you need to look up or compute |
| Tree-of-Thought (ToT) | Branching exploration with self-evaluation and backtracking | There are many possible paths and you need to explore, compare, and prune them |
| Graph-of-Thought (GoT) | An interconnected web that can merge and refine ideas | You need to synthesize multiple lines of reasoning into one refined result |

The journey from a simple chain to a flexible graph shows just how fast this field is moving. We’re no longer just “prompting” an AI; we’re designing its entire cognitive architecture. By building systems that can break down problems, check facts, explore possibilities, and even synthesize new ideas, we’re paving the way for AI that is not only more capable but also more reliable and transparent.