Inside Chain of Code: A Google DeepMind Method That Can Reason in Code
Last Updated on December 30, 2023 by Editorial Team
Author(s): Jesus Rodriguez
Originally published on Towards AI.
I recently started an AI-focused educational newsletter that already has over 160,000 subscribers. TheSequence is a no-BS (meaning no hype, no news, etc.) ML-oriented newsletter that takes 5 minutes to read. The goal is to keep you up to date with machine learning projects, research papers, and concepts. Please give it a try by subscribing below:
TheSequence | Jesus Rodriguez | Substack
The best source to stay up-to-date with developments in machine learning, artificial intelligence, and data…
thesequence.substack.com
Writing code is a clear expression of reasoning. A typical program includes control flow, logical expressions, and modular structures that combine to model a solution to a given problem. Could code be used to improve LLM reasoning? Google DeepMind recently published a paper proposing Chain of Code (CoC), a reasoning method for foundation models based on code generation.
The inspiration for CoC comes from some of the limitations of chain-of-thought (CoT) prompting when it comes to dealing with arithmetic tasks. CoC tries to mitigate that by enabling models to "reason in code form". CoC operates by allowing the LLM to not only write code but also to mimic the behavior of an interpreter, generating the anticipated outputs of code segments that an actual interpreter would struggle to execute. The central concept behind CoC is to let the LLM break complex semantic subtasks within a program into manageable pseudocode segments whose results the model itself simulates at run time; this simulation role is referred to as the LMulator, a blend of the terms LM and emulator.
For instance, consider a task where the LM is asked to determine the frequency of sarcasm in a given paragraph. CoC enables the LM to construct a program that might use a function like is_sarcastic(sentence), where the LM predicts and returns a boolean result based on linguistic analysis, which is then integrated into the broader program structure. The process CoC follows is straightforward: the LM writes the code, and as the interpreter attempts to execute each line (marked in red), it either succeeds or, in case of failure, the LM steps in to simulate the result (indicated in purple) and updates the program state (shown in green). This dual approach, combining executable code for precise algorithmic computations with pseudocode for semantic challenges, not only simplifies the programming process but also enables LMs to process and "think" in code more effectively. The following figure illustrates that concept.
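Beyond the figure, a minimal code sketch helps make the idea concrete. The snippet below shows the kind of program CoC might generate for the sarcasm task; the names (is_sarcastic, SARCASM_CUES) are illustrative assumptions rather than the paper's code, and the crude keyword heuristic only stands in for the semantic judgment that the LMulator would otherwise simulate:

```python
# Sketch of CoC-style generated code for "how often is this paragraph sarcastic?"
paragraph = (
    "Oh great, another Monday. I just love waking up to three hundred emails. "
    "The weather report says sunshine, which is genuinely nice for a change."
)

SARCASM_CUES = {"oh great", "just love"}  # crude stand-in, for illustration only

def is_sarcastic(sentence: str) -> bool:
    # In CoC this semantic judgment is the step a Python interpreter cannot make;
    # the LM would simulate the boolean return value (the LMulator step).
    # A keyword heuristic stands in here only so the sketch runs end to end.
    lowered = sentence.lower()
    return any(cue in lowered for cue in SARCASM_CUES)

sentences = [s.strip() for s in paragraph.split(".") if s.strip()]
sarcasm_count = sum(1 for s in sentences if is_sarcastic(s))
frequency = sarcasm_count / len(sentences)
print(f"{sarcasm_count} of {len(sentences)} sentences read as sarcastic ({frequency:.0%})")
```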
The Architecture
Chain of Code (CoC) is based on three pivotal methods in LLM reasoning: Chain of Thought (CoT), ScratchPad, and Program of Thoughts (PoT). These approaches have significantly enhanced the capability of language models to decompose complex problems into manageable substeps. The CoT approach utilizes natural language to break down problems, reflecting how one might methodically work through a challenging issue. ScratchPad, in contrast, keeps track of intermediate steps like a code interpreter, aiding in the simulation of code output. The third approach, PoT, concentrates on the creation of code that is executed by a code interpreter for problem-solving.
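To make the contrast concrete, here is a hedged sketch of the PoT style that CoC builds on: the model writes executable code for a word problem and a Python interpreter does the arithmetic. The problem and numbers below are made up for illustration:

```python
# Program-of-Thoughts style: the LM writes executable code and an interpreter
# performs the arithmetic, avoiding the numeric slips CoT sometimes makes.
# Question (illustrative): "A bakery sells 23 loaves a day for 5 days and then
# 17 loaves a day for 2 days. How many loaves does it sell in total?"

loaves_first_period = 23 * 5
loaves_second_period = 17 * 2
total_loaves = loaves_first_period + loaves_second_period
print(total_loaves)  # 149
```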
DeepMind's CoC methodology is inspired by human problem-solving techniques, where a blend of natural language, pseudocode, and code execution is employed. This method consists of two phases: Generation and Execution. During Generation, in response to a given problem, the language model formulates code for reasoning. In the Execution phase, this code is either executed through a code interpreter or simulated by a language model if direct execution isn't feasible.
CoC Generation involves the creation of code that structures the reasoning process for solving problems. This code can vary from explicit code to pseudocode or natural language. An example of this can be seen in how DeepMind tackled an object counting problem from BIG-Bench.
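As a hedged illustration of what such generated code can look like, here is an object-counting sketch in the spirit of that benchmark; the specific items and the helper name are made up rather than taken from the paper:

```python
# Illustrative CoC-style generation for an object counting question.
# Question (made up): "I have a violin, two drums, a flute, an apple, and
# three oranges. How many musical instruments do I have?"

objects = {"violin": 1, "drum": 2, "flute": 1, "apple": 1, "orange": 3}

def is_musical_instrument(name: str) -> bool:
    # Semantic check: executable here as a simple lookup, but in CoC the LM could
    # also leave this as pseudocode and emulate the result at execution time.
    return name in {"violin", "drum", "flute"}

num_instruments = sum(count for name, count in objects.items()
                      if is_musical_instrument(name))
print(num_instruments)  # 4
```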
A critical aspect of CoC is its Execution process. Once the reasoning code is developed, an attempt is first made to run it with a code interpreter. If the code runs successfully, the process continues with the updated program state. However, if the code fails to execute or encounters errors, the language model steps in to simulate the execution, updating the program state based on the model's outputs. This approach, termed the "LMulator" (a blend of language model and code emulator), opens new avenues for code applications, especially those involving a mix of semantics and numerics. Execution proceeds line by line, maintaining the program state and alternating between a Python executor and the LMulator.
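A minimal sketch of that execution loop might look like the following. Note that call_language_model is a hypothetical helper (the paper does not prescribe an API), and the line-by-line exec fallback is an assumption about how such a loop could be wired up:

```python
# Sketch of CoC execution: try each generated line in Python; if it fails,
# fall back to the language model ("LMulator") to predict the state update.

def call_language_model(prompt: str) -> dict:
    """Placeholder: ask an LM to return the updated variables as a dict."""
    raise NotImplementedError("wire this to your LLM of choice")

def run_chain_of_code(code_lines, lm=call_language_model):
    state = {}  # program state shared by the Python executor and the LMulator
    for line in code_lines:
        try:
            exec(line, {}, state)  # real Python execution when possible
        except Exception:
            # Execution failed (e.g., a semantic pseudocode step): ask the LM to
            # simulate the line and merge its predicted variable updates.
            prompt = f"State: {state}\nLine: {line}\nPredict the updated variables."
            state.update(lm(prompt))
    return state

# Purely numeric lines never need the LM:
# run_chain_of_code(["x = 2 + 3", "y = x * 10"]) -> {"x": 5, "y": 50}
```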
The Results
Google DeepMind evaluated CoC across different benchmarks. CoC consistently outperforms other methods in the number of tasks on which it surpasses the human baseline. The following table illustrates the performance on few-shot prompting tasks for both single-task and cross-task scenarios.
On strong benchmarks such as BIG-Bench Hard (BBH), CoC performed remarkably well across complex tasks that involve different forms of reasoning:
Reasoning remains one of the most important areas to unlock the potential of LLMs. Coding is one of the clearest forms of reasoning, and DeepMind's CoC is one of the first methods to combine the two for general-purpose LLM scenarios.
Published via Towards AI