
Inside Chain of Code: Google DeepMind Method that can Reason in Code

Last Updated on December 30, 2023 by Editorial Team

Author(s): Jesus Rodriguez

Originally published on Towards AI.

Created Using DALL-E

I recently started an AI-focused educational newsletter that already has over 160,000 subscribers. TheSequence is a no-BS (meaning no hype, no news, etc.) ML-oriented newsletter that takes 5 minutes to read. The goal is to keep you up to date with machine learning projects, research papers, and concepts. Please give it a try by subscribing below:

TheSequence | Jesus Rodriguez | Substack

The best source to stay up-to-date with the developments in the machine learning, artificial intelligence, and data…

thesequence.substack.com

Writing code is a clear expression of reasoning. A typical program includes blocks that express control flow, logical expressions, and modular structures that combine to model a solution to a given problem. Could code be used to improve LLM reasoning? Google DeepMind recently published a paper proposing Chain of Code (CoC), a reasoning method for foundation models based on code generation.

The inspiration for CoC comes from some of the limitations of chain-of-thought (CoT) prompting when it comes to dealing with arithmetic tasks. CoC tries to mitigate that by creating models that can "reason in code form". CoC operates by allowing the LLM to not only write code but also to mimic the behavior of an interpreter, generating the anticipated outputs for specific code segments that an actual interpreter might struggle to execute. The central concept behind CoC is to enable LLMs to break down complex semantic tasks within a program into manageable pseudocode segments. These segments are then simulated at runtime by the model itself, a mechanism referred to as the LMulator, a blend of the terms LM and emulator.

For instance, consider a task where the LM is asked to determine the frequency of sarcasm in a given paragraph. CoC enables the LM to construct a program that might utilize functions like is_sarcastic(sentence). Here, the LM predicts and returns a boolean result based on linguistic analysis, which is then integrated into the broader program structure. The process CoC follows is straightforward: the LM writes the code, and as the interpreter attempts to execute each line (marked in red in the figure below), it either succeeds or, in case of failure, the LM steps in to simulate the result (indicated in purple) and updates the program state (shown in green). This dual approach, combining executable code for precise algorithmic computations with pseudocode for semantic challenges, not only simplifies the programming process but also enables LMs to process and 'think' in code more effectively. The following figure illustrates the concept:

Image Credit: Google DeepMind
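
To make this concrete, here is a minimal, hypothetical sketch of the kind of program CoC might generate for the sarcasm task; the code is ours, not the paper's. In CoC, the body of is_sarcastic would not exist: the interpreter would fail on the call and the LM would simulate the boolean result. A toy heuristic stands in here so the sketch runs end to end:

```python
# Hypothetical CoC-style program for the sarcasm-counting task
# (our illustration, not code from the paper).

def is_sarcastic(sentence: str) -> bool:
    # Semantic sub-task: in CoC this body would be missing, execution
    # would fail here, and the LM would predict the return value
    # (the LMulator). A toy heuristic stands in so the example runs.
    return "love" in sentence.lower() or "great" in sentence.lower()

paragraph = (
    "Oh great, another Monday. I just love waking up at 6 am. "
    "The weather report says rain all week."
)

sarcasm_count = 0
for sentence in paragraph.split(". "):
    if is_sarcastic(sentence):  # a real interpreter runs the loop and counter
        sarcasm_count += 1

print(sarcasm_count)  # 2 with the toy heuristic
```

The split between what the interpreter runs (the loop, the counter) and what the LM must simulate (the semantic check) is exactly the division of labor CoC exploits.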

The Architecture

Chain of Code (CoC) builds on three pivotal methods in LLM reasoning: Chain of Thought (CoT), ScratchPad, and Program of Thoughts (PoT). These approaches have significantly enhanced the capability of language models to decompose complex problems into manageable substeps. The CoT approach uses natural language to break down problems, reflecting how one might methodically work through a challenging issue. ScratchPad, in contrast, keeps track of intermediate steps like a code interpreter, aiding in the simulation of code output. The third approach, PoT, concentrates on generating code that is executed by a code interpreter for problem-solving.

Image Credit: Google DeepMind
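
To make the contrast concrete, here is an illustrative sketch of how the same arithmetic question might be handled under each of the three styles; the question and traces are our own, not taken from the paper:

```python
# Illustrative only: the same question under CoT, ScratchPad, and PoT.

question = "Roger has 5 tennis balls. He buys 2 cans of 3 balls each. How many balls does he have now?"

# Chain of Thought: step-by-step reasoning in natural language.
cot_trace = "He starts with 5 balls. Two cans of 3 is 6 balls. 5 + 6 = 11."

# ScratchPad: intermediate, interpreter-like state tracking.
scratchpad_trace = "balls = 5 -> balls = 5 + 2 * 3 -> balls = 11"

# Program of Thoughts: write code and let a real interpreter compute.
pot_code = "balls = 5 + 2 * 3\nprint(balls)"
exec(pot_code)  # prints: 11
```

CoC can be read as a generalization of all three: it writes code like PoT, tracks state like ScratchPad, and falls back to free-form reasoning like CoT when a line cannot actually execute.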

DeepMind’s CoC methodology is inspired by human problem-solving techniques, where a blend of natural language, pseudocode, and code execution is employed. This method consists of two phases: Generation and Execution. During Generation, in response to a given problem, the language model formulates code for reasoning. In the Execution phase, this code is either executed through a code interpreter or simulated by a language model if direct execution isn’t feasible.

Image Credit: Google DeepMind
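
A minimal sketch of the two phases might look as follows; generate_reasoning_code is a hypothetical stand-in for the LM call, not DeepMind's API, and the fallback branch mirrors the simulation step described next:

```python
# Minimal two-phase sketch (hypothetical helper names, not DeepMind's code).

def generate_reasoning_code(problem: str) -> str:
    # Generation phase: the LM writes code that structures the reasoning.
    # Stubbed here; in practice this would be a language model call.
    return "answer = 5 + 2 * 3\nprint(answer)"

def execute_reasoning_code(code: str) -> None:
    # Execution phase: prefer a real interpreter; if the code cannot run,
    # the LM would simulate the output instead.
    try:
        exec(code, {})
    except Exception:
        print("interpreter failed -> the LM would simulate the result here")

execute_reasoning_code(generate_reasoning_code("Roger's tennis balls question"))
```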

CoC Generation involves the creation of code that structures the reasoning process for solving problems. This code can vary from explicit code to pseudocode or natural language. An example of this can be seen in how DeepMind tackled an object counting problem from BIG-Bench.
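
The paper's BIG-Bench example isn't reproduced here, but a CoC-style program for an object-counting question might look like this hypothetical sketch (both the question and the code are ours):

```python
# Hypothetical CoC-style reasoning code for an object-counting question
# such as: "I have two apples, a banana, and three chairs. How many
# fruits do I have?" (illustrative, not from the paper).

objects = {"apple": 2, "banana": 1, "chair": 3}

def is_fruit(name: str) -> bool:
    # A semantic check the LM can either write out explicitly, as here,
    # or leave undefined for the LMulator to simulate.
    return name in {"apple", "banana"}

num_fruits = sum(count for name, count in objects.items() if is_fruit(name))
print(num_fruits)  # 3
```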

A critical aspect of CoC is its Execution process. Once the reasoning code is generated, a code interpreter first attempts to run it. If the code runs successfully, the process continues with the updated program state. If the code fails to execute or encounters errors, the language model steps in to simulate the execution, updating the program state based on the model's outputs. This approach, termed the "LMulator" (a blend of language model and code emulator), opens new avenues for code applications, especially those involving a mix of semantics and numerics. Execution thus maintains the program state line by line while alternating between a Python executor and the LMulator.
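
The interleaving can be pictured with the following sketch, our reconstruction of the described loop rather than DeepMind's code: each generated line is first handed to the Python interpreter, and only on failure does a stubbed-out LM step in to predict the updated program state:

```python
# Sketch of the CoC execution loop: Python runs what it can; the
# "LMulator" (stubbed below) simulates the lines it cannot.

def lm_simulate(line: str, state: dict) -> dict:
    # Stand-in for the LMulator: the LM would predict how this line
    # changes the program state. Hard-coded for the one semantic call.
    if "is_sarcastic" in line:
        state["flag"] = True  # the LM's predicted boolean
    return state

program = [
    "count = 0",                           # executable: Python runs it
    "flag = is_sarcastic('Oh, great.')",   # undefined: fails, LM simulates
    "count = count + (1 if flag else 0)",  # executable, uses the LM's value
]

state: dict = {}
for line in program:
    try:
        exec(line, {}, state)              # attempt real execution first
    except Exception:
        state = lm_simulate(line, state)   # fall back to LM simulation

print(state["count"])  # 1
```

Keeping a single shared state dictionary is what lets control pass back and forth: values the LM "hallucinates" for semantic lines become ordinary variables that later executable lines consume.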

The Results

Google DeepMind evaluated CoC across different benchmarks. CoC regularly outperforms other methods in the number of tasks on which it surpasses the human baseline. The following figure illustrates the performance on few-shot prompting tasks for both single-task and cross-task scenarios.

Image Credit: Google DeepMind

On strong benchmarks such as BIG-Bench-Hard (BBH), CoC performed incredibly well across complex tasks that involve different forms of reasoning:

Image Credit: Google DeepMind

Reasoning remains one of the most important areas for unlocking the potential of LLMs. Coding is one of the clearest forms of reasoning, and DeepMind's CoC is one of the first methods to combine the two for general-purpose LLM scenarios.


Published via Towards AI
