Chain-of-Thought (CoT) Prompting Explained: 7 Techniques for Optimizing AI Performance
Introduction
By now, you’ve surely heard about Chain-of-Thought prompting.
The technique was popularized by the 2022 paper Chain-of-Thought Prompting Elicits Reasoning in Large Language Models, which found that asking an LLM to “think step-by-step” elicits much better reasoning abilities.
This technique is particularly effective on problems that require complex reasoning or multi-step thought processes (e.g., math word problems). It has also been the basis for OpenAI’s latest suite of o1 models, which are trained to use CoT reasoning when responding.
But did you know that there are many types of chain-of-thought prompting? In this post, we’ll break down 7 different types of chain-of-thought prompting, and explain what each does, how and why it works, and which you should use for your use case.
Constrained Chain-of-Thought (CCoT)
CCoT introduces a constraint on the length of the model's responses, typically limiting the number of words or tokens used in the reasoning process.
While chain-of-thought is great, it can be slow and a token hog.
Constrained CoT aims to get the best of both worlds, streamlining and compressing the model's output to improve response time and cost, while also getting the benefits of Chain-of-Thought reasoning.
Here’s a sample prompt:
Prompt: Think step-by-step and limit your explanation to 100 words
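As a minimal sketch, the constraint can be templated around any question; the `build_ccot_prompt` helper below is illustrative, not from any library:

```python
def build_ccot_prompt(question: str, word_limit: int = 100) -> str:
    """Wrap a question in a step-by-step instruction with a length cap."""
    return (
        f"Think step-by-step and limit your explanation to {word_limit} words.\n\n"
        f"Question: {question}"
    )

# Tighter limits trade reasoning depth for lower latency and token cost.
print(build_ccot_prompt("A shirt is $20 after a 20% discount. What was the original price?", 60))
```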
Contrastive CoT
This method includes both correct and incorrect examples within the prompt to illustrate faulty reasoning alongside correct logic.
It aims to teach the model by contrasting sound reasoning with errors, enhancing its understanding of logical processes.
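A contrastive prompt can be assembled by pairing one sound and one flawed worked example before the real question. The helper and the demonstration problem below are illustrative, not from the original paper:

```python
def build_contrastive_prompt(question: str) -> str:
    """Show one correct and one flawed worked example before the real question."""
    demonstration = (
        "Q: If 3 pens cost $6, how much do 5 pens cost?\n"
        "Correct reasoning: one pen costs $6 / 3 = $2, so 5 pens cost 5 * $2 = $10.\n"
        "Incorrect reasoning: 5 is 2 more than 3, so 5 pens cost $6 + $2 = $8. "
        "(This wrongly adds the extra pen count to the price instead of using the unit price.)"
    )
    return f"{demonstration}\n\nNow answer step-by-step, avoiding the flawed pattern:\nQ: {question}"
```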
Least-to-most prompting
Least-to-most prompting works by breaking down a complex problem into a series of simpler subproblems, and then solving them in sequence.
Each subproblem is facilitated by the answers to the previous subproblems.
For example, to solve a math word problem, we might first query the language model to decompose the problem into subproblems, such as
"What is the cost of the first item?" and "What is the total cost?"
We would then query the language model to sequentially solve the subproblems, using the answers to the previous subproblems to inform our queries.
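The decompose-then-solve loop can be sketched as follows. `call_llm` is a hypothetical stand-in for a real model client; here it returns canned strings so the control flow is runnable:

```python
# Stage 1 asks the model to decompose; stage 2 solves subproblems in order,
# feeding each answer back into the next prompt.
def call_llm(prompt: str) -> str:
    # Canned stub standing in for a real LLM call.
    if prompt.startswith("Decompose"):
        return "What is the cost of the first item?; What is the total cost?"
    return f"<answer to: {prompt.splitlines()[-2]}>"

def least_to_most(problem: str) -> list[str]:
    # Stage 1: decomposition into simpler subproblems.
    subproblems = call_llm(f"Decompose this problem into subproblems: {problem}").split("; ")
    # Stage 2: sequential solving; each prompt carries the answers so far.
    context, answers = problem, []
    for sub in subproblems:
        answer = call_llm(f"{context}\nQ: {sub}\nA:")
        answers.append(answer)
        context += f"\nQ: {sub}\nA: {answer}"  # accumulate solved subproblems
    return answers
```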
Automatic Chain-of-Thought (Auto-CoT)
Auto-CoT automates the generation of reasoning chains, leveraging LLMs to create diverse demonstrations without manual input. It clusters the questions in a dataset, samples representative queries, and generates a reasoning chain for each using the prompt "Let's think step by step." This reduces the risk of suboptimal, hand-crafted demonstrations.
Auto-CoT has two main steps:
- First, it partitions the questions in a given dataset into a few clusters.
- Then, it selects a representative question from each group and uses Zero-Shot-CoT with simple heuristics to generate a reasoning chain.
The diversity of the demonstration questions is important: clustering the questions into a few groups ensures that each demonstration represents a different type of question, which reduces the chance that Zero-Shot-CoT makes mistakes in the generated reasoning chains.
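The two steps can be sketched as a toy pipeline. Real Auto-CoT clusters sentence embeddings; here word count stands in as a crude feature, and a placeholder marks where a Zero-Shot-CoT call would generate the chain:

```python
def cluster_questions(questions: list[str], k: int = 2) -> list[list[str]]:
    """Toy clustering: bucket questions by a crude feature (word count)."""
    ordered = sorted(questions, key=lambda q: len(q.split()))
    size = -(-len(ordered) // k)  # ceiling division
    return [ordered[i:i + size] for i in range(0, len(ordered), size)]

def build_demonstrations(questions: list[str], k: int = 2) -> list[str]:
    demos = []
    for cluster in cluster_questions(questions, k):
        representative = cluster[0]  # simple heuristic: shortest question in the cluster
        demos.append(f"Q: {representative}\nA: Let's think step by step. <generated chain>")
    return demos
```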
Tabular CoT
Tabular Chain-of-Thought (Tabular CoT) is a specialized approach within the broader framework of chain-of-thought prompting, tailored specifically for reasoning tasks involving tabular data.
This method enhances the reasoning capabilities of LLMs by structuring their outputs in a tabular format, which facilitates clearer and more organized reasoning processes.
Think step-by-step, and answer the following:
{{query}}
| step | explanation | output |
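One practical benefit of the tabular format is that the model's pipe-delimited rows can be parsed back into structured steps. The response text below is a hypothetical model output, hard-coded for illustration:

```python
# Hypothetical Tabular CoT response from a model.
response = """| step | explanation | output |
| 1 | 20% discount means the price is 80% of the original | price = 0.8 * original |
| 2 | Solve 0.8 * original = 20 | original = 25 |"""

def parse_table(text: str) -> list[dict]:
    """Turn pipe-delimited rows into a list of {header: cell} dicts."""
    lines = [ln.strip() for ln in text.strip().splitlines()]
    headers = [cell.strip() for cell in lines[0].strip("|").split("|")]
    rows = []
    for line in lines[1:]:
        cells = [cell.strip() for cell in line.strip("|").split("|")]
        rows.append(dict(zip(headers, cells)))
    return rows

steps = parse_table(response)
```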
Faithful CoT
Faithful Chain-of-Thought (CoT) prompting aims to ensure that the LLM’s reasoning processes accurately reflect its internal computations when arriving at answers.
The primary objective of Faithful CoT prompting is to address the issue of faithfulness in the outputs generated by LLMs.
Traditional chain-of-thought methods may produce reasoning steps that do not align with how the model computes answers, which can lead to inaccuracies or misleading conclusions.
Faithful CoT seeks to bridge this gap by ensuring that the reasoning provided is not only coherent but also representative of the model's actual decision-making process.
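One concrete way to make reasoning faithful, used in the original Faithful CoT work, is to have the model translate the problem into an executable program, so the answer is produced by actually running the stated reasoning rather than by free-form text. A minimal sketch, with the model's output hard-coded as a hypothetical example:

```python
# Hypothetical model output: the "reasoning chain" is a runnable program.
model_generated_program = """
apples_start = 23
apples_used = 20
apples_bought = 6
answer = apples_start - apples_used + apples_bought
"""

# Executing the chain makes the reasoning faithful by construction: the
# final answer is derived from the steps themselves, not narrated separately.
namespace: dict = {}
exec(model_generated_program, namespace)
print(namespace["answer"])  # 9
```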
Tree-of-Thoughts (ToT)
Tree of Thoughts allows models to explore multiple reasoning paths simultaneously, facilitating a more comprehensive exploration of potential solutions.
Tree of Thoughts prompting is inspired by human cognitive processes, particularly the way individuals approach complex problems through trial and error.
Unlike traditional Chain of Thought (CoT) prompting, which follows a linear progression, ToT prompting enables the model to generate a branching structure of thoughts or ideas, resembling a decision tree.
This structure allows for parallel exploration of different paths and backtracking when necessary, mimicking the human problem-solving approach.
Sample prompt prefix:
Think step-by-step. Imagine 3 different experts are answering this question...
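The branching-and-pruning idea can be sketched as a beam search over "thoughts". `propose` and `score` below are toy stand-ins for the LLM calls that would generate and evaluate branches:

```python
def propose(state: list[str]) -> list[list[str]]:
    """Generate candidate next thoughts (here, three fixed branches per node)."""
    return [state + [choice] for choice in ("A", "B", "C")]

def score(state: list[str]) -> int:
    """Toy evaluation heuristic: prefer paths containing more 'A's."""
    return state.count("A")

def tree_of_thoughts(steps: int = 2, beam: int = 2) -> list[str]:
    frontier = [[]]  # start from an empty reasoning path
    for _ in range(steps):
        # Expand every surviving path, then keep only the best `beam` candidates.
        candidates = [child for state in frontier for child in propose(state)]
        frontier = sorted(candidates, key=score, reverse=True)[:beam]
    return frontier[0]  # highest-scoring path found
```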
Comparison of all the CoT techniques
| Technique Name | One-Line Description | Explanation | Sample Prompt | When to Use This Technique |
| --- | --- | --- | --- | --- |
| Chain of Thought (CoT) | Guides models through reasoning steps. | CoT prompting enables complex reasoning by asking models to articulate their thought processes in a step-by-step manner, improving performance on logic and calculation tasks. | "Think step by step…" | Use when tasks require detailed explanations or logical reasoning. |
| Tree of Thoughts (ToT) | Maps out possible reasoning paths. | ToT allows models to explore multiple paths of reasoning, self-correcting if they deviate from the correct path, and effectively managing complex decision-making scenarios. | "Imagine 3 different experts are answering this question..." | Ideal for problems with multiple potential solutions or paths. |
| Automatic Chain of Thought (Auto-CoT) | Automates the generation of reasoning chains. | Auto-CoT reduces manual effort by clustering questions and automatically generating diverse reasoning chains, enhancing the quality of demonstrations without extensive human input. | "Let's think step by step." | Use when you need diverse examples quickly without manual crafting. |
| Contrastive Chain of Thought | Includes both good and bad examples. | This method includes both correct and incorrect examples within the prompt to illustrate faulty reasoning alongside correct logic. | | Best for evaluating options or hypotheses against one another. |
| Faithful Chain of Thought | Ensures reasoning reflects the model's actual computation. | This technique aims to maintain fidelity between the model's reasoning steps and its final output, ensuring that the conclusions drawn are consistent with the articulated thought process. | | Use when accuracy and consistency between steps and conclusions are critical. |
| Tabular Chain of Thought | Organizes reasoning in a structured table format. | This method structures the model's reasoning in a tabular format, allowing for clearer comparisons and organization of information, which can enhance clarity in complex scenarios. | "Think step-by-step. Explain your reasoning in the following structure: \| step \| explanation \| output \|" | Ideal for tasks requiring structured data presentation or comparison across multiple variables. |
| Least to Most Prompting | Breaks complex problems into simple subproblems solved sequentially. | This method works by breaking down a complex problem into a series of simpler subproblems and then solving them in sequence, with each subproblem informed by the answers to the previous ones. | | Useful for educational contexts where gradual learning is essential or when introducing new concepts. |
| Constrained Chain of Thought (CCoT) | Limits response length while maintaining logical flow. | CCoT enhances CoT by constraining the length of reasoning outputs, encouraging concise articulation of thoughts while still guiding models through a structured thought process for improved clarity and efficiency. | "Think step by step and limit your explanation to 100 words…" | Best used when concise answers are needed without sacrificing logical coherence. |