Original Paper: https://arxiv.org/abs/2210.00720
By: Yao Fu, Hao Peng, Ashish Sabharwal, Peter Clark, Tushar Khot
Abstract:
We study the task of prompting large-scale language models to perform multi-step reasoning. Existing work shows that when prompted with a chain of thoughts (CoT), sequences of short sentences describing intermediate reasoning steps towards a final answer, large language models can generate new reasoning chains and predict answers for new inputs. A central question is which reasoning examples make the most effective prompts. In this work, we propose complexity-based prompting, a simple and effective example selection scheme for multi-step reasoning. We show that prompts with higher reasoning complexity, i.e., chains with more reasoning steps, achieve substantially better performance on multi-step reasoning tasks over strong baselines. We further extend our complexity-based criteria from prompting (selecting inputs) to decoding (selecting outputs), where we sample multiple reasoning chains from the model, then choose the majority of generated answers from complex reasoning chains (over simple chains). When used to prompt GPT-3 and Codex, our approach substantially improves multi-step reasoning accuracy and achieves new state-of-the-art (SOTA) performance on three math benchmarks (GSM8K, MultiArith, and MathQA) and two BigBenchHard tasks (Date Understanding and Penguins), with an average +5.3 and up to +18 accuracy improvements. Compared with existing example selection schemes like manual tuning or retrieval-based selection, selection based on reasoning complexity is intuitive, easy to implement, and annotation-efficient. Further results demonstrate the robustness of performance gains from complex prompts under format perturbation and distribution shift.
Summary Notes
Figure: A: Chains of thought (in blue) are intermediate reasoning steps towards a final answer. The input of CoT prompting is a stack of a few (often 8) CoT cases followed by a test question; the language model then continues by generating an output CoT for the test question. B: Chains of higher reasoning complexity are chains with more reasoning steps (9 steps in this case, vs. only 2 steps in subfigure A). C: During decoding, we sample N reasoning chains from the language model (N = 5 here) and take the majority answer over the K (K = 3 here) most complex generated chains.
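As a concrete illustration of panel A, here is a minimal sketch of how a CoT prompt might be assembled; the function and field names (`build_cot_prompt`, `question`, `chain`, `answer`) are hypothetical and not from the paper.

```python
# Sketch of assembling a CoT prompt: a stack of few-shot (question, reasoning
# chain, answer) exemplars, followed by the test question that the model should
# continue with its own reasoning chain. Names are illustrative only.

def build_cot_prompt(exemplars, test_question):
    """Concatenate few-shot CoT exemplars before the test question."""
    parts = []
    for ex in exemplars:  # ex = {"question": ..., "chain": ..., "answer": ...}
        parts.append(
            f"Question: {ex['question']}\n"
            f"{ex['chain']}\n"
            f"The answer is {ex['answer']}.\n"
        )
    parts.append(f"Question: {test_question}")  # the model continues from here
    return "\n".join(parts)
```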
Introduction
Imagine a world where AI can solve complex math problems, understand intricate logical puzzles, and even reason through multifaceted scenarios—all by leveraging the power of large language models. Recent advancements in AI have brought us closer to this reality, and one of the most promising techniques to emerge is complexity-based prompting. This blog post delves into a groundbreaking research paper from the University of Edinburgh and the Allen Institute for AI, which introduces an innovative approach to enhancing multi-step reasoning in large language models like GPT-3 and Codex.
The Core Hypothesis: Complexity Matters
The central question posed by the researchers is: Which reasoning examples make the most effective prompts for multi-step reasoning tasks? The hypothesis is that complex reasoning examples, which involve more steps, can significantly improve the performance of large language models in multi-step reasoning tasks.
Methodologies: The Nuts and Bolts
Complexity-Based Prompting
The researchers propose a new example selection scheme called complexity-based prompting. Traditional approaches often rely on manual selection or heuristic rules to choose examples for prompting. In contrast, complexity-based prompting selects examples based on the number of reasoning steps they contain, preferring the most complex ones. This is intuitive: more complex examples can subsume simpler instances, providing a richer context for the model.
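A minimal sketch of this selection step, assuming each annotated example stores its reasoning chain with one step per line so that complexity can be approximated by counting non-empty lines:

```python
def count_steps(chain: str) -> int:
    """Approximate reasoning complexity by the number of non-empty step lines."""
    return sum(1 for line in chain.splitlines() if line.strip())

def select_complex_exemplars(annotated_examples, num_shots=8):
    """Pick the num_shots examples whose annotated chains have the most steps."""
    ranked = sorted(annotated_examples,
                    key=lambda ex: count_steps(ex["chain"]),
                    reverse=True)
    return ranked[:num_shots]
```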
Complexity-Based Consistency
To further enhance the performance, the researchers extend the complexity-based criteria to the decoding phase. Instead of using greedy decoding or a simple majority vote among all generated reasoning chains, they propose complexity-based consistency. Here, multiple reasoning chains are sampled, and the majority vote is taken from the top K most complex chains. This approach leverages the model's ability to generate multiple possible solutions and selects the most robust ones.
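A minimal sketch of this decoding step, reusing the `count_steps` helper from the sketch above and assuming a hypothetical `sample_chain(prompt)` function that returns one sampled (reasoning chain, answer) pair from the model at non-zero temperature; the N and K defaults mirror the illustrative values from the figure:

```python
from collections import Counter

def complexity_based_consistency(sample_chain, prompt, n_samples=5, top_k=3):
    """Sample N reasoning chains, keep the K most complex, and majority-vote."""
    samples = [sample_chain(prompt) for _ in range(n_samples)]
    # Rank sampled chains by their number of reasoning steps (complexity).
    samples.sort(key=lambda s: count_steps(s[0]), reverse=True)
    answers = [answer for _, answer in samples[:top_k]]
    # Majority answer among the K most complex generated chains.
    return Counter(answers).most_common(1)[0][0]
```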
Key Findings: A Leap in Performance
Benchmarking Success
The research demonstrates that complexity-based prompting and consistency achieve state-of-the-art performance on several math benchmarks, including GSM8K, MultiArith, and MathQA, as well as on BigBenchHard tasks like Date Understanding and Penguins. The improvements are substantial, with reported average accuracy gains of 6.2% on Codex and 5.3% on GPT-3 over existing prompting baselines.
Robustness and Generalization
One of the most compelling aspects of this research is the robustness of the performance gains. The improvements hold across different prompt distributions, including in-distribution, noisy-labeled, and out-of-distribution scenarios. This robustness makes complexity-based prompting a versatile tool for various applications.
Real-World Implications
Enhanced Problem Solving
In practical terms, this research opens up new possibilities for AI applications in fields requiring complex problem-solving abilities. From advanced tutoring systems that can solve intricate math problems to AI assistants capable of reasoning through multi-step logical puzzles, the potential applications are vast.
Reduced Annotation Efforts
Another significant advantage is the reduction in annotation efforts. Traditional methods often require extensive manual tuning or full dataset annotations. In contrast, complexity-based prompting can achieve better performance with minimal annotation, making it a more efficient and scalable solution.
Conclusion: A New Era of AI Reasoning
This research marks a significant milestone in the journey toward more capable and intelligent AI systems. By leveraging complexity-based prompting and consistency, large language models can achieve unprecedented levels of performance in multi-step reasoning tasks. As we move forward, this approach could become a cornerstone in the development of AI technologies, pushing the boundaries of what these models can achieve.
Final Thoughts
The next time you encounter a complex problem, remember that in the world of AI, complexity isn't a hurdle—it's an opportunity. And with techniques like complexity-based prompting, we're better equipped than ever to seize that opportunity.