Scaling LLM Test-Time Compute Optimally can be More Effective than Scaling Model Parameters



Original Paper: https://arxiv.org/abs/2408.03314

By: Charlie Snell, Jaehoon Lee, Kelvin Xu, Aviral Kumar

Abstract:

Enabling LLMs to improve their outputs by using more test-time computation is a critical step towards building generally self-improving agents that can operate on open-ended natural language.

In this paper, we study the scaling of inference-time computation in LLMs, with a focus on answering the question: if an LLM is allowed to use a fixed but non-trivial amount of inference-time compute, how much can it improve its performance on a challenging prompt?

Answering this question has implications not only for the achievable performance of LLMs, but also for the future of LLM pretraining and how one should trade off inference-time and pretraining compute.

Despite its importance, little research has attempted to understand the scaling behaviors of various test-time inference methods. Moreover, current work largely provides negative results for a number of these strategies.

In this work, we analyze two primary mechanisms to scale test-time computation:

(1) searching against dense, process-based verifier reward models; and

(2) updating the model's distribution over a response adaptively, given the prompt at test time.

We find that in both cases, the effectiveness of different approaches to scaling test-time compute critically varies depending on the difficulty of the prompt.

This observation motivates applying a "compute-optimal" scaling strategy, which acts to most effectively allocate test-time compute adaptively per prompt.

Using this compute-optimal strategy, we can improve the efficiency of test-time compute scaling by more than 4x compared to a best-of-N baseline.

Additionally, in a FLOPs-matched evaluation, we find that on problems where a smaller base model attains somewhat non-trivial success rates, test-time compute can be used to outperform a 14x larger model.

Summary Notes


Figure: Summary of the paper's main results. Left: Compute-optimal scaling for iterative self-refinement (i.e., revisions) and search. The compute-optimal scaling policy for the PaLM 2-S* revision model is compared against baselines in the revision setting (top) and the PRM search setting (bottom). In the revisions case, the gap between standard best-of-N (i.e., "parallel" sampling) and compute-optimal scaling gradually widens, enabling compute-optimal scaling to outperform best-of-N with 4x less test-time compute. Similarly, in the PRM search setting, compute-optimal scaling yields significant early improvements over best-of-N, at points nearly outperforming best-of-N with 4x less compute. See Sections 5 and 6 of the paper for details. Right: Comparing test-time compute and model-parameter scaling. The performance of compute-optimal test-time scaling with PaLM 2-S* is compared against a ~14x larger pretrained model without additional test-time compute (i.e., greedy sampling), in a setting where both models are expected to process X tokens of pretraining and Y tokens of inference. Training a larger model multiplies the FLOPs requirement for both terms; if the smaller model instead spent additional test-time compute to match the larger model's FLOPs requirement, how would it compare in accuracy? For revisions (top), when Y << X, test-time compute is often preferable to additional pretraining. However, as the inference-to-pretraining token ratio grows, test-time compute remains preferable only on easy questions, while pretraining is preferable on harder questions. A similar trend holds for PRM search (bottom).
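
To make the FLOPs-matched comparison in the right panel concrete, here is a back-of-the-envelope sketch. It assumes the common rough approximations of about 6·N·D FLOPs to pretrain an N-parameter model on D tokens and about 2·N FLOPs per generated token at inference; these approximations and the numbers below are illustrative assumptions, not figures taken from the paper.

```python
def total_flops(n_params, pretrain_tokens, inference_tokens):
    # ~6*N*D FLOPs to pretrain on D tokens plus ~2*N FLOPs per inference token
    # (standard rough approximations; assumed here for illustration).
    return 6 * n_params * pretrain_tokens + 2 * n_params * inference_tokens

n_small = 1e9               # hypothetical small model (parameters)
n_large = 14 * n_small      # the ~14x larger comparison model
X = 1e12                    # pretraining tokens (same data budget for both models)
Y = 1e9                     # inference tokens expected over deployment (Y << X)

budget_large = total_flops(n_large, X, Y)

# Inference tokens the small model can generate while staying within the larger model's FLOPs budget:
matched_inference = (budget_large - 6 * n_small * X) / (2 * n_small)
print(f"FLOPs-matched small model can spend ~{matched_inference / Y:,.0f}x the large model's inference tokens")
```

With these illustrative numbers, the matched inference budget is tens of thousands of times Y, which is why test-time compute is so attractive when Y << X; as Y grows toward X, that headroom shrinks, matching the trend shown in the figure.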

Introduction: The Challenge of Efficiently Using Compute

Imagine tackling a difficult problem—naturally, you would spend more time thinking to arrive at the best solution.

This human-like approach can be instilled in LLMs to improve their accuracy on challenging prompts by using additional computation at test time. This paper delves into the potential of scaling inference-time computation and asks:

Can a fixed but non-trivial amount of test-time compute significantly enhance an LLM's performance on difficult tasks?


Key Methodologies

The researchers explore two primary mechanisms for scaling test-time computation:

  1. Search Against Dense, Process-Based Verifier Reward Models (PRMs):
    • This involves using a verifier to score each step in a generated solution, allowing the model to search for the best answer through multiple iterations.
  2. Adaptive Distribution Updates:
    • Here, the model iteratively refines its responses based on the prompt, akin to a self-improving mechanism. (Both mechanisms are sketched in code below.)
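
The sketch below illustrates the control flow of the two mechanisms. It assumes three hypothetical callables that are not from the paper's code: `generate(prompt, context="")`, which samples one candidate solution from the base LLM (optionally conditioned on previous attempts); `prm_score(prompt, solution)`, which returns a verifier score in [0, 1]; and `final_answer(solution)`, which extracts the final answer string.

```python
from collections import defaultdict

def best_of_n_weighted(prompt, generate, prm_score, final_answer, n=16):
    """Mechanism 1 (search against a verifier): sample N candidates in parallel,
    then return the final answer whose candidates accumulate the highest
    total verifier score (verifier-weighted voting)."""
    candidates = [generate(prompt) for _ in range(n)]
    votes = defaultdict(float)
    for solution in candidates:
        votes[final_answer(solution)] += prm_score(prompt, solution)
    return max(votes, key=votes.get)

def sequential_revisions(prompt, generate, prm_score, n=16):
    """Mechanism 2 (updating the proposal distribution): spend the same budget
    sequentially, conditioning each new attempt on a window of previous attempts,
    then let the verifier pick the best attempt."""
    attempts = []
    for _ in range(n):
        context = "\n\n".join(f"Previous attempt:\n{a}" for a in attempts[-4:])
        attempts.append(generate(prompt, context=context))
    return max(attempts, key=lambda sol: prm_score(prompt, sol))
```

Verifier-weighted selection over parallel samples and verifier-based selection over sequential revisions are both selection rules studied in the paper; the small window of previous attempts used here is an illustrative choice.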


Experimental Setup

The study used the MATH benchmark, a challenging dataset of high-school competition-level math problems, to evaluate the efficacy of test-time compute scaling.

They employed PaLM 2 models fine-tuned specifically either to revise incorrect answers or to verify the correctness of individual steps in a solution.
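
The step-level verifier can be pictured as scoring every prefix of a step-by-step solution. The sketch below is illustrative only: `prm_step_score` is a hypothetical stand-in for the fine-tuned verifier, and aggregating per-step scores by taking the final step's score is one simple choice for ranking complete solutions.

```python
def score_solution_steps(prompt, steps, prm_step_score):
    """Score each prefix of a step-by-step solution with a process reward model (PRM).

    `prm_step_score(prompt, steps_so_far)` is a hypothetical callable returning
    the verifier's estimate that the partial solution is still on a correct path.
    Returns the list of per-step scores and a single aggregate score.
    """
    step_scores = [prm_step_score(prompt, steps[:i]) for i in range(1, len(steps) + 1)]
    # One simple aggregation: rank complete solutions by the final step's score.
    return step_scores, step_scores[-1]
```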


Main Findings

1. Compute-Optimal Scaling Strategy:

  • The effectiveness of scaling test-time compute varies significantly with the difficulty of the prompt. For easier questions, sequentially revising an initial answer often outperformed parallel sampling (best-of-N). Conversely, for more complex problems, parallel sampling or tree search against the PRM proved more effective. (A minimal routing sketch follows this list.)

2. FLOPs-Matched Comparison:

  • The study compared a smaller model with additional test-time compute to a 14× larger model without extra test-time compute. Remarkably, the smaller model with optimized test-time compute outperformed the larger model on problems where the base model had a non-trivial success rate.

3. Adaptive Strategy:

  • By employing a compute-optimal strategy based on the difficulty of the prompt, the efficiency of test-time compute scaling improved by more than 4× compared to a best-of-N sampling baseline.
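
A rough sketch of what a compute-optimal allocation policy can look like in practice: estimate the question's difficulty from a few probe samples (ground-truth correctness is unavailable at test time, so verifier scores stand in for it), then route easier questions to sequential revisions and harder ones to parallel, verifier-ranked sampling. The threshold, probe count, and helpers below are illustrative assumptions, not the paper's exact recipe; `sequential_revisions` and `best_of_n_weighted` refer to the earlier sketch, and `budget` is a number of samples or revisions.

```python
def estimate_difficulty(prompt, generate, prm_score, n_probe=8):
    """Crude difficulty proxy: 1 minus the average verifier score over a few
    probe samples; higher means harder. An illustrative stand-in for the
    paper's difficulty bins, not its exact procedure."""
    scores = [prm_score(prompt, generate(prompt)) for _ in range(n_probe)]
    return 1.0 - sum(scores) / len(scores)

def compute_optimal_answer(prompt, budget, generate, prm_score, final_answer):
    """Spend a fixed test-time budget differently depending on estimated difficulty."""
    difficulty = estimate_difficulty(prompt, generate, prm_score)
    if difficulty < 0.5:
        # Easier questions: sequential revisions tend to pay off.
        return sequential_revisions(prompt, generate, prm_score, n=budget)
    # Harder questions: parallel sampling ranked by the verifier.
    return best_of_n_weighted(prompt, generate, prm_score, final_answer, n=budget)
```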


Implications and Applications

The findings suggest a paradigm shift in the approach to enhancing LLM performance:

  • Efficient Deployment: Smaller, on-device models could be used in place of larger datacenter-scale models by leveraging additional test-time compute.
  • Self-Improving Agents: Automating the generation of improved outputs during inference paves the way for creating self-improving AI agents with reduced human supervision.
  • Cost-Effective Scaling: Instead of investing heavily in pretraining larger models, optimizing test-time compute offers a cost-effective alternative for many applications.


Conclusion

This research opens new avenues in the field of AI by demonstrating that scaling test-time computation can be more effective than simply increasing model parameters.

As we move forward, the balance between pretraining and inference compute will play a crucial role in the development of more efficient and capable AI systems.


Quote from the Paper:
"By appropriately allocating test-time compute, we are able to greatly improve test-time compute scaling, surpassing the performance of a best-of-N baseline while using about 4x less computation."


Future Directions

While the study provides promising results, it also highlights areas for future research:

  • Combining Techniques: Exploring the combination of PRM tree-search with revision models or other techniques like critique and revise.
  • Efficient Difficulty Estimation: Developing methods to quickly and accurately estimate the difficulty of a problem to optimize test-time compute allocation.
  • Iterative Self-Improvement: Investigating how the outputs from additional test-time compute can be distilled back into the base model to create a continuous self-improvement loop.

The insights from this research suggest that the future of AI might lie not just in building larger models, but in smarter, more efficient use of computation during inference.
