Prover-Verifier Games improve legibility of LLM outputs



Original Paper: https://arxiv.org/abs/2407.13692

By: Jan Hendrik Kirchner, Yining Chen, Harri Edwards, Jan Leike, Nat McAleese, Yuri Burda

Abstract:

One way to increase confidence in the outputs of Large Language Models (LLMs) is to support them with reasoning that is clear and easy to check – a property we call legibility.

We study legibility in the context of solving grade-school math problems and show that optimizing chain-of-thought solutions only for answer correctness can make them less legible.

To mitigate the loss in legibility, we propose a training algorithm inspired by the Prover-Verifier Game from Anil et al. (2021).

Our algorithm iteratively trains small verifiers to predict solution correctness, "helpful" provers to produce correct solutions that the verifier accepts, and "sneaky" provers to produce incorrect solutions that fool the verifier.

We find that the helpful prover's accuracy and the verifier's robustness to adversarial attacks increase throughout training.

Furthermore, we show that legibility training transfers to time-constrained humans tasked with verifying solution correctness.

Throughout LLM training, human accuracy increases when checking the helpful prover's solutions and decreases when checking the sneaky prover's solutions. Hence, training for checkability by small verifiers is a plausible technique for increasing output legibility.

Our results suggest legibility training against small verifiers as a practical avenue for increasing the legibility of large LLMs to humans, which could help with the alignment of superhuman models.

Summary Notes


Figure 2: Checkability training produces legible and well-structured solutions. Three sample solutions produced by the helpful prover from different rounds of checkability training.


Figure 3: Schematic illustrating the legibility problem. A poorly aligned, superhumanly capable AI might produce highly efficient code that sacrifices legibility for conciseness or performance. Code is slightly adapted from the bitonic sorter for parallelized sorting of elements in a list (Wikipedia contributors, 2023).

In the rapidly advancing field of artificial intelligence, ensuring the reliability and transparency of AI outputs is paramount, especially as these systems are increasingly deployed in high-stakes environments.

A recent research paper from OpenAI explores a novel approach to enhancing the legibility of outputs from Large Language Models (LLMs) using a concept inspired by Prover-Verifier Games (PVG).

This blog post will unpack the methodologies, findings, and implications of this innovative research for an engineering audience.

Introduction: The Trust Challenge in AI Outputs

As LLMs find applications in critical areas ranging from healthcare to legal analytics, the need for outputs that are not only accurate but also legible and verifiable becomes crucial.

The primary challenge addressed by the research is how to ensure that solutions provided by LLMs can be easily checked and trusted by humans.

Existing methods like Reinforcement Learning from Human Feedback (RLHF) have limitations, prompting the exploration of alternative strategies.

Methodology: Prover-Verifier Game Framework

The research introduces a training algorithm inspired by the Prover-Verifier Game. The core idea is to iteratively train a pair of models: a "prover" that generates solutions and a "verifier" that checks these solutions for correctness.

The prover can take on two roles – helpful or sneaky – to either produce correct solutions or intentionally incorrect ones that attempt to fool the verifier.

This setup mimics a game where the verifier's goal is to accurately judge the correctness of the prover’s outputs.
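To make the game concrete, here is a minimal sketch of a role-conditioned prover reward, assuming the verifier outputs an acceptance score in [0, 1] and a grader supplies a binary correctness label for the final answer. The function name and the exact reward shaping are illustrative simplifications, not the paper's precise formulation.

```python
def prover_reward(role: str, verifier_score: float, is_correct: bool) -> float:
    """Illustrative role-conditioned reward (not the paper's exact formula).

    role           -- "helpful" or "sneaky"
    verifier_score -- the verifier's acceptance score for the solution, in [0, 1]
    is_correct     -- ground-truth correctness of the solution's final answer

    A helpful prover is rewarded for solutions that are correct and convince
    the verifier; a sneaky prover is rewarded for solutions that are incorrect
    yet still convince the verifier. Solutions that violate the assigned role
    earn no reward.
    """
    role_satisfied = is_correct if role == "helpful" else not is_correct
    return verifier_score if role_satisfied else 0.0
```

Under this sketch, a sneaky solution that fools the verifier (high score, wrong answer) earns a high reward, and it is exactly this pressure that forces the verifier to become more robust over successive rounds.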

Key Steps in the Methodology:

  1. Initialization: Both the prover and verifier are initialized with pre-trained LLMs lightly fine-tuned on a diverse set of mathematical problems.
  2. Verifier Training: The verifier is trained using supervised learning to predict the correctness of solutions from a balanced mixture of helpful and sneaky provers.
  3. Prover Training: The prover is trained using reinforcement learning, optimizing for a reward that combines solution correctness and the verifier’s acceptance.

The algorithm iteratively alternates between training the verifier and the prover, refining the capabilities of both models over multiple rounds.
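To ground the verifier phase of each round, the sketch below uses a plain binary cross-entropy objective over correctness labels, as described in step 2 above. This is a minimal illustration under that assumption; the function name, data layout, and example values are hypothetical, and the paper's exact verifier objective may include additional terms.

```python
import math

def verifier_loss(predicted_scores, correctness_labels):
    """Binary cross-entropy for the verifier phase (illustrative).

    predicted_scores   -- verifier acceptance probabilities in (0, 1), one per solution
    correctness_labels -- 1 if the solution's final answer is correct, else 0

    The solutions come from a balanced mix of helpful and sneaky provers from
    earlier rounds, so the verifier learns to separate convincing-but-wrong
    solutions from genuinely correct ones.
    """
    total = 0.0
    for p, y in zip(predicted_scores, correctness_labels):
        total -= y * math.log(p) + (1 - y) * math.log(1 - p)
    return total / len(predicted_scores)


# Example: the verifier is confident in a correct solution (0.9) but is also
# fooled by a sneaky, incorrect one (0.8). The second term dominates the loss,
# pushing the next round of verifier training toward greater robustness.
loss = verifier_loss([0.9, 0.8], [1, 0])
```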

Findings: Balancing Accuracy and Legibility

The research presents several significant findings:

  1. Direct Optimization for Correctness vs. Legibility: Training provers solely for correctness can lead to solutions that, while accurate, are difficult for humans to evaluate quickly.
  2. Effectiveness of Checkability Training: The proposed checkability training method achieves a balance where solutions are both reasonably accurate and legible. This is illustrated by the trade-off known as the "legibility tax," where some accuracy is sacrificed to enhance legibility.
  3. Robustness of Verifiers: Verifiers trained through this iterative process become increasingly adept at identifying subtly incorrect solutions generated by the sneaky prover.
  4. Human Legibility: Interestingly, the legibility improvements for smaller LLM verifiers also transfer to human evaluators. Solutions that are more legible to smaller verifiers are also easier for humans to assess correctly.

Statistical Support:

  • Figure 1 in the research shows that optimizing for solution correctness alone leads to high accuracy but poor legibility, whereas checkability training maintains high legibility with a moderate accuracy level.
  • Human evaluation experiments demonstrated that time-constrained humans could more accurately judge the correctness of solutions from the checkability-trained prover compared to those optimized solely for correctness.

Implications and Applications

The implications of this research are far-reaching:

  1. Enhanced Trust in AI Systems: By making LLM outputs more legible, we can increase human trust in AI systems, which is crucial for their adoption in sensitive domains.
  2. Scalable Oversight: The use of smaller LLM verifiers as proxies for human judges provides a scalable method for improving the legibility of AI outputs without relying exclusively on human feedback.
  3. Future AI Alignment: As LLMs continue to evolve towards superhuman capabilities, training them to produce legible and verifiable outputs will be essential for ensuring alignment with human values and expectations.

Potential applications include:

  • Education: AI tutors that provide clear and understandable step-by-step solutions to students.
  • Healthcare: AI systems that generate comprehensible and verifiable medical recommendations.
  • Legal: AI tools that assist in legal reasoning and documentation by producing legible and thorough explanations.

Conclusion: Towards Trustworthy AI

The research by OpenAI highlights a promising approach to addressing the legibility and trustworthiness of AI outputs. By leveraging the Prover-Verifier Game framework, the study demonstrates that it is possible to train LLMs to produce solutions that are not only correct but also easily verifiable by humans and smaller AI models. This advancement is a crucial step towards the broader goal of aligning AI systems with human needs and values.

As we look to the future, further exploration into semi-supervised or unsupervised methods and extending these techniques to more complex domains will be essential. The insights gained from this research provide a foundation for developing AI systems that can be trusted to operate transparently and reliably in the real world.
