Following Length Constraints in Instructions



Original Paper: https://arxiv.org/abs/2406.17744

By: Weizhe Yuan, Ilia Kulikov, Ping Yu, Kyunghyun Cho, Sainbayar Sukhbaatar, Jason Weston, Jing Xu

Abstract:

Aligned instruction following models can better fulfill user requests than their unaligned counterparts.

However, it has been shown that there is a length bias in evaluation of such models, and that training algorithms tend to exploit this bias by learning longer responses.

In this work we show how to train models that can be controlled at inference time with instructions containing desired length constraints.

Such models are superior in length instructed evaluations, outperforming standard instruction following models such as GPT4, Llama 3 and Mixtral.

Summary Notes

In the realm of artificial intelligence, instruction-following models are pivotal in delivering precise and user-centric responses.

However, a persistent challenge has been length bias: evaluations tend to favor longer, more verbose answers, and models trained against such evaluations learn to produce them.

This article delves into a novel approach, dubbed Length-Instruction Fine-Tuning (LIFT), which aims to address this issue by training models to adhere strictly to length constraints during inference.

Introduction to the Problem

Instruction-following models, such as GPT-4 and Llama, are designed to respond to user prompts effectively. However, these models often exhibit a "length bias," where both human evaluators and automatic model-based judges tend to prefer longer responses.

This bias not only skews evaluations but also complicates training algorithms that rely on these evaluations for optimization.

A simple instruction like "Give me information about Coco Gauff" can be answered in a few sentences, paragraphs, or even a multi-page document, depending on the context.

This ambiguity in expected response length poses a significant challenge in both training and evaluation.

Key Methodologies

To tackle this challenge, the researchers introduced the concept of Length-Instruction Fine-Tuning (LIFT). The methodology involves:

  1. Augmenting Training Data: Existing training prompts are augmented with explicit length instructions so that the model learns to respect the stated limit.
  2. LIFT with Direct Preference Optimization (LIFT-DPO): DPO is then applied to preference data built from these length-augmented prompts, fine-tuning the model to follow length instructions while maintaining high response quality (a minimal sketch follows this list).
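
To make this concrete, here is a minimal Python sketch of how an ordinary preference pair could be turned into a length-instructed one. The instruction wording, the whitespace-based word count, and the rule for flipping the preference are assumptions made for illustration, not the authors' exact implementation.

```python
# Sketch of building a length-instructed preference pair for LIFT-DPO.
# Prompt wording, word counting, and the flip rule are illustrative assumptions.

def word_count(text: str) -> int:
    """Approximate response length by counting whitespace-separated words."""
    return len(text.split())


def build_lift_pair(prompt: str, chosen: str, rejected: str, limit: int) -> dict:
    """Augment a preference pair with an explicit maximum-word instruction.

    If the originally chosen response violates the limit while the rejected
    one respects it, the preference is flipped so that exceeding the stated
    length is penalized during DPO training.
    """
    li_prompt = f"Answer the following instruction using {limit} words or fewer.\n{prompt}"

    chosen_ok = word_count(chosen) <= limit
    rejected_ok = word_count(rejected) <= limit

    if not chosen_ok and rejected_ok:
        # Only the shorter response obeys the limit: flip the preference.
        preferred, dispreferred = rejected, chosen
    else:
        preferred, dispreferred = chosen, rejected

    return {"prompt": li_prompt, "chosen": preferred, "rejected": dispreferred}


if __name__ == "__main__":
    pair = build_lift_pair(
        prompt="Give me information about Coco Gauff",
        chosen="Coco Gauff is an American professional tennis player who rose to prominence as a teenager on the WTA tour.",
        rejected="Coco Gauff is an American tennis player.",
        limit=10,
    )
    print(pair["prompt"])
    print("chosen:", pair["chosen"])  # the short, limit-respecting answer
```

Pairs constructed this way can be passed to any standard DPO trainer; the only change from ordinary DPO is that the prompt now states the limit and the preferred response respects it.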

Creating Length-Instructed Benchmarks

To evaluate the effectiveness of the proposed methodology, the researchers constructed two length-instructed benchmarks (a sketch of how such prompts can be built follows the list):

  • AlpacaEval-LI: An extension of AlpacaEval 2 with added length instructions.
  • MT-Bench-LI: An extension of MT-Bench, whose prompts span a wider range of task categories, with length constraints added.
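
As an illustration, the sketch below prepends a maximum-word instruction to a benchmark prompt, deriving the limit from the word count of a reference response. Both the instruction wording and the limit-selection heuristic are assumptions; the paper's exact construction may differ.

```python
# Sketch: turning an ordinary benchmark prompt into a length-instructed one.
# Deriving the limit from a reference response is an illustrative assumption.

def length_instructed_prompt(prompt: str, reference_response: str) -> tuple[str, int]:
    """Prepend a maximum-word instruction whose limit comes from a reference answer."""
    limit = len(reference_response.split())
    li_prompt = f"Answer the following instruction using {limit} words or fewer.\n{prompt}"
    return li_prompt, limit


if __name__ == "__main__":
    prompt = "Give me information about Coco Gauff"
    reference = "Coco Gauff is an American professional tennis player."
    li_prompt, limit = length_instructed_prompt(prompt, reference)
    print(li_prompt)  # the prompt now carries an explicit word limit
    print(limit)      # 8
```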

Training and Evaluation

The training involved fine-tuning models using the LIFT-DPO method on datasets that included length instructions. The evaluation metrics focused on:

  • Violation Rate (Vlt%): The percentage of responses that exceed the stated length constraint (a toy computation is sketched after this list).
  • Win Rate (Win%): The percentage of head-to-head comparisons in which a model's response is judged better than that of a strong baseline.
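
As a toy illustration of the first metric, the snippet below computes the violation rate for a small batch of responses; the whitespace word count and the data layout are assumptions for illustration.

```python
# Illustrative computation of the violation rate (Vlt%): the share of
# responses whose word count exceeds their stated length limit.

def violation_rate(responses: list[str], limits: list[int]) -> float:
    """Return the percentage of responses longer than their word limit."""
    assert len(responses) == len(limits)
    violations = sum(
        1 for text, limit in zip(responses, limits) if len(text.split()) > limit
    )
    return 100.0 * violations / len(responses)


if __name__ == "__main__":
    responses = [
        "Coco Gauff is an American tennis player.",  # 7 words, limit 10 -> ok
        "Coco Gauff is an American professional tennis player born in 2004 in Atlanta.",  # 13 words, limit 10 -> violation
    ]
    print(f"Vlt% = {violation_rate(responses, limits=[10, 10]):.1f}")  # -> 50.0
```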

Main Findings

The results of the experiments were illuminating:

  1. State-of-the-Art Models Struggle with Length Instructions: Models like GPT-4 and Claude 3 showed high violation rates, failing to adhere to length constraints in nearly 50% of cases.
  2. LIFT-DPO Outperforms Standard Methods: Models trained using LIFT-DPO demonstrated significantly lower violation rates and improved win rates. For instance, the Llama2-70B-Base model saw a reduction in violation rate from 65.8% (standard DPO) to 7.1% (LIFT-DPO).

Robustness and Scalability

The LIFT-DPO models maintained low violation rates even when the length constraints were scaled down, showcasing their robustness.

This contrasts sharply with standard DPO and regularized DPO models, which struggled to adhere to more stringent length constraints.

Implications and Applications

The implications of this research are vast:

  • Improved User Experience: By adhering to length instructions, models can provide responses that are tailored to the user's context, whether it's a quick answer or a detailed explanation.
  • Enhanced Model Evaluation: Length-instructed benchmarks provide a more accurate assessment of a model's ability to follow instructions, reducing the bias towards longer responses.
  • Broader Applicability: The methodology can be extended to other types of length instructions, such as character limits or different phrasing, making it versatile for various applications.

Conclusion

The introduction of Length-Instruction Fine-Tuning (LIFT) marks a significant advancement in the development of instruction-following models.

By addressing the length bias inherent in current models, LIFT-DPO not only enhances the quality and relevance of AI-generated responses but also provides a robust framework for training and evaluating these models.

As AI continues to evolve, methodologies like LIFT will play a crucial role in ensuring that models are aligned with user expectations and can handle the nuanced requirements of real-world applications.

Quote from the Research Paper:
"We show that many existing state-of-the-art instruction following models fail to follow such maximum word length instructions adequately. To measure this we construct and evaluate models on length instructed versions of AlpacaEval 2 and MT-Bench." — Weizhe Yuan et al.

Potential Areas for Future Research:

  • Investigating human preferences for response lengths across different types of instructions.
  • Exploring alternative metrics for evaluating model performance under length constraints.
  • Extending the length instructions to include more varied and complex scenarios.


By addressing these areas, future research can further refine and enhance the capabilities of instruction-following models, paving the way for more intuitive and effective AI interactions.
