Weight-Decomposed Low-Rank Adaptation (DoRA)

Introduction to DoRA: Weight-Decomposed Low-Rank Adaptation

The challenge of efficiently fine-tuning pre-trained models for specific tasks has become increasingly significant. As models grow larger and more complex, the traditional approach of full fine-tuning (FT) becomes prohibitively expensive in terms of computational resources.

To tackle this, parameter-efficient fine-tuning (PEFT) methods, like Low-Rank Adaptation (LoRA), have emerged. LoRA reduces the number of trainable parameters without increasing inference costs, but it doesn't fully match the learning capacity of FT.

Enter DoRA (Weight-Decomposed Low-Rank Adaptation), a novel approach that enhances LoRA's learning capacity and stability while retaining efficiency. By decomposing pre-trained weights into magnitude and direction components, DoRA enables targeted updates, minimizing trainable parameters and mimicking FT's learning capabilities.

In this post, we'll explore how DoRA works, delve into its weight decomposition, and assess its potential to transform AI model adaptation. By combining the strengths of LoRA and full FT, DoRA offers a promising solution to modern AI challenges, improving performance without added inference latency.

Background: Tracing the Evolution of Parameter-Efficient Fine-Tuning (PEFT)

In the quest to optimize large-scale model fine-tuning, PEFT methods have emerged as pivotal solutions.

These methods significantly reduce the computational and financial burdens of adapting pre-trained models to specific tasks.

Among the numerous approaches, PEFT methods are broadly categorized into three primary types: Adapter-based, Prompt-based, and Low-Rank Adaptation methods.

  1. Adapter-Based Methods: These methods introduce additional trainable modules into the original frozen backbone of the model. The approach primarily involves integrating linear modules in sequence or parallel to existing layers. While effective, these methods often incur increased inference latency due to the added complexity.
  2. Prompt-Based Methods: This category enhances the initial input with extra soft tokens or prompts, focusing exclusively on fine-tuning these new elements. It remains sensitive to initialization, which can impact effectiveness.
  3. Low-Rank Adaptation (LoRA): LoRA and its variants are notable for fine-tuning models without adding extra inference burdens. By applying low-rank matrices to approximate weight updates, LoRA allows for efficient fine-tuning that can be seamlessly merged with pre-trained weights. This category has seen innovations like SVD decomposition and orthogonal factorization, showcasing the versatility and adaptability of the approach.

DoRA (Weight-Decomposed Low-Rank Adaptation) builds upon these foundations, particularly leveraging the strengths of LoRA. By decomposing weight updates into magnitude and directional components, DoRA refines the fine-tuning process, promising learning capacity closer to full fine-tuning (FT) while retaining the efficiency hallmark of LoRA-based methods. Through this approach, DoRA aims to close the performance gap between LoRA and FT, offering a more robust and scalable solution for AI model adaptation.

Exploring the Patterns of LoRA and Full Fine-Tuning: A Deep Dive

Figure 1. An Overview of DoRA

Figure 1 illustrates the DoRA methodology, which decomposes pre-trained weights into magnitude and direction components. This decomposition allows for more precise updates, particularly in the direction component, using LoRA techniques. The notation ||·||_c denotes the vector-wise norm across each column, which is crucial for normalizing the direction component and maintaining scale consistency during adaptation. This stabilization enhances the model's adaptability while remaining resource-efficient. By addressing the limitations of traditional LoRA, DoRA allows for nuanced parameter adjustments, bridging the gap between parameter efficiency and full fine-tuning. This approach improves learning capacity and training stability, offering a high-performance alternative that boosts model performance across various applications without extra computational costs.

Low-Rank Adaptation (LoRA)

LoRA has emerged as a prominent approach in parameter-efficient fine-tuning methods.

The fundamental idea behind LoRA is based on the hypothesis that updates made during fine-tuning exhibit a low "intrinsic rank."

To implement this, LoRA uses the product of two low-rank matrices to update pre-trained weights incrementally.

For instance, consider a pre-trained weight matrix W0 of dimensions d×k. LoRA models the weight update ΔW using a low-rank decomposition, represented as BA, where B and A are low-rank matrices with dimensions d×r and r×k, respectively, and r is much smaller than both d and k.

This decomposition allows the fine-tuned weight W' to be expressed as:

W' = W0 + ΔW = W0 + BA

Here, W0 remains static during the fine-tuning process, and the parameters in A and B are the ones being trained. LoRA's ability to merge learned updates with the pre-trained weight matrix before deployment ensures no additional latency during inference, making it efficient for large-scale models.
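To make this concrete, here is a minimal PyTorch sketch of a LoRA-style linear layer. The class name LoRALinear, the default rank, and the initialization scale are illustrative assumptions rather than a reference implementation:

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Minimal LoRA sketch: W' = W0 + B @ A, with W0 frozen."""

    def __init__(self, d: int, k: int, r: int = 8):
        super().__init__()
        # Frozen pre-trained weight W0 (d x k); random here only for illustration.
        self.W0 = nn.Parameter(torch.randn(d, k), requires_grad=False)
        # Low-rank factors: B starts at zero and A is small random noise,
        # so B @ A = 0 and W' == W0 before any fine-tuning.
        self.B = nn.Parameter(torch.zeros(d, r))
        self.A = nn.Parameter(torch.randn(r, k) * 0.01)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (..., d) -> (..., k), using the adapted weight W' = W0 + B @ A.
        return x @ (self.W0 + self.B @ self.A)

    @torch.no_grad()
    def merged_weight(self) -> torch.Tensor:
        # Fold the learned update into a single dense matrix for deployment,
        # so inference incurs no extra latency.
        return self.W0 + self.B @ self.A
```

Only B and A receive gradients, which is where the savings come from: for d = k = 4096 and r = 8, the factors hold 4096·8 + 8·4096 ≈ 65K trainable parameters versus roughly 16.8M for a full ΔW.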

Weight Decomposition Analysis

The study of LoRA reveals that it can be seen as a general approximation of full fine-tuning (FT). By increasing the rank r to match the rank of the pre-trained weights, LoRA can theoretically achieve a level of expressiveness similar to FT.

However, the discrepancy in accuracy between LoRA and FT often arises due to the limited number of trainable parameters.

To address this, a novel weight decomposition analysis is introduced, inspired by Weight Normalization.

This analysis reparameterizes the weight matrix into magnitude and direction components, unveiling the inherent differences between the learning patterns of LoRA and FT. For a weight matrix W, this decomposition is formulated as:

W = m V

where m is the magnitude vector and V is the directional matrix. This decomposition ensures that each column of V is a unit vector, with the corresponding scalar in m defining the magnitude of each column.
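As a quick sketch of this reparameterization (the function name and the use of PyTorch are my own choices), the magnitude and direction components can be recovered column-wise from any weight matrix:

```python
import torch

def decompose(W: torch.Tensor):
    """Split W (d x k) into a magnitude vector m (1 x k) and a
    directional matrix V (d x k) whose columns are unit vectors."""
    m = W.norm(p=2, dim=0, keepdim=True)  # column-wise norms ||W||_c
    V = W / m                             # rescale each column to unit length
    return m, V

W = torch.randn(16, 8)
m, V = decompose(W)
print(torch.allclose(m * V, W, atol=1e-6))                      # W == m V
print(torch.allclose(V.norm(dim=0), torch.ones(8), atol=1e-5))  # unit columns
```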

The analysis highlights the distinct learning behaviors of each method by analyzing the updates in both magnitude and direction of LoRA and FT weights relative to pre-trained weights.

The analysis suggests that while LoRA tends to make proportional changes in direction and magnitude, FT exhibits more nuanced learning patterns.

Specifically, FT can perform slight directional changes alongside significant magnitude alterations, which LoRA lacks due to its simultaneous learning of both components.
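A minimal sketch of how these magnitude (ΔM) and direction (ΔD) changes between a pre-trained weight W0 and a fine-tuned checkpoint Wt might be measured, assuming the column-wise decomposition above; the exact averaging used in the paper may differ in detail:

```python
import torch
import torch.nn.functional as F

def magnitude_direction_deltas(W0: torch.Tensor, Wt: torch.Tensor):
    """Mean change in column magnitudes (ΔM) and in column
    directions (ΔD, measured as 1 - cosine similarity)."""
    delta_m = (Wt.norm(dim=0) - W0.norm(dim=0)).abs().mean()  # ΔM
    cos = F.cosine_similarity(W0, Wt, dim=0)  # per-column direction agreement
    delta_d = (1.0 - cos).mean()              # ΔD
    return delta_m.item(), delta_d.item()
```

Plotting (ΔD, ΔM) pairs across layers and training steps yields scatter plots like those in Figure 2, where the sign of the trend separates FT and DoRA from LoRA.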

Through weight decomposition analysis, it becomes evident that LoRA's limitations may stem from the difficulty of learning magnitude and directional adaptations concurrently.

This insight drives the development of DoRA, a variant that aims to emulate FT's learning pattern more closely, potentially enhancing LoRA's learning capacity.

This comprehensive examination of LoRA and FT patterns sheds light on the strengths and weaknesses of current fine-tuning methods and paves the way for innovations like DoRA that promise to bridge the gap between parameter efficiency and learning capacity.

Figure 2. Magnitude and direction updates of (a) FT, (b) LoRA, and (c) DoRA of the query matrices across different layers and intermediate steps.

Figure 2 illustrates the differences in learning patterns between full fine-tuning (FT), Low-Rank Adaptation (LoRA), and DoRA by examining the magnitude (ΔM) and direction (ΔD) updates of query matrices across various layers and training steps.

In Figure 2(a), representing FT, the plot shows a negative slope, indicating that FT can make nuanced adjustments where changes in direction and magnitude are not strictly proportional. This flexibility allows FT to perform slight directional changes alongside significant magnitude alterations, showcasing its robust learning capability.

Figure 2(b), depicting LoRA, reveals a consistent positive slope. This trend suggests that LoRA adjusts direction and magnitude proportionally, lacking the subtlety seen in FT. LoRA's updates are more linear, which may limit its ability to make complex adaptations.

Figure 2(c), illustrating DoRA, shows a pattern more similar to FT, with a negative slope. This indicates that DoRA can achieve more sophisticated learning patterns like FT by decoupling magnitude and direction updates. DoRA's approach allows for significant directional changes with minimal magnitude adjustments, enhancing its learning capacity over LoRA.

Overall, the figure highlights how DoRA bridges the gap between LoRA and FT, combining efficiency with the nuanced learning capabilities of FT. This makes DoRA a promising parameter-efficient fine-tuning method capable of handling complex tasks with improved adaptability.

Methodology: Unpacking DoRA's Innovative Approach

Weight-Decomposed Low-Rank Adaptation

Building on the insights obtained from weight decomposition analysis, DoRA introduces a groundbreaking approach to fine-tuning known as Weight-Decomposed Low-Rank Adaptation.

This method begins by dissecting the pre-trained weight into two critical components: magnitude and direction.

Both components are subject to fine-tuning, but the directional component, which is substantial in terms of parameter volume, is further decomposed using LoRA for efficient fine-tuning.

This dual-focus approach allows DoRA to optimize both components separately, enhancing learning capacity without the additional inference overhead typically associated with full fine-tuning (FT).

The rationale behind this approach is twofold. Firstly, by concentrating LoRA specifically on directional adaptation while allowing the magnitude to remain tunable, DoRA simplifies the task compared to LoRA's original dual adaptation approach.

This specialization makes the learning process more stable and manageable. Secondly, the decomposition facilitates more stable optimization of directional updates, leveraging weight decomposition to streamline the process.

Implementation Details

DoRA's implementation diverges from traditional weight normalization techniques, which train both components from scratch and are sensitive to initialization.

Instead, DoRA begins with pre-trained weights, thus bypassing initialization challenges. It initializes the decomposition with m = ||W0||_c and V = W0, where V is frozen and m remains a trainable vector.

The directional component is then updated through LoRA, ensuring that the adapted weight W' maintains the relationship:

W' = m (V + ΔV) / ||V + ΔV||_c = m (W0 + BA) / ||W0 + BA||_c

Here, ΔV = BA represents the incremental directional update learned through the product of the two low-rank matrices B and A, which are initialized following LoRA's strategy so that W' equals W0 before fine-tuning. This setup enhances learning capacity and allows DoRA to merge its updates back into the pre-trained weight without adding inference latency.
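Putting the formulas together, the following is a minimal PyTorch sketch of a DoRA-style linear layer, assuming the initialization m = ||W0||_c and the normalized update described above (class and method names are illustrative, not from the official implementation):

```python
import torch
import torch.nn as nn

class DoRALinear(nn.Module):
    """Sketch of DoRA: W' = m * (W0 + B @ A) / ||W0 + B @ A||_c."""

    def __init__(self, W0: torch.Tensor, r: int = 8):
        super().__init__()
        d, k = W0.shape
        self.W0 = nn.Parameter(W0.clone(), requires_grad=False)  # V = W0, frozen
        # Magnitude m initialized to the column-wise norm of W0, and trainable.
        self.m = nn.Parameter(W0.norm(dim=0, keepdim=True))      # shape (1, k)
        # LoRA-style init: B = 0, so W' == W0 before fine-tuning.
        self.B = nn.Parameter(torch.zeros(d, r))
        self.A = nn.Parameter(torch.randn(r, k) * 0.01)

    def adapted_weight(self) -> torch.Tensor:
        V = self.W0 + self.B @ self.A        # V + ΔV = W0 + B @ A
        V = V / V.norm(dim=0, keepdim=True)  # divide by ||V + ΔV||_c column-wise
        return self.m * V                    # rescale unit directions by m

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x @ self.adapted_weight()

layer = DoRALinear(torch.randn(16, 8), r=4)
print(layer(torch.randn(2, 16)).shape)  # torch.Size([2, 8])
```

After training, adapted_weight() can be evaluated once and stored as a single dense matrix, which is how DoRA preserves LoRA's zero-overhead inference.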

Figure 2 visually compares the magnitude and directional differences between DoRA, LoRA, and FT across different layers.

Unlike LoRA, which shows a consistent positive slope indicating a proportional relationship between direction and magnitude changes, DoRA and FT exhibit a distinct negative slope.

This pattern suggests that DoRA can achieve substantial directional adjustments with minimal magnitude changes, aligning its learning capacity more closely with FT.

Such insights underline DoRA's superior learning ability, offering a robust alternative to fine-tuning methods like LoRA.

Experiments: Evaluating DoRA's Effectiveness Across Domains

The effectiveness of DoRA is rigorously tested through a series of experiments spanning various domains, including language, image, and video tasks.

These experiments aim to validate DoRA's performance against other parameter-efficient fine-tuning (PEFT) methods and to demonstrate its versatility and superiority.

Commonsense Reasoning

DoRA was evaluated on LLaMA models (7B/13B) for commonsense reasoning tasks, a key area where it outperformed several baseline methods, including Prompt Learning and Adapter-based methods.

For instance, DoRA enhanced the accuracy of LLaMA-7B by 3.7% over LoRA, showcasing its improved learning capacity. Even when the rank was halved, DoRA surpassed LoRA's performance, highlighting its efficiency with fewer trainable parameters.

Image and Video-Text Understanding

For multi-modal tasks, DoRA was tested on VL-BART across various image-text and video-text datasets.

The results showed that DoRA consistently outperformed LoRA in accuracy while maintaining a similar number of trainable parameters.

Notably, DoRA achieved nearly 1% higher accuracy for image-text tasks and approximately 2% more for video-text tasks, proving its robustness in handling complex multi-modal inputs.

Visual Instruction Tuning

DoRA was also applied to larger models, such as LLaVA-1.5-7B, for visual instruction tuning.

Even in scenarios where full fine-tuning (FT) often leads to overfitting, DoRA outperformed both LoRA and FT, achieving an average improvement of 0.7% over LoRA.

This demonstrates DoRA's ability to enhance model performance without the drawbacks of excessive parameter tuning.

Compatibility with LoRA Variants

The experiments extended to exploring DoRA's compatibility with other LoRA variants, particularly VeRA.

Integrating VeRA with DoRA, resulting in DVoRA, combined the strengths of both methods, achieving comparable or superior performance to LoRA with significantly fewer parameters.

This compatibility underscores DoRA's flexibility and potential for broad application across different PEFT strategies.

Overall, experiments highlight DoRA's capability to consistently outperform traditional methods in single and multi-modal tasks, establishing it as a powerful tool for efficient model fine-tuning across diverse applications.

Broader Impacts and Conclusion: The Reach and Future of DoRA

The development of DoRA signifies a transformative step in parameter-efficient fine-tuning, offering a method that maintains efficiency without sacrificing performance.

One of the most significant broader impacts of DoRA is its potential to democratize access to advanced AI models.

By reducing the computational resources required for fine-tuning, DoRA enables more institutions and individuals with limited resources to adapt large-scale models for their specific needs, fostering innovation across diverse fields such as education, healthcare, and environmental science.

Moreover, DoRA's adaptability with various LoRA variants, such as VeRA, showcases its versatility, allowing it to be integrated into existing frameworks with minimal adjustments.

This flexibility enhances its applicability and ensures that it can continue evolving alongside advancements in AI technologies.

Furthermore, in the context of AI ethics, DoRA's efficiency could lead to more sustainable AI development practices by minimizing the carbon footprint associated with model training and fine-tuning.


DoRA emerges as a compelling advancement in model adaptation, addressing the limitations of previous parameter-efficient methods.

By leveraging a novel weight decomposition analysis, DoRA bridges the gap between LoRA and full fine-tuning (FT), achieving a learning capacity that mirrors FT while maintaining the efficiency hallmark of LoRA.

This capability is validated across diverse tasks, from commonsense reasoning to visual instruction tuning, where DoRA consistently outperforms traditional methods without incurring additional inference latency.

The compatibility of DoRA with other PEFT methods, as demonstrated with VeRA, further underscores its potential to serve as a cornerstone for future advancements in efficient model training.

As AI continues to evolve, methods like DoRA will be crucial in ensuring that the benefits of AI technology are accessible, sustainable, and capable of addressing the complex challenges of tomorrow.

Looking ahead, the exploration of DoRA's applicability beyond language and vision, particularly in audio and other domains, presents exciting opportunities for further research and innovation.

References:

[1] S.-Y. Liu et al., "DoRA: Weight-Decomposed Low-Rank Adaptation," Jul. 09, 2024, arXiv: arXiv:2402.09353. doi: 10.48550/arXiv.2402.09353.
[2] Y. Mao, K. Huang, C. Guan, G. Bao, F. Mo, and J. Xu, "DoRA: Enhancing Parameter-Efficient Fine-Tuning with Dynamic Rank Distribution," Jun. 26, 2024, arXiv: arXiv:2405.17357. doi: 10.48550/arXiv.2405.17357.
