From LLMs to LLM-based Agents for Software Engineering: A Survey of Current, Challenges and Future Practices
Original Paper: https://arxiv.org/abs/2408.02479
By: Haolin Jin, Linghan Huang, Haipeng Cai, Jun Yan, Bo Li, Huaming Chen
Abstract:
With the rise of large language models (LLMs), researchers are increasingly exploring their applications in various vertical domains, such as software engineering.
LLMs have achieved remarkable success in areas including code generation and vulnerability detection. However, they also exhibit numerous limitations and shortcomings.
LLM-based agents, a novel technology with the potential for Artificial General Intelligence (AGI), combine LLMs as the core for decision-making and action-taking, addressing some of the inherent limitations of LLMs such as the lack of autonomy and self-improvement.
Despite numerous studies and surveys exploring the possibility of using LLMs in software engineering, these works lack a clear distinction between LLMs and LLM-based agents.
Unified standards and benchmarks for qualifying an LLM solution as an LLM-based agent in this domain are still at an early stage. In this survey, we broadly investigate current practices and solutions for LLMs and LLM-based agents in software engineering.
In particular, we summarise six key topics: requirement engineering, code generation, autonomous decision-making, software design, test generation, and software maintenance.
We review and differentiate the work of LLMs and LLM-based agents from these six topics, examining their differences and similarities in tasks, benchmarks, and evaluation metrics.
Finally, we discuss the models and benchmarks used, providing a comprehensive analysis of their applications and effectiveness in software engineering.
We anticipate this work will shed some light on pushing the boundaries of LLM-based agents in software engineering for future research.
Summary Notes
Introduction
In the ever-evolving landscape of software engineering, the integration of artificial intelligence has brought forth unprecedented advancements.
Large Language Models (LLMs) and LLM-based agents have emerged as powerful tools capable of revolutionizing various facets of software engineering.
From automating code generation to enhancing software security, these models are setting new benchmarks for efficiency and innovation.
This blog post delves into the applications, methodologies, and implications of LLMs and LLM-based agents in software engineering.
Code Generation and Software Development
The Potential of LLMs
LLMs have shown remarkable capabilities in automating code generation, debugging, and even project documentation.
Tools like OpenAI's Codex and GitHub Copilot are prime examples, integrating LLMs to provide real-time code suggestions and automate repetitive tasks.
A controlled study found that developers using GitHub Copilot completed a coding task 55.8% faster, demonstrating the significant impact of LLMs on productivity.
Advanced Techniques
One notable technique is "print debugging," in which the LLM inserts print statements to track variable values and execution flow, then uses the resulting runtime traces to refine its code.
This method proved particularly effective on medium-difficulty LeetCode problems, improving performance by 17.9%.
Another innovative approach is the Cycle framework, which enables LLMs to learn from execution feedback, improving code generation performance by up to 63.5%.
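The survey reports these techniques at the level of results, so here is a minimal sketch of the loop both share: generate code, execute it, and feed the runtime output (including any print-debugging traces the model inserted) back for another attempt. The `generate_code` helper, the prompt wording, and the pass check are assumptions for illustration, not the authors' actual interfaces.

```python
import subprocess
import sys
import tempfile

def generate_code(prompt: str) -> str:
    # Hypothetical LLM call; plug in any chat-completion client.
    raise NotImplementedError

def run_candidate(code: str, test_input: str) -> str:
    """Execute the candidate and capture stdout/stderr, including any
    print-debugging traces the model added."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    result = subprocess.run([sys.executable, path], input=test_input,
                            capture_output=True, text=True, timeout=10)
    return result.stdout + result.stderr

def refine(task: str, test_input: str, expected: str, rounds: int = 3) -> str:
    """Regenerate code from execution feedback until a check passes."""
    prompt = (f"Solve this task in Python, adding print statements that "
              f"trace key variables:\n{task}")
    code = generate_code(prompt)
    for _ in range(rounds):
        output = run_candidate(code, test_input)
        if expected in output:
            return code  # crude pass check, enough for the sketch
        # Feed the execution trace back so the model can localise the fault.
        prompt = (f"{task}\nYour previous attempt printed:\n{output}\n"
                  f"Expected output containing: {expected}\nFix the code.")
        code = generate_code(prompt)
    return code
```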
LLM-Based Agents: The Next Frontier
LLM-based agents extend the capabilities of traditional LLMs by incorporating decision-making and interactive problem-solving functions.
The self-collaboration framework, for instance, employs multiple ChatGPT (GPT-3.5-turbo) agents in the roles of analyst, coder, and tester, significantly improving code generation quality.
The framework achieved a 29.9% improvement on the HumanEval benchmark over state-of-the-art baselines.
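The survey describes self-collaboration at the level of roles, so the following is an assumed minimal pipeline rather than the framework's actual protocol: three chat calls share one task, each with a different system prompt, and the tester's report drives a single repair round. The prompts are illustrative.

```python
from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask(system_prompt: str, user_message: str) -> str:
    """One role = one system prompt, mirroring the analyst/coder/tester split."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "system", "content": system_prompt},
                  {"role": "user", "content": user_message}],
    )
    return response.choices[0].message.content

def self_collaborate(task: str) -> str:
    plan = ask("You are an analyst. Decompose the task into subtasks.", task)
    code = ask("You are a coder. Implement the plan in Python.",
               f"Task: {task}\nPlan: {plan}")
    report = ask("You are a tester. Report defects in this code.",
                 f"Task: {task}\nCode:\n{code}")
    return ask("You are a coder. Fix the code per the test report.",
               f"Code:\n{code}\nTest report:\n{report}")
```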
Autonomous Learning and Decision Making
LLMs in Decision Making
LLMs have advanced the field of autonomous learning and decision-making by enabling automated data analysis and model building.
Techniques like voting inference systems optimize performance by aggregating multiple LLM calls, with studies showing a non-linear relationship between the number of calls and overall system performance.
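As a concrete (if simplified) picture of voting inference, the sketch below samples the same query several times and returns the most frequent answer; `sample` is a hypothetical callable wrapping one non-deterministic LLM call, and the answer normalisation is an illustrative choice.

```python
from collections import Counter

def vote(query: str, sample, n_calls: int = 7) -> str:
    """Majority voting over repeated LLM calls. `sample` issues one
    call at temperature > 0 and returns its answer as a string."""
    answers = [sample(query).strip().lower() for _ in range(n_calls)]
    winner, _ = Counter(answers).most_common(1)[0]
    return winner
```

Given the non-linear scaling result, `n_calls` is itself a parameter worth tuning rather than simply maximising.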
Creativity and Reasoning
LLMs also exhibit promising results in generating creative content and in evaluative tasks: studies have shown that models like GPT-3.5 and GPT-4 can simulate human decision-making processes, acting as judges that evaluate other LLM-driven chat assistants.
These models have demonstrated consistency with human judgments, providing new avenues for automated evaluation and optimization.
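A minimal sketch of this judging setup, assuming a pairwise comparison prompt (the prompt text and the `ask` helper are illustrative, not taken from the surveyed papers):

```python
JUDGE_PROMPT = """You are an impartial judge. Compare two assistant
responses to the user question below on helpfulness, accuracy, and depth.
Answer with exactly one word: A, B, or TIE.

Question: {question}
Response A: {answer_a}
Response B: {answer_b}
Verdict:"""

def judge(ask, question: str, answer_a: str, answer_b: str) -> str:
    """`ask` is a hypothetical single-call LLM helper returning a string."""
    reply = ask(JUDGE_PROMPT.format(question=question, answer_a=answer_a,
                                    answer_b=answer_b)).strip().upper()
    for verdict in ("TIE", "A", "B"):
        if reply.startswith(verdict):
            return verdict
    return "TIE"  # fall back when the judge goes off-format
```

In practice such judges are commonly run twice with the two responses swapped, to control for position bias.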
Multi-Agent Collaboration
LLM-based agents excel in multi-agent collaboration, enhancing task efficiency and effectiveness. The AGENTVERSE framework, for instance, leverages GPT-4 to improve task completion through group dynamics and collaborative problem-solving.
This framework not only excels in individual task performance but also significantly enhances group collaboration in areas like coding and tool usage.
Software Security and Maintenance
Vulnerability Detection
LLMs like WizardCoder and ContraBERT have been fine-tuned to enhance the accuracy of source code vulnerability detection.
WizardCoder, for example, achieved state-of-the-art performance in Java function vulnerability detection, improving ROC AUC from 0.66 to 0.69.
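The summary does not show how such detectors are queried, so here is an assumed inference sketch framing function-level detection as binary sequence classification with a Hugging Face encoder; the checkpoint name is a placeholder, not a released WizardCoder or ContraBERT artifact.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

CKPT = "your-org/code-vuln-detector"  # placeholder fine-tuned checkpoint

tokenizer = AutoTokenizer.from_pretrained(CKPT)
model = AutoModelForSequenceClassification.from_pretrained(CKPT, num_labels=2)
model.eval()

def vulnerability_score(function_source: str) -> float:
    """Probability that a function is vulnerable (label 1); scores like
    this are what the ROC AUC figures above are computed over."""
    inputs = tokenizer(function_source, truncation=True, max_length=512,
                       return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    return torch.softmax(logits, dim=-1)[0, 1].item()
```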
Automated Program Repair
In automated program repair, frameworks like NAVRepair and MOREPAIR have shown significant improvements. NAVRepair, designed for C/C++ code vulnerabilities, achieved a 26% improvement in repair accuracy. MOREPAIR, using techniques like QLoRA and NEFTune, improved repair rates by 11% and 8% on the EvalRepair-C++ and EvalRepair-Java benchmarks, respectively.
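MOREPAIR's exact recipe is not reproduced here, but the two named techniques compose naturally in the Hugging Face stack: QLoRA freezes a 4-bit-quantised base model and trains only low-rank adapters, while NEFTune adds noise to token embeddings during fine-tuning. The sketch below shows one plausible configuration; the base model, hyperparameters, and dataset file are placeholders.

```python
import torch
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from trl import SFTConfig, SFTTrainer

BASE = "codellama/CodeLlama-7b-hf"  # placeholder base model

# QLoRA: load the frozen base model in 4-bit NF4 precision.
bnb = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_quant_type="nf4",
                         bnb_4bit_compute_dtype=torch.bfloat16)
model = AutoModelForCausalLM.from_pretrained(BASE, quantization_config=bnb)

# The low-rank adapters are the only trainable weights.
lora = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05,
                  task_type="CAUSAL_LM")

args = SFTConfig(output_dir="repair-qlora",
                 neftune_noise_alpha=5,  # NEFTune: noise on embeddings
                 per_device_train_batch_size=1,
                 num_train_epochs=1)

# repairs.jsonl is a placeholder: one {"text": "<buggy code + fix>"} per line.
dataset = load_dataset("json", data_files="repairs.jsonl")["train"]

trainer = SFTTrainer(model=model, args=args, train_dataset=dataset,
                     peft_config=lora)
trainer.train()
```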
Penetration Testing
LLM-based agents have also proven effective in penetration testing. PENTESTGPT, an LLM-driven automatic penetration testing tool, demonstrated a 228.6% higher task completion rate than GPT-3.5. Similarly, frameworks like GPTLENS achieved a 76.9% success rate in detecting smart contract vulnerabilities, highlighting the potential of LLM-based agents in cybersecurity.
Benchmarks and Evaluation Metrics
Common Benchmarks
LLMs and LLM-based agents are evaluated using various benchmark datasets. Common benchmarks include HumanEval, MBPP, and Defects4J for tasks like code generation and vulnerability detection. For more specialized tasks, datasets like FEVER and HotpotQA are used to test knowledge-intensive reasoning and question-answering capabilities.
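HumanEval and MBPP results are conventionally reported as pass@k, and the unbiased estimator from the Codex paper is compact enough to show in full (a standard formula, not specific to the surveyed works):

```python
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k: of n sampled solutions per problem, c pass the
    tests; returns the chance that at least one of k draws passes."""
    if n - c < k:
        return 1.0  # every size-k draw must contain a passing sample
    return 1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1))

# 20 samples, 3 correct: pass@1 = 0.15, pass@10 ≈ 0.89
print(pass_at_k(20, 3, 1), pass_at_k(20, 3, 10))
```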
Evaluation Metrics
Evaluation metrics commonly used include success rate, ROC AUC, F1 score, and accuracy. For LLM-based agents, additional metrics like task completion time, computational resource consumption, and collaborative effectiveness are considered. These metrics provide a comprehensive evaluation of the models' performance and practical application effectiveness.
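As a concrete anchor for the classification-style metrics, this is how accuracy, F1, and ROC AUC are computed with scikit-learn over a detector's outputs (the arrays are toy data, e.g. scores from the vulnerability-detection sketch above):

```python
from sklearn.metrics import accuracy_score, f1_score, roc_auc_score

y_true = [1, 0, 1, 1, 0, 0, 1, 0]                  # ground-truth labels
scores = [0.91, 0.20, 0.65, 0.40, 0.35, 0.10, 0.80, 0.55]
y_pred = [int(s >= 0.5) for s in scores]           # thresholded decisions

print("accuracy:", accuracy_score(y_true, y_pred))
print("F1:      ", f1_score(y_true, y_pred))
print("ROC AUC: ", roc_auc_score(y_true, scores))  # threshold-free
```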
Conclusion
The integration of LLMs and LLM-based agents in software engineering has opened new horizons for automation, efficiency, and innovation.
From code generation to software security, these models are transforming the way developers and engineers approach complex tasks.
As research continues to evolve, the potential applications and implications of LLMs and LLM-based agents are bound to expand, pushing the boundaries of what is possible in software engineering.