From LLMs to LLM-based Agents for Software Engineering: A Survey of Current, Challenges and Future



Original Paper: https://arxiv.org/abs/2408.02479

By: Haolin Jin, Linghan Huang, Haipeng Cai, Jun Yan, Bo Li, Huaming Chen

Abstract:

With the rise of large language models (LLMs), researchers are increasingly exploring their applications in various vertical domains, such as software engineering.

LLMs have achieved remarkable success in areas including code generation and vulnerability detection.

However, they also exhibit numerous limitations and shortcomings. LLM-based agents, a novel technology with the potential for Artificial General Intelligence (AGI), combine LLMs as the core for decision-making and action-taking, addressing some of the inherent limitations of LLMs such as lack of autonomy and self-improvement.

Despite numerous studies and surveys exploring the use of LLMs in software engineering, the literature lacks a clear distinction between LLMs and LLM-based agents, and a unified standard and benchmark for qualifying an LLM solution as an LLM-based agent in this domain is still at an early stage.

In this survey, we broadly investigate the current practice and solutions for LLMs and LLM-based agents for software engineering.

In particular, we summarise six key topics: requirement engineering, code generation, autonomous decision-making, software design, test generation, and software maintenance.

We review and differentiate the work of LLMs and LLM-based agents from these six topics, examining their differences and similarities in tasks, benchmarks, and evaluation metrics.

Finally, we discuss the models and benchmarks used, providing a comprehensive analysis of their applications and effectiveness in software engineering.

We anticipate this work will shed some light on pushing the boundaries of LLM-based agents in software engineering for future research.

Summary Notes

Figure: Number of papers on LLMs and LLM-based agents, 2020–2024

Introduction


In recent years, the rise of large language models (LLMs) like GPT-4 and Codex has significantly impacted various domains, including software engineering.

These models have demonstrated impressive capabilities in handling tasks such as code generation, debugging, and documentation. However, they also have inherent limitations, such as a lack of autonomy and self-improvement.

Enter LLM-based agents—a novel approach that combines LLMs with decision-making and action-taking frameworks to overcome these limitations. This blog post explores the current practice, challenges, and future potential of LLMs and LLM-based agents in software engineering.

Key Methodologies


Large Language Models (LLMs)

LLMs leverage vast amounts of training data to generate human-like text, offering unprecedented levels of fluency and coherence. Common architectures include:

  • Encoder-Decoder: Used for tasks like machine translation.
  • Encoder-Only: Known for tasks like sentiment analysis and contextual understanding (e.g., BERT).
  • Decoder-Only: Popular for text generation and sequence prediction (e.g., GPT-4).


LLM-based Agents

LLM-based agents go beyond the capabilities of traditional LLMs by integrating decision-making and problem-solving functions.

These agents use techniques like few-shot learning and multi-turn dialogue for model fine-tuning. They also employ methods such as Retrieval-Augmented Generation (RAG) and tool utilization to perform more complex and contextually aware tasks.
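To make the Retrieval-Augmented Generation idea concrete, here is a minimal sketch of the retrieve-then-prompt pattern. Everything in it is illustrative: `fake_llm` stands in for a real model call, and the retriever is a toy word-overlap ranker rather than the embedding-based search a production agent would use.

```python
def fake_llm(prompt: str) -> str:
    # Placeholder for a real LLM call; just reports how much context it saw.
    return f"[answer grounded in {prompt.count('Context:')} context block(s)]"

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    # Toy lexical retriever: rank documents by word overlap with the query.
    q = set(query.lower().split())
    scored = sorted(corpus, key=lambda d: -len(q & set(d.lower().split())))
    return scored[:k]

def rag_answer(query: str, corpus: list[str]) -> str:
    # Augment the prompt with the top-k retrieved snippets before generating.
    context = retrieve(query, corpus)
    prompt = "".join(f"Context: {c}\n" for c in context) + f"Question: {query}"
    return fake_llm(prompt)

corpus = [
    "MetaGPT assigns software roles to multiple agents.",
    "Reflexion stores verbal feedback between trials.",
    "BERT is an encoder-only architecture.",
]
answer = rag_answer("Which framework assigns software roles?", corpus)
```

The key design point is that the model never answers from parameters alone; retrieved text is spliced into the prompt so the generation is grounded in external, updatable knowledge.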


Main Findings and Results


Requirement Engineering and Documentation

LLMs have shown significant potential in automating tasks like requirement elicitation, classification, and generation. For example, ChatGPT has been used to generate Software Requirement Specifications (SRS) and evaluate user stories.

However, the application of LLM-based agents in this field is still nascent. These agents can enhance the process by continuously refining and optimizing generated requirements through ongoing user feedback and interaction.


Code Generation and Software Development

LLMs have optimized various tasks in code generation and software development, such as code synthesis and debugging. Tools like GitHub Copilot, which integrates OpenAI's Codex, provide real-time code completion and suggestions.

LLM-based agents take this a step further by employing multi-agent systems to handle complex tasks. For instance, frameworks like MetaGPT simulate real-world software development processes, improving both efficiency and code quality.
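A role-based pipeline in the spirit of MetaGPT can be sketched as a chain of agents, each transforming a shared artifact. This is not the actual MetaGPT implementation; the `analyst`, `coder`, and `reviewer` functions are stubs where a real system would place LLM calls with role-specific prompts.

```python
def analyst(task: str) -> dict:
    # Role 1: turn the task into a specification (stub for an LLM call).
    return {"task": task, "spec": f"Spec for: {task}"}

def coder(artifact: dict) -> dict:
    # Role 2: produce code from the specification.
    artifact["code"] = f"# implements {artifact['spec']}"
    return artifact

def reviewer(artifact: dict) -> dict:
    # Role 3: approve or reject the produced code.
    artifact["approved"] = artifact["code"].startswith("#")
    return artifact

def run_pipeline(task: str) -> dict:
    # Pass the artifact through each role in turn, as in an SOP-style workflow.
    artifact = analyst(task)
    for role in (coder, reviewer):
        artifact = role(artifact)
    return artifact

result = run_pipeline("parse CSV files")
```

Separating roles this way mirrors a real development team: each agent sees only the artifact handed to it, which keeps prompts focused and makes failures attributable to a specific stage.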


Autonomous Learning and Decision Making

LLMs have been used to enhance decision-making capabilities through techniques like voting inference and self-debugging.

However, LLM-based agents excel in this area by leveraging multi-agent collaboration and dynamic problem-solving. For example, frameworks like Reflexion and ExpeL use self-reflection and language feedback to continuously improve performance across tasks.
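The self-reflection loop behind frameworks like Reflexion can be summarized as: attempt, evaluate, verbalize the failure, and retry with that feedback in memory. The sketch below is a schematic of that loop, not the published implementation; `attempt` and `reflect` are stand-ins for LLM calls and the evaluator is a stub.

```python
def attempt(task: str, memory: list[str]) -> str:
    # Stub: a real agent would prompt an LLM with the task plus past reflections.
    return "good" if memory else "bad"

def evaluate(output: str) -> bool:
    # Stub evaluator; in practice this could be unit tests or a reward signal.
    return output == "good"

def reflect(output: str) -> str:
    # Verbalize the failure so the next attempt can condition on it.
    return f"Output '{output}' failed; adjust the approach."

def reflexion_loop(task: str, max_trials: int = 3) -> tuple[str, int]:
    memory: list[str] = []
    for trial in range(1, max_trials + 1):
        output = attempt(task, memory)
        if evaluate(output):
            return output, trial
        memory.append(reflect(output))
    return output, max_trials

output, trials = reflexion_loop("fix the failing test")
```

The essential idea is that improvement happens across trials without weight updates: the "learning" lives in the accumulated textual memory that is replayed into each new attempt.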


Software Design and Evaluation

LLMs are often used to assist in design tasks like log summarization and code review. However, LLM-based agents bring more autonomy and flexibility to the table.

For example, the ChatDev framework uses role distribution to simulate a virtual chat-driven software development company, significantly increasing efficiency and reducing code vulnerabilities.


Software Test Generation

LLMs have shown promise in generating high-quality test cases, improving code coverage and bug detection. However, LLM-based agents enhance these capabilities through multi-agent collaboration.

For instance, the AgentCoder framework employs multiple specialized agents to iteratively optimize code generation and testing, achieving higher accuracy and robustness.


Software Security and Maintenance

LLMs have been extensively used for vulnerability detection, automatic repair, and penetration testing. However, LLM-based agents offer a more comprehensive approach by combining techniques from various fields.

For example, the TrustLLM framework uses multi-agent collaboration to improve the accuracy and interpretability of smart contract auditing.


Implications and Potential Applications

The integration of LLMs into agents has opened up new possibilities in software engineering. These agents can handle complex tasks like autonomous debugging, adaptive test generation, and dynamic problem-solving.

The potential applications are vast, ranging from improving software security and maintenance to enhancing the efficiency of software development processes.


Limitations and Areas for Future Research

Despite the advancements, there are still limitations to be addressed. For instance, the effectiveness of LLM-based agents in real-world applications needs more empirical validation.

Future research should focus on improving the autonomy and decision-making capabilities of these agents, as well as exploring their potential in emerging fields like autonomous driving and IoT security.


Conclusion

The evolution from LLMs to LLM-based agents marks a significant milestone in software engineering.

By combining the strengths of LLMs with decision-making and problem-solving frameworks, these agents offer a more robust and flexible approach to handling complex software engineering tasks.

As research and development continue, we can expect to see even more innovative applications and improvements in this exciting field.
