Graph of Thoughts: Solving Elaborate Problems with Large Language Models



Original Paper: https://arxiv.org/abs/2308.09687v2

By: Maciej Besta, Nils Blach, Ales Kubicek, Robert Gerstenberger, Lukas Gianinazzi, Joanna Gajda, Tomasz Lehmann, Michal Podstawski, Hubert Niewiadomski, Piotr Nyczyk, Torsten Hoefler

Abstract:

We introduce Graph of Thoughts (GoT): a framework that advances prompting capabilities in large language models (LLMs) beyond those offered by paradigms such as Chain-of-Thought or Tree of Thoughts (ToT).

The key idea and primary advantage of GoT is the ability to model the information generated by an LLM as an arbitrary graph, where units of information ("LLM thoughts") are vertices, and edges correspond to dependencies between these vertices.

This approach enables combining arbitrary LLM thoughts into synergistic outcomes, distilling the essence of whole networks of thoughts, or enhancing thoughts using feedback loops.

We illustrate that GoT offers advantages over state of the art on different tasks, for example increasing the quality of sorting by 62% over ToT, while simultaneously reducing costs by >31%.

We ensure that GoT is extensible with new thought transformations and thus can be used to spearhead new prompting schemes.

This work brings LLM reasoning closer to human thinking or brain mechanisms such as recurrence, both of which form complex networks.

Summary Notes


Simplifying Complex Problem-Solving with Graph of Thoughts (GoT) in Large Language Models

The field of artificial intelligence (AI) is witnessing rapid advancements, especially with Large Language Models (LLMs) like GPT and PaLM.

These models have shown great potential in handling various tasks through innovative prompt engineering. A new development, the Graph of Thoughts (GoT) framework, is setting a new standard for complex problem-solving with AI.

This approach models reasoning as a graph, connecting different thoughts in a dynamic, human-like manner, and gives AI engineers a more powerful tool for structuring LLM workflows.

Understanding the Basics

In traditional setups, LLMs generate responses based on user prompts. The GoT framework takes this a step further by introducing a structured way to represent these interactions. Here, thoughts (anything from a paragraph to code) are represented as vertices in a graph, and the connections between them as edges. This structure allows for a more nuanced and adaptable approach to problem-solving.

Key Features of GoT

The GoT framework models the reasoning process as a tuple of four components, (G, T, E, R):

  • Graph of LLM reasoning (G): The overall structure that holds the thoughts (vertices) and their dependencies (edges).
  • Thought transformations (T): Operations applied to thoughts, such as generating new thoughts, aggregating several thoughts into one, or refining a thought in place.
  • Evaluator (E): A function that scores individual thoughts.
  • Ranking (R): A function that selects the most relevant thoughts for further processing.
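To make the (G, T, E, R) components concrete, here is a minimal sketch of the underlying data structure. The class and method names are illustrative, not the framework's actual API: `aggregate` stands in for a thought transformation, and `score_all`/`rank` stand in for evaluation and ranking.

```python
class Thought:
    def __init__(self, content, parents=()):
        self.content = content        # the LLM-generated unit of information
        self.parents = list(parents)  # edges: dependencies on earlier thoughts
        self.score = None


class GraphOfThoughts:
    def __init__(self):
        self.thoughts = []  # vertices of G

    def add(self, content, parents=()):
        t = Thought(content, parents)
        self.thoughts.append(t)
        return t

    # T: an aggregation transformation combining several thoughts into one
    def aggregate(self, thoughts, combine):
        merged = combine([t.content for t in thoughts])
        return self.add(merged, parents=thoughts)

    # E: score every thought with an evaluator function
    def score_all(self, evaluate):
        for t in self.thoughts:
            t.score = evaluate(t.content)

    # R: return the k highest-scoring thoughts
    def rank(self, k=1):
        return sorted(self.thoughts, key=lambda t: t.score, reverse=True)[:k]
```

With a real LLM backend, `combine` and `evaluate` would themselves be model calls; here any plain Python callables work, which keeps the graph logic independent of the model.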

Architectural Overview

GoT is designed to be both flexible and scalable, consisting of several key modules:

  • Prompter: Manages the initial input to the system.
  • Parser: Breaks down and interprets the input.
  • Scoring: Assesses the relevance and usefulness of thoughts.
  • Controller: Guides the overall process, managing the graph's dynamics.

Its API is also designed to be extensible, allowing easy integration with various LLM models and customization to fit specific needs, from prototyping to full-scale system integration.
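The division of labor among these modules can be sketched as follows. This is a hypothetical illustration of how a Prompter, Parser, Scorer, and Controller could cooperate, not the framework's actual interfaces; the `llm` argument is any callable that maps a prompt string to model output.

```python
class Prompter:
    def build(self, task, thoughts):
        # Assemble the next prompt from the task and prior thoughts
        context = "\n".join(thoughts)
        return f"Task: {task}\nPrior thoughts:\n{context}\nNext step:"


class Parser:
    def parse(self, llm_output):
        # Extract a clean thought from raw model output
        return llm_output.strip()


class Scorer:
    def score(self, thought):
        # Placeholder heuristic; a real evaluator might call the LLM itself
        return len(thought)


class Controller:
    def __init__(self, llm, prompter, parser, scorer):
        self.llm = llm
        self.prompter = prompter
        self.parser = parser
        self.scorer = scorer
        self.thoughts = []

    def step(self, task):
        # One reasoning step: prompt -> generate -> parse -> score -> record
        prompt = self.prompter.build(task, self.thoughts)
        thought = self.parser.parse(self.llm(prompt))
        self.thoughts.append(thought)
        return thought, self.scorer.score(thought)
```

Because each module is a separate object, any one of them can be swapped out, which is the sense in which the design supports easy integration with different LLM backends and custom scoring schemes.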

Practical Applications

GoT excels in a variety of tasks, thanks to its versatility:

  • Sorting Tasks: It can efficiently manage sorting by breaking it down into simpler subtasks.
  • Set Operations: GoT improves the handling of sets by simplifying complex operations into easier steps.
  • Complex Tasks: For multi-stage tasks like keyword counting or document merging, GoT demonstrates its capability to manage intricate reasoning processes.
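The sorting case follows a split/solve/merge pattern: the list is divided into chunks (a branching transformation), each chunk is solved, and the solved chunks are aggregated pairwise. The sketch below replaces the framework's LLM calls with deterministic Python so the decomposition itself is visible; all names are illustrative.

```python
def split(xs, parts=2):
    # Branch: divide the problem into smaller subtasks (one thought each)
    k = max(1, len(xs) // parts)
    return [xs[i:i + k] for i in range(0, len(xs), k)]


def merge(a, b):
    # Aggregate: combine two solved thoughts into one sorted result
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if a[i] <= b[j]:
            out.append(a[i]); i += 1
        else:
            out.append(b[j]); j += 1
    return out + a[i:] + b[j:]


def got_sort(xs):
    if len(xs) <= 1:
        return xs
    # "Solve" each subtask; in GoT this would be one LLM call per chunk
    chunks = [sorted(c) for c in split(xs)]
    result = chunks[0]
    for c in chunks[1:]:
        result = merge(result, c)  # aggregate thoughts pairwise
    return result
```

The benefit of the decomposition is that each LLM call handles a short list, where models make far fewer mistakes, and errors in one chunk can be caught and refined before aggregation.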

Performance and Impact

GoT has outperformed existing prompting schemes on multiple tasks while offering a more flexible and adaptable approach to reasoning. The paper also introduces 'thought volume' as a metric: the number of earlier thoughts that can influence a given thought. It captures how much prior reasoning a scheme can bring to bear on a single answer, and GoT achieves high volume at low cost compared to tree-based schemes.
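The volume of a thought t can be computed directly on the reasoning graph: count the thoughts from which a directed path reaches t. A small sketch, using a plain adjacency dict rather than the framework's internal representation:

```python
def reaches(graph, src, dst):
    # Iterative DFS: is there a directed path from src to dst?
    stack, seen = [src], {src}
    while stack:
        node = stack.pop()
        for nxt in graph.get(node, []):
            if nxt == dst:
                return True
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return False


def volume(graph, t):
    # Number of preceding thoughts with a directed path into t
    return sum(1 for node in graph if node != t and reaches(graph, node, t))
```

In a chain, a thought's volume equals its depth; in a tree, it is bounded by the path from the root; a graph that aggregates branches lets many thoughts contribute to one, which is why GoT reaches higher volume for the same number of LLM calls.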

Wrapping Up

The Graph of Thoughts framework represents a significant leap forward in utilizing Large Language Models for complex problem-solving.

By mimicking human-like reasoning with a graph-based approach, it allows for dynamic operations such as merging, backtracking, and branching.

Its adaptability and scalability make it an invaluable tool for AI engineers, pushing the limits of automated reasoning systems.

In summary, GoT is not just an advancement in AI; it's a groundbreaking method that reimagines the capabilities of Large Language Models.

It offers a promising look into the future of sophisticated reasoning and problem-solving in AI for engineers and researchers.
