Skeleton-of-Thought: Prompting LLMs for Efficient Parallel Generation



Original Paper: https://arxiv.org/abs/2307.15337

By: Xuefei Ning, Zinan Lin, Zixuan Zhou, Zifu Wang, Huazhong Yang, Yu Wang

Abstract:

This work aims at decreasing the end-to-end generation latency of large language models (LLMs). One of the major causes of the high generation latency is the sequential decoding approach adopted by almost all state-of-the-art LLMs.

In this work, motivated by the thinking and writing process of humans, we propose Skeleton-of-Thought (SoT), which first guides LLMs to generate the skeleton of the answer, and then conducts parallel API calls or batched decoding to complete the contents of each skeleton point in parallel.

Not only does SoT provide considerable speed-ups across 12 LLMs, but it can also potentially improve the answer quality on several question categories.

SoT is an initial attempt at data-centric optimization for inference efficiency, and showcases the potential of eliciting high-quality answers by explicitly planning the answer structure in language.
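
The two-stage recipe is straightforward to sketch. Below is a minimal, hypothetical illustration of the pattern, not the paper's exact prompts or implementation: `complete` stands in for a single blocking LLM API call, and the per-point expansions are issued as parallel calls.

```python
from concurrent.futures import ThreadPoolExecutor

def complete(prompt: str) -> str:
    """Placeholder for one blocking LLM API call (e.g., a chat endpoint)."""
    raise NotImplementedError

def skeleton_of_thought(question: str, max_points: int = 5) -> str:
    # Stage 1: ask the model for a short skeleton of numbered, few-word points.
    skeleton = complete(
        f"Answer with a concise skeleton of at most {max_points} numbered "
        f"points, 3-5 words each.\nQuestion: {question}"
    )
    points = [line.strip() for line in skeleton.splitlines() if line.strip()]

    # Stage 2: expand every point in parallel, each with its own API call.
    def expand(point: str) -> str:
        return complete(
            f"Question: {question}\nSkeleton:\n{skeleton}\n"
            f"Write 1-2 sentences expanding only this point: {point}"
        )

    with ThreadPoolExecutor(max_workers=max(1, len(points))) as pool:
        expansions = pool.map(expand, points)

    # Reassemble the expansions in the skeleton's original order.
    return "\n".join(expansions)
```

Because the expansions are independent, end-to-end latency is roughly the skeleton call plus the single longest expansion call, rather than one long sequential decode of the full answer.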

Summary Notes


Enhancing Accessibility in HTML Conversions

Digital accessibility is increasingly important, and arXiv's effort to improve the HTML conversions of research papers is a key step toward making academic research accessible to everyone, including people with disabilities. This guide shows AI engineers and developers in enterprise companies how to report conversion errors and how to contribute to these accessibility efforts.

Reporting Errors: A How-To

Encountering errors in HTML conversions can hinder accessibility. Your reports are crucial for enhancing both accessibility and mobile support. Here’s how you can report these issues effectively:

  • Use the "Report Issue" Button: The most direct way to report errors; each report helps improve the HTML conversion process.
  • Keyboard Shortcut: For keyboard-navigation users, “Ctrl + ?” opens a feedback form for reporting errors.
  • "Report Issue for Selection" Button: Highlight a specific passage of text and use this button to report an error in just that section.
  • Toggle Accessible Reporting Links: The “Alt+Y” and “Alt+Shift+Y” shortcuts toggle accessible reporting links, so reporting works for everyone.

Recognizing Known Issues

The arXiv team is aware of certain issues but depends on your reports for new or unforeseen problems. Your insights:

  • Enable quick identification of new issues.
  • Inform the team about areas needing immediate attention.
  • Enhance the quality and accessibility of the HTML conversions.

The Importance of Feedback

Your feedback is not just about fixing errors; it supports the broader goal of making research accessible to everyone. Your contributions:

  • Improve content accessibility for a more inclusive research community.
  • Encourage community contributions toward accessible research.

Developer Contributions

For AI engineers and developers with a knack for technical work, contributing to LaTeX package conversion support is a valuable way to back these accessibility efforts. Here’s how you can contribute:

  • Engage with LaTeXML: Contributing to this project helps improve HTML conversions, and your LaTeX expertise can go a long way toward making research accessible to a broader audience (a minimal conversion sketch follows this list).
  • Participate in the Initiative: Your involvement supports a larger goal of removing barriers to research accessibility.
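
To make the contribution target concrete, here is a minimal, hypothetical sketch of a LaTeXML conversion of a single paper source. The filenames are placeholders, the flags may differ across LaTeXML versions, and arXiv's production pipeline is more involved; treat this as an illustration, not arXiv's actual workflow.

```python
import subprocess

# Hypothetical example: convert one LaTeX source to HTML with LaTeXML.
# Assumes the `latexml` and `latexmlpost` binaries are installed and on PATH.
# "paper.tex" / "paper.xml" / "paper.html" are placeholder filenames.
subprocess.run(["latexml", "--destination=paper.xml", "paper.tex"], check=True)
subprocess.run(["latexmlpost", "--destination=paper.html", "paper.xml"], check=True)
```

LaTeX packages that the converter does not yet handle are a common source of conversion errors, which is why package-level contributions are particularly valuable.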

Conclusion

arXiv’s initiative to enhance HTML conversions of research papers is a crucial step towards making scientific knowledge accessible to all. Whether through error reporting or contributing to LaTeX package development, your participation is vital.

We encourage AI Engineers and developers to join these efforts, helping ensure that the wealth of research available is accessible to everyone, thereby promoting an inclusive environment for scientific exploration and innovation.
