RAFT: Adapting Language Model to Domain Specific RAG
Original Paper: https://arxiv.org/abs/2403.10131

By: Tianjun Zhang, Shishir G. Patil, Naman Jain, Sheng Shen, Matei Zaharia, Ion Stoica, Joseph E. Gonzalez

Abstract: Pretraining Large Language Models (LLMs) on large corpora of textual data is now a standard paradigm. When using these LLMs for many downstream applications,