research-papers

Quantifying Language Models' Sensitivity to Spurious Features in Prompt Design or: How I learned to start worrying about prompt formatting

Original Paper: https://arxiv.org/abs/2310.11324
By: Melanie Sclar, Yejin Choi, Yulia Tsvetkov, Alane Suhr
Abstract: As large language models (LLMs) are adopted as a fundamental component of language technologies, it is crucial to accurately characterize their performance. Because choices in prompt design can strongly influence model behavior, …

By Athina AI
Last One Standing: A Comparative Analysis of Security and Privacy of Soft Prompt Tuning, LoRA, and In-Context Learning

research-papers

Original Paper: https://arxiv.org/abs/2310.11397
By: Rui Wen, Tianhao Wang, Michael Backes, Yang Zhang, Ahmed Salem
Abstract: Large Language Models (LLMs) are powerful tools for natural language processing, enabling novel applications and user experiences. However, to achieve optimal performance, LLMs often require adaptation with private data, which …

By Athina AI