Prompt Packer: Deceiving LLMs through Compositional Instruction with Hidden Attacks
Original Paper: https://arxiv.org/abs/2310.10077
By: Shuyu Jiang, Xingshu Chen, Rui Tang

Abstract: Recently, large language models (LLMs) with powerful general capabilities have been increasingly integrated into various Web applications, while undergoing alignment training to ensure that their generated content aligns with user intent and ethics. Unfortunately,
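Since the abstract is cut off before the attack itself is described, the sketch below is only an assumption based on the paper's title: it illustrates one plausible shape of a "compositional instruction," in which a hidden instruction is packed among benign-looking sub-tasks so the composite prompt appears harmless. The helper name `pack_prompt` and the task list are hypothetical and not taken from the paper.

```python
# Illustrative sketch only: an assumption based on the title, not the
# authors' actual Prompt Packer method. It shows the general shape of a
# compositional prompt that interleaves a (placeholder) hidden instruction
# with benign sub-tasks.

BENIGN_TASKS = [
    "Summarize the following topic in one paragraph.",
    "Translate your summary into French.",
]

def pack_prompt(hidden_instruction: str) -> str:
    """Compose benign sub-tasks around a hidden instruction (hypothetical helper)."""
    steps = BENIGN_TASKS[:1] + [hidden_instruction] + BENIGN_TASKS[1:]
    numbered = "\n".join(f"{i + 1}. {s}" for i, s in enumerate(steps))
    return f"Please complete all of the following sub-tasks in order:\n{numbered}"

if __name__ == "__main__":
    # A harmless placeholder stands in for the attack payload.
    print(pack_prompt("<hidden instruction would go here>"))
```

The point of the composition is that each sub-task looks innocuous in isolation, so a safety filter judging the prompt as a whole may not flag the embedded instruction.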