An LLM can Fool Itself: A Prompt-Based Adversarial Attack