ASCII art elicits harmful responses from 5 major AI chatbots
ASCII art elicits harmful responses from 5 major AI chatbots

arstechnica.com
ASCII art elicits harmful responses from 5 major AI chatbots

ArtPrompt is what’s known as a jailbreak, a class of AI attack that elicits harmful behaviors from aligned LLMs, such as saying something illegal or unethical. Prompt injection attacks trick an LLM into doing things that aren't necessarily harmful or unethical but override the LLM's original instructions nonetheless.