Security Researchers Use Poetry To Jailbreak AI Models

When prompts were presented in poetic rather than prose form, average attack success rates rose from 8% to 43%, roughly a fivefold increase.

Source: Dark Reading