New AI Jailbreak Method 'Bad Likert Judge' Boosts Attack Success Rates by Over 60%
The Hacker News
Cybersecurity researchers have shed light on a new jailbreak technique that could be used to get past a large language model's (LLM) safety guardrails and produce potentially harmful or malicious …