
Advancements in AI have led to groundbreaking developments, but not without new risks. Researchers at Anthropic have sounded the alarm about a vulnerability in large language models (LLMs) known as “many-shot jailbreaking.” The attack exploits the long context windows of modern models: a single prompt packed with dozens or hundreds of faux dialogues, each depicting an AI assistant complying with a harmful request, can condition the model to bypass its own safety training. The AI can then potentially reveal harmful information it was trained to withhold.
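To make the structure of the attack concrete, here is a minimal, illustrative sketch of how such a prompt is assembled. The function name and the placeholder dialogue pairs are assumptions for illustration only; the published attack relies on packing hundreds of real compliant exchanges into a long-context prompt, which this sketch deliberately does not reproduce.

```python
# Illustrative sketch of the *shape* of a many-shot prompt.
# All dialogue content here is a harmless placeholder; the attack's
# effectiveness comes from the sheer volume of compliant examples
# filling a long context window, not from this structure alone.

def build_many_shot_prompt(faux_dialogues, target_question):
    """Concatenate many faux user/assistant exchanges, then append the
    real target question, so the model sees a long pattern of compliance."""
    shots = []
    for question, answer in faux_dialogues:
        shots.append(f"User: {question}\nAssistant: {answer}")
    # The final turn is left open for the model to complete.
    shots.append(f"User: {target_question}\nAssistant:")
    return "\n\n".join(shots)

# Placeholder pairs standing in for what would be hundreds of shots.
dialogues = [
    ("<placeholder question 1>", "<placeholder compliant answer 1>"),
    ("<placeholder question 2>", "<placeholder compliant answer 2>"),
]
print(build_many_shot_prompt(dialogues, "<target question>"))
```

The key design point, per Anthropic's description of the technique, is that each additional “shot” nudges the model further toward imitating the compliant pattern, which is why longer context windows widen the attack surface.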