
Microsoft's security research team recently disclosed the Skeleton Key AI jailbreak, an attack that exposes vulnerabilities in several high-profile generative AI models, including Meta's Llama3-70b-instruct, Google's Gemini Pro, and OpenAI's GPT-3.5 Turbo and GPT-4. Despite their sophistication, these models remain susceptible to this technique, which bypasses their built-in safeguards. As AI technology becomes more deeply embedded in everyday products and services, such vulnerabilities raise pressing security concerns.










