
Recent discoveries have unveiled a significant vulnerability in widely used AI models like ChatGPT-4o, allowing them to be tricked into generating harmful exploit code. This technique, revealed by Marco Figueroa, exploits a linguistic loophole involving hex conversion, which causes the AI to process malicious content without recognizing its potential danger. Because ChatGPT-4o is optimized for natural language instructions, it fails










