
As millions turn to AI for mental health guidance, a hidden flaw is quietly distorting the advice they receive. This is not the well-publicized problem of AI “hallucinations,” but something more insidious: semantic leakage. In this phenomenon, an irrelevant word from earlier in a conversation can taint the AI’s subsequent responses, a significant risk in the sensitive context of mental health.