AI Voice Cloning Scams: A Rising Menace Confounding US Authorities

Artificial Intelligence (AI), once seen as a way to usher in a new age of technological advancement, is now being used by cybercriminals to propagate disinformation and perpetrate fraud. Fraudsters are turning to widely available AI voice cloning tools to steal from people by impersonating their family members, and such schemes have grown more prevalent and alarming in recent years. AI voice cloning has become so convincing that it is nearly indistinguishable from human speech. According to Wasim Khaled, chief executive of Blackbird.AI, “AI voice cloning allows threat actors like scammers to extract information and funds from victims more effectively.”

The scale of the problem is evident in a recent global survey of 7,000 people across nine countries, including the United States. One in four respondents said they had experienced an AI voice-cloning scam or knew someone who had, and the majority were not confident they could tell whether a voice was cloned.

Criminals are also using AI technology to perpetrate the “grandparent scam,” in which fraudsters pose as a grandchild in urgent need of money. American officials have already issued warnings about this scheme because it has duped many unsuspecting victims. Hany Farid, a professor at the UC Berkeley School of Information, warns that almost anyone with an online presence is vulnerable to an attack. “We are going to need new technology to know if the person you think you’re talking to is actually the person you’re talking to,” he said.

This trend has prompted tech experts to urge the development of new technologies, such as advanced authentication tools and voice recognition software, that could help determine whether a voice has been cloned and add a layer of protection for people who use online communication tools. Experts also advise individuals to be cautious and vigilant when receiving calls, particularly from people claiming to be family members in need of help, and to verify the authenticity of such calls or messages before sending any money.

Artificial intelligence’s ability to blur the boundaries between reality and fiction creates a significant risk not only for individuals but also for institutions and organizations that handle sensitive personal information. Cybercriminals can leverage AI to scam or defraud people who are unaware of the technology’s latest trends and advancements.