London (CNN) —
“Millions” of people could fall victim to scams that use artificial intelligence to clone their voices, a UK bank has warned.
Starling Bank, an online-only lender, said fraudsters are capable of using AI to replicate a person’s voice from just three seconds of audio found in, for example, a video the person has posted online. Scammers can then identify the person’s friends and family members and use the AI-cloned voice to stage a phone call asking for money.
These scams have the potential to “catch millions out,” Starling Bank said in a press release Wednesday.
They have already affected hundreds. According to a survey of more than 3,000 adults that the bank conducted with Mortar Research last month, more than a quarter of respondents said they had been targeted by an AI voice-cloning scam in the past 12 months.
The survey also showed that 46% of respondents were not aware that such scams existed, and that 8% would send over as much money as requested by a friend or family member, even if they thought the call seemed strange.
“People regularly post content online which has recordings of their voice, without ever imagining it’s making them more vulnerable to fraudsters,” Lisa Grahame, chief information security officer at Starling Bank, said in the press release.
The bank is encouraging people to agree a “safe phrase” with their loved ones: a simple, random phrase that is easy to remember and different from their other passwords, which can be used to verify their identity over the phone.
The lender advises against sharing the safe phrase over text, which could make it easier for scammers to find out; if it is shared this way, the message should be deleted once the other person has seen it.
As AI becomes increasingly adept at mimicking human voices, concerns are mounting about its potential to harm people by, for example, helping criminals access their bank accounts and spreading misinformation.
Earlier this year, OpenAI, the maker of the generative AI chatbot ChatGPT, unveiled its voice replication tool, Voice Engine, but did not make it available to the public at that stage, citing the “potential for synthetic voice misuse.”