Shilton Discusses AI-Powered Phone Scams Targeting Seniors
Phone scammers are now using artificial intelligence to steal money from victims with realistic-sounding imitations of loved ones' voices.
Scammers take voice recordings uploaded to social media and use AI to create a clone of the voice that can read any script, then use it to trick others out of cash. Seniors are often targeted, and when AI voice cloning is coupled with phone "spoofing," which falsifies the caller ID so a call appears to come from a familiar and trusted phone number, these calls become very convincing frauds.
Katie Shilton, associate professor of information science at the University of Maryland, said raising awareness is currently the best defense.
"People should know that this is a growing kind of crime and should be a little bit suspicious of frantic, threatening phone calls," Shilton advised. "One of the best countermeasures right now is to try to call the person back on their number."
Phone spoofing can also be used to mimic the phone number of a government agency or reputable organization. The Federal Trade Commission reports scammers may also use an intermediary posing as an authority figure, such as a lawyer or police officer.
Scammers will often ask victims to pay or send money in ways that make it difficult to recover, such as wiring money, buying gift cards and sending the numbers and PINs, or transferring cryptocurrency. If you encounter a scam, you can report it to the FTC at ReportFraud.ftc.gov.
Shilton said the AI enabling scammers was originally developed for beneficial purposes.
"AI-powered phone scams are powered by a form of AI development that was meant for prosocial purposes," Shilton explained. "Originally, the voice mimicking was for art or for film; a lot of this work has been about accessibility to create voice assistants. Some of this work was to create voice assistants for business purposes."
The National Science Foundation has established the Institute for Trustworthy AI in Law and Society. The institute is a partnership among the University of Maryland, George Washington University and Morgan State University, and seeks to develop mechanisms to ensure AI trustworthiness through both technological and public policy responses.
Shilton pointed out that one promising area of innovation for researchers is watermarking AI output.
"Watermarking is a really promising area of research for generative AI in general, including voice mimicking technologies," Shilton emphasized. "The idea that we should have some sort of way for people to tell when something has been generated by AI as opposed to naturalistic recordings of people or something like that. "
Shilton noted that one approach the institute is taking to improve public trust in AI is to include stakeholder communities in the design process.
"We have participatory design projects with the teachers union in Baltimore, to talk about tools for the classroom," Shilton said. "These are frequently designed outside of classrooms. Could we design them with teachers and parents and teenagers? Or we have accessibility design projects with blind communities to do object recognition."
This story was published by Public News Service.