In August, Patrick Hillman, director of communications for blockchain ecosystem Binance, knew something was wrong when he scoured his full inbox and found six messages from clients about recent video calls with investors in which he had supposedly participated. “Thank you for the investment opportunity,” one said. “I have some concerns about your investment advice,” another wrote. Others complained that the video quality wasn’t great, and one even asked outright, “Can you confirm that the Zoom call we had on Thursday was really you?”
With a sinking feeling in his stomach, Hillman realized that someone had faked his image and voice well enough to hold 20-minute “investment” Zoom calls trying to convince his company’s clients to hand over their Bitcoin for fraudulent investments. “Clients I was able to connect with shared links to fake LinkedIn and Telegram profiles pretending to be me, inviting them to various meetings to talk about different listing opportunities. Then the criminals used a convincing hologram of me in Zoom calls to try to scam several representatives of legitimate cryptocurrency projects,” he says.
As the world’s largest crypto exchange, with $25 billion in volume at the time of this writing, Binance deals with its share of fake investment scams that attempt to capitalize on its brand and steal crypto from people. “It was a first for us,” Hillman says. “I see it as a harbinger of what we believe will be the future of AI-generated deepfakes used in business scams, but it’s already here.”
The scam is so new that if it weren’t for shrewd investors spotting quirks and latency in the videos, Hillman might never have heard of these fake video calls, despite the company’s large investments in talent and security technologies.
Deepfake as a service
As AI-generated deepfakes become easier to produce, they are already being used to socially engineer employees and circumvent security controls. The misuse of deepfakes to commit fraud, extortion, scams, and child exploitation poses enough of a risk to businesses and the public that the Department of Homeland Security (DHS) recently released a 40-page report on deepfakes. It details how deepfakes are created from composites of images and voices pulled from online sources, and it identifies opportunities to mitigate deepfakes at the intent, research, creation, and dissemination stages of an attack.
“We already see deepfakes as a service on the dark web, just as we see ransomware as a service used in extortion techniques, because deepfakes are incredibly effective at social engineering,” says Derek Manky, chief security strategist and vice president of global threat intelligence at Fortinet’s FortiGuard Labs. “For example, leveraging deepfakes is popular in BEC [business email compromise] scams to effectively convince someone to send funds to a fake address, especially if they believe the instruction came from a CFO.”
Executive whaling, BEC scams, and other forms of phishing and pharming represent the first phase of these types of attacks on businesses. For example, in 2019, scammers used a deepfake of a company CEO’s voice, framed as an urgent request, to convince a division head to wire $243,000 to a “Hungarian supplier.” But many experts see deepfakes becoming part of future malware packages, including ransomware and biometric subversion.
Retooling needed to spot deepfakes
In addition to convincing corporate executives to send money, deepfakes also present unique challenges for the voice authentication banks frequently use today, as well as for other biometrics, says Lou Steinberg, former CTO of Ameritrade. After Ameritrade, Steinberg went on to found the cyber-research lab CTM Insights to tackle issues such as the data integrity weaknesses that allow deepfakes to circumvent security controls. He realized that biometrics are just another form of data criminals can manipulate after seeing a demonstration by Israeli researchers.
“We’ve seen Israeli researchers replace images in a CT scan to hide or add cancer in the scanned images, and we realized this could be used in ransomware situations where the bad guys say, ‘We’ll only show you the real results of your real CT scan if you pay us X dollars,’” says Steinberg. As such, he says, there needs to be more focus on data integrity. “Deepfakes are AI-generated, and traditional signature technology can’t keep up because all it takes is a small edit to the image to change the signature.”
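Steinberg’s point about brittle signatures is easy to demonstrate. The sketch below is a toy illustration (not any vendor’s product; it assumes the Pillow imaging library): a cryptographic hash changes completely after a one-pixel edit, while a coarse perceptual “average hash” barely notices the same edit, showing how matching can be either too brittle or too forgiving.

# Toy comparison: cryptographic hash vs. a simple perceptual hash.
# Assumes the Pillow (PIL) imaging library.
import hashlib
from PIL import Image

def average_hash(img: Image.Image, size: int = 8) -> int:
    """Shrink to size x size grayscale; set one bit per pixel:
    1 if the pixel is brighter than the mean, else 0."""
    small = img.convert("L").resize((size, size))
    pixels = list(small.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

# Build a simple gradient image, then tamper with a single pixel.
original = Image.new("L", (256, 256))
original.putdata([(x + y) % 256 for y in range(256) for x in range(256)])
tampered = original.copy()
tampered.putpixel((10, 10), 255)

# Cryptographic hash: the one-pixel edit scrambles the whole digest.
print(hashlib.sha256(original.tobytes()).hexdigest()[:16])
print(hashlib.sha256(tampered.tobytes()).hexdigest()[:16])

# Perceptual hash: identical (or nearly so) after the tiny edit.
print(hex(average_hash(original)), hex(average_hash(tampered)))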
Knowing that traditional security controls will not protect consumers and businesses against deepfakes, Adobe launched the Content Authenticity Initiative (CAI) to address the problem of image and audio content integrity down to the developer level. CAI members have written open standards for generating manifests at the point of image capture (e.g., by the digital camera taking the picture) so that viewers and security tools can verify a picture’s authenticity. The initiative has more than 700 supporting companies, many of them media providers including USA Today, Gannett, and Getty Images, as well as image providers and imaging product companies such as Nikon.
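The manifest idea can be pictured with a small sketch. What follows is conceptual only, not the actual CAI/C2PA format: a hypothetical capture device signs a digest of the image plus its capture metadata, and any downstream viewer can check both the digest and the signature. It assumes the Python cryptography package for Ed25519 signatures.

# Conceptual sketch of a point-of-capture content manifest.
# Not the CAI/C2PA format; assumes the `cryptography` package.
import hashlib, json
from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.exceptions import InvalidSignature

# In practice the private key would live in the camera's secure hardware.
device_key = ed25519.Ed25519PrivateKey.generate()
device_pub = device_key.public_key()

def make_manifest(image_bytes: bytes, metadata: dict) -> dict:
    payload = {"sha256": hashlib.sha256(image_bytes).hexdigest(),
               "metadata": metadata}
    blob = json.dumps(payload, sort_keys=True).encode()
    return {"payload": payload, "signature": device_key.sign(blob).hex()}

def verify_manifest(image_bytes: bytes, manifest: dict) -> bool:
    payload = manifest["payload"]
    if payload["sha256"] != hashlib.sha256(image_bytes).hexdigest():
        return False  # image no longer matches the signed digest
    blob = json.dumps(payload, sort_keys=True).encode()
    try:
        device_pub.verify(bytes.fromhex(manifest["signature"]), blob)
        return True
    except InvalidSignature:
        return False  # manifest itself was forged or altered

image = b"...raw sensor data..."
manifest = make_manifest(image, {"device": "example-camera", "ts": "2022-08-01"})
print(verify_manifest(image, manifest))              # True
print(verify_manifest(image + b"tamper", manifest))  # False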
“The issue of deepfakes is big enough that Adobe’s CEO is pushing for authentication of the content behind image and audio files. It’s an example of how protecting against deepfakes will require a whole new set of countermeasures and context, including deep learning, AI, and other techniques to decipher whether something is real or not,” says Brian Reed, a former Gartner analyst who is now an advisor at Lionfish Technology Advisors. He also points to the Deep Fakes Passport Act, introduced as HR 5532, which seeks to fund deepfake competitions to help advance controls against them.
Steinberg suggests taking inspiration from the financial industry, where fraud detection is starting to focus more on what a person is asking a system to do rather than just trying to prove who the person on the other end of the transaction request is. “We’re over-focused on authentication and under-focused on authorization, which comes down to intent,” he explains. “If you are not authorized to transfer millions to an unknown entity in a third-world country, that transaction should be automatically rejected and reported, with or without biometric authentication.”
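A minimal sketch of the authorization-over-authentication model Steinberg describes might look like the following; the payee list, limit, and field names are hypothetical.

# Minimal sketch: authorization checks intent, not just identity.
# All names and thresholds here are hypothetical.
from dataclasses import dataclass

@dataclass
class TransferRequest:
    amount_usd: float
    payee: str
    authenticated: bool  # outcome of voice/biometric/password checks

KNOWN_PAYEES = {"acme-payroll", "longtime-supplier"}
PER_TRANSFER_LIMIT_USD = 50_000

def authorize(req: TransferRequest) -> str:
    # Authentication alone is never sufficient.
    if not req.authenticated:
        return "reject: authentication failed"
    # Authorization evaluates the intent of the transaction itself.
    if req.payee not in KNOWN_PAYEES:
        return "reject and report: unknown payee"
    if req.amount_usd > PER_TRANSFER_LIMIT_USD:
        return "reject and report: amount exceeds policy limit"
    return "approve"

# A deepfaked "CFO" who passes voice authentication still gets blocked.
print(authorize(TransferRequest(243_000, "hungarian-supplier", True)))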
Faking biometric authentication
Proving the “who” in a transaction is also problematic if attackers turn deepfakes against biometric checks, he continues. Biometric images and hashes, he says, are also data that can be manipulated with AI-based deepfake technology, which can match the characteristics by which biometric scanners authenticate users, such as points on a face or iris, or loops on a fingerprint. Using AI to identify AI-generated images is a start, but most matching technologies aren’t granular enough, or they’re so granular that scanning a single image is expensive.
Brand protection firm Allure Security scales CTM’s AI-powered micro-matching technology to identify changes against its database of tens of thousands of original brand images, scanning 100 million pages daily, says Josh Shaul, CEO of Allure. “To identify deepfakes designed to circumvent analysis and detection, we use AI against AI,” he explains. “We can develop the same technology to detect fake images, profile photos, online videos, and Web3 sites. For example, we recently looked at an impersonation in a metaverse land-buying opportunity.”
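The region-level idea behind “micro-matching” can be pictured with a toy example (purely illustrative, not CTM’s or Allure’s implementation; it assumes the Pillow library): hashing many small tiles of an image instead of the whole file both detects and locates a localized edit, while untouched regions still match.

# Toy region-level matching: hash tiles, not the whole image.
# Illustrative only; assumes the Pillow (PIL) imaging library.
import hashlib
from PIL import Image

def tile_hashes(img: Image.Image, tiles: int = 8) -> list[str]:
    w, h = img.size
    tw, th = w // tiles, h // tiles
    hashes = []
    for ty in range(tiles):
        for tx in range(tiles):
            box = (tx * tw, ty * th, (tx + 1) * tw, (ty + 1) * th)
            hashes.append(hashlib.sha256(img.crop(box).tobytes()).hexdigest())
    return hashes

original = Image.new("L", (256, 256), color=128)
tampered = original.copy()
tampered.putpixel((200, 40), 255)  # a localized edit

orig_h, tamp_h = tile_hashes(original), tile_hashes(tampered)
changed = [i for i, (a, b) in enumerate(zip(orig_h, tamp_h)) if a != b]
print(f"{len(changed)} of {len(orig_h)} tiles changed: {changed}")
# -> 1 of 64 tiles changed, pinpointing where the edit occurred.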
Hillman also urges companies to update their training and awareness programs, both internally for employees and executives and externally for customers. “Whether deepfakes are going to be a problem is no longer a matter of if but when, and I don’t think companies yet have a playbook for defending against deepfake attacks,” he predicts. “Use your outreach channels to educate. Perform external audits on leaders to see who has content that makes them vulnerable. Audit your controls. And prepare for crisis management.”
Copyright © 2022 IDG Communications, Inc.