News & Updates

Hyper Realistic Scams: How AI is Revolutionizing Phishing Attacks

Aaron Fleishman, Partner and Akshay Bhushan, Partner

Today’s sophisticated fraud attacks have technology and IT leaders nostalgic for the days of the simple ‘Nigerian prince’ phishing email. Generative AI is supercharging malicious digital actors. Aided by this technology, attackers can generate a huge volume of high-quality, highly personalized attacks that are far more likely to succeed. Companies of all sizes are under pressure from these attacks, and the average consumer is falling victim to these hyper-realistic scams more than ever before, with fraud losses up 30% in 2022 over 2021.

Cybersecurity teams need new technology and training to triage the flood of AI-powered security threats they face on a daily basis, and innovative startups are poised to take this challenge head-on.

AI-powered cyber attacks

Before the proliferation of AI tools, it was too costly to create highly targeted and sophisticated phishing attacks at scale. Generative AI completely changed that calculus for attackers, allowing highly personalized and powerful phishing for a much lower cost. This may lead to a tsunami of highly effective attacks targeting even small businesses and individuals.

Previously, text-based email and SMS attacks were the norm. Someone would receive an email that appeared to come from a high-ranking colleague at their organization, such as the CFO or CEO, requesting that they take a specific action, like paying a large invoice. If that email looked suspicious, they might call the CFO or CEO to confirm the request, because voice or video communication with that person was considered more trustworthy than written text.

With the advent of powerful voice models like Tortoise TTS and Resemble AI, attackers can convincingly impersonate the voices of these trusted people and craft far more devastating attacks. These hyper-realistic deepfake voices, combined with a text-based attack, can penetrate even the most sophisticated enterprise. It’s been reported that nearly half of organizations experienced a voice phishing attack in 2021. Beyond the enterprise, these AI-generated voice scams also target individuals, often impersonating loved ones in a crisis to extort money in exchange for their safety. As this technology advances, cybersecurity teams need technology and processes for detecting machine-generated content.

How startups are leading the cybersecurity future

As these attacks become more powerful, organizations need equally powerful approaches to prevention. Part of the solution lies in workforce training. Employees need to understand that what they were taught in the past about spotting phishing may no longer hold against these new technologies. Any suspicious request received via text or voice must be confirmed on trusted internal communication channels (think Slack or Microsoft Teams).

Given how quickly this technology has been developed and deployed, advanced deepfake and phishing detection technology hasn’t quite kept pace. New startups are racing to close that gap and help organizations get ahead. This space will only grow in importance as both the frequency of these attacks and the sophistication of the technology behind them increase. We see an opportunity for startups to emerge and build next-gen products and, eventually, platforms to help enterprises protect against these more advanced generative AI-driven cyberattacks.