Key Takeaways
- Eighty-eight percent of organizations suffered at least one trust-related security incident in the past year
- AI-driven phishing and deepfake-enabled fraud are outpacing legacy defenses and employee training
- Finance teams face the highest pressure as attackers increasingly target monetary workflows
A rising sense of unease is spreading across enterprise security teams, and the latest research from Osterman Research helps explain why. Over the past year, nearly nine in ten organizations experienced a security incident that damaged trust in their digital communications. That’s not a small blip on the radar—it’s a systemic failure brought on by AI-enhanced phishing, business email compromise, and impersonation attacks that look nothing like what defenders were fighting just a few years ago.
Many security leaders believed they had already “solved” phishing. For a long stretch, employees were trained to look for misspellings, suspicious formatting, and awkward language, while security tools scanned for anomalies and blocked known-bad senders. With generative AI, those signals have essentially evaporated: anyone can now create flawless, deeply personalized messages in any language and push them through a mix of email, voice, and video channels all at once.
That combination has reset the playing field. According to the report, 82% of security leaders are seeing increased threat actor interest in exploiting trusted communications, yet 60% don’t feel confident defending against deepfake-driven schemes. It’s an uncomfortable mismatch between attack velocity and defensive readiness.
So what’s changed so dramatically? One major shift is the maturation of multi-channel impersonation. Phishing messages aren’t just landing in inboxes—they’re being reinforced by AI-generated audio calls and, increasingly, video. Oddly enough, respondents believe these attacks are still in their early stages. Only about a quarter think AI-generated phishing, deepfake audio, or deepfake video are anywhere near maturity. If that’s true, the current wave may be only the beginning.
Meanwhile, the people under the most pressure—finance teams—are also the least trusted by security leaders to spot these threats. At first glance, that seems surprising. Finance departments are typically process-driven, cautious, and well-trained in fraud prevention. But attackers go where the money flows, and 59% of organizations say finance is now their highest-priority target. Hyper-personalized BEC scams and vendor impersonation attacks are hitting these teams daily. More than one-third of organizations saw successful vendor impersonation incidents in the past year alone, with notable year-over-year growth.
Is employee training simply no longer enough? The research suggests a meaningful share of teams think so. Nearly one in five security leaders reports that traditional awareness training is failing to prepare staff for AI-enhanced threats, especially when deepfake audio or video is involved. Detection effectiveness ratings tell the story: 38% for fake audio, 39% for fake video, and 43% for AI-crafted phishing. Those numbers underscore something defenders have been whispering for years: the human eye can’t reliably keep pace with machine-generated deception.
On the technology side, the situation is equally strained. Legacy email security systems weren't built to detect the subtle signals that AI models can manipulate or eliminate altogether. These older platforms tend to rely on static rules, known-bad signatures, or simplistic anomaly detection. That worked when attackers made mistakes. Now the content is polished, contextually accurate, and sometimes even mirrors internal communication styles. As Michael Sampson of Osterman Research put it, these older tools are simply too blunt for the job.
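To see why, consider a deliberately naive sketch of legacy-style filtering. Everything below (the domains, phrases, and sample messages) is invented for illustration and doesn’t reflect any specific product’s logic; the point is that rules keyed to the mistakes attackers used to make can score a polished, AI-personalized message as completely clean.

```python
# A deliberately naive sketch of legacy-style email filtering: static rules
# keyed to the mistakes attackers used to make. All values are illustrative.

KNOWN_BAD_SENDERS = {"payroll-update.xyz", "secure-login-alerts.com"}
SUSPICIOUS_PHRASES = {"kindly revert", "verify you account", "urgent wire transfer"}

def legacy_score(sender_domain: str, body: str) -> int:
    """Count static-rule hits; higher means more suspicious."""
    hits = 0
    if sender_domain in KNOWN_BAD_SENDERS:
        hits += 2  # known-bad signature match
    hits += sum(phrase in body.lower() for phrase in SUSPICIOUS_PHRASES)
    hits += body.count("!") >= 3  # crude "awkward language" proxy
    return hits

# A clumsy, old-style phish trips the rules immediately...
old_phish = "URGENT!!! Kindly revert and verify you account immediately!"
print(legacy_score("payroll-update.xyz", old_phish))  # 5 -> flagged

# ...while a polished, AI-personalized BEC message scores zero and is delivered.
ai_bec = ("Hi Dana, following up on the Q3 vendor reconciliation we discussed "
          "Tuesday. Finance needs the updated remittance details by EOD.")
print(legacy_score("trusted-partner.com", ai_bec))  # 0 -> delivered
```

The machine-written message contains no misspellings, no known-bad sender, and no crude urgency tells, so every static signal this style of filter depends on simply isn’t there.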
Organizations seem to recognize this. A strong majority—70%—say detecting deepfake audio impersonation attacks is now “extremely important,” marking the steepest rise in priority across the board. And the willingness to make changes is striking. Many enterprises are prepared to add specialized point solutions, switch vendors, or even rip out and replace large portions of their security stack.
One example of the kind of vendor referenced in the study is IRONSCALES, whose AI-assisted approach aims to improve detection and remediation of modern phishing and impersonation attacks. Solutions in this category often blend automated analysis with human reporting loops, improving responsiveness without relying solely on employee vigilance.
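As a rough illustration of that blended pattern, the sketch below weighs a machine-learning score against employee “report phishing” clicks when deciding how to remediate a message. The function, fields, and thresholds are hypothetical, invented for this example rather than drawn from IRONSCALES or any other vendor’s actual API.

```python
# Hypothetical triage step combining an ML score with employee phishing
# reports. Names and thresholds are illustrative, not any vendor's API.

from dataclasses import dataclass

@dataclass
class Message:
    id: str
    model_score: float     # 0.0 (benign) .. 1.0 (malicious), from an ML model
    user_reports: int = 0  # employees who clicked "report phishing"

def triage(msg: Message, auto_quarantine_at: float = 0.9,
           reports_to_escalate: int = 2) -> str:
    """Combine machine and human signals into one remediation decision."""
    if msg.model_score >= auto_quarantine_at:
        return "quarantine"      # the model alone is confident enough
    if msg.user_reports >= reports_to_escalate:
        return "quarantine"      # crowd reports override a modest score
    if msg.model_score >= 0.5 or msg.user_reports == 1:
        return "analyst_review"  # ambiguous: route to a human
    return "deliver"

# One report nudges a borderline message to review; a second quarantines it.
msg = Message(id="msg-123", model_score=0.42, user_reports=1)
print(triage(msg))  # analyst_review
msg.user_reports += 1
print(triage(msg))  # quarantine
```

The design point is the feedback loop: a single vigilant employee can escalate a message the model was unsure about, so detection no longer depends on every recipient spotting the fake.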
But even with modern tools, the broader challenge persists: trust itself is under attack. When employees can no longer rely on familiar cues to judge authenticity, daily operations are disrupted. Productivity drops. Customer communications slow down. And according to 55% of security leaders, the likelihood of an actual data breach rises significantly when trust is compromised.
It’s worth noting that the research sample focused on mid-sized U.S. organizations—those with 1,000 to 5,000 employees. These companies are often large enough to face enterprise-grade threats but not always large enough to maintain specialized detection teams or advanced fraud prevention workflows. In other words, they're squarely in the blast radius of AI-driven social engineering.
Still, the report doesn’t suggest that defenders are powerless. Instead, it highlights a transitional moment. As threat actors experiment with autonomous attack generation, multi-channel deception, and identity spoofing at scale, organizations are being forced to rethink trusted communication from the ground up. It’s not simply a technical problem; it’s a cultural and procedural one.
Where does that leave enterprises? Somewhere between alarmed and determined. The threat curve may have reset, but so has defenders’ willingness to adapt. The next 12 to 18 months will likely reveal whether organizations can rebuild trust in digital communication before attackers’ AI tools reach full maturity.