Deepfakes, Fraud, and Identity: What CISOs Are Saying Behind Closed Doors
At a recent roundtable of CISOs, one topic dominated the conversation: the growing risks posed by deepfakes, social engineering, and fraudulent hiring practices. What was once considered cutting-edge, almost academic, has now become an operational reality for many organizations. The discussion highlighted both the urgency of the threat and the creative steps some leaders are taking to stay ahead of it.
Deepfakes Enter the Enterprise
For years, deepfakes were regarded as a consumer or political problem, something that could sway elections, fuel misinformation campaigns, or create damaging personal videos. But several CISOs at the table made it clear: these technologies are now showing up inside corporate environments.
One leader described how their IT team had been deliberately tested with deepfakes to evaluate response readiness. The videos and audio clips were convincing enough to trick unsuspecting staff into granting access or resetting credentials. Another shared that attackers had impersonated executives on video calls, requesting urgent financial transfers or access to sensitive files.
These incidents are no longer isolated. With freely available tools able to clone a voice from less than a minute of audio, the barrier to entry is vanishing. What used to require a nation-state capability is now within reach of a determined criminal. CISOs agreed that current verification processes, often based on recognizing a familiar voice or trusting a video feed, are dangerously inadequate.
The most reliable defense, several argued, remains low-tech: introducing unexpected, unscripted challenges such as asking the person to move to a different room, stand up, or perform a quick gesture. These “liveness checks” may feel awkward, but they expose the limits of synthetic media in ways that passive verification cannot, at least for now.
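For teams that want to make such checks repeatable rather than ad hoc, the sketch below shows one way a meeting host or help desk script might randomize the challenge so an attacker cannot pre-render a response. The challenge list, timeout, and function names are illustrative assumptions, not practices described at the roundtable.

```python
import secrets
import time

# Illustrative pool of unscripted "liveness" challenges; a real list would be
# tuned to the organization's culture and the context of the call.
CHALLENGES = [
    "Please stand up and turn your camera to follow you.",
    "Wave your right hand slowly in front of the camera.",
    "Pick up a nearby object and hold it up to the lens.",
    "Move to a different room and re-join the call.",
]

def issue_liveness_challenge(timeout_seconds: int = 30) -> dict:
    """Pick an unpredictable challenge and record when it was issued."""
    return {
        "challenge": secrets.choice(CHALLENGES),
        "issued_at": time.time(),
        "timeout": timeout_seconds,
    }

def challenge_expired(record: dict) -> bool:
    """A response that arrives after the window closes is treated as suspect."""
    return time.time() - record["issued_at"] > record["timeout"]

if __name__ == "__main__":
    record = issue_liveness_challenge()
    print(f"Ask the caller: {record['challenge']}")
```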
Help Desks Under Pressure
Help desks and IT support centers emerged as one of the most vulnerable areas. Password resets, remote access requests, and emergency privilege escalations are exactly the kinds of tasks that attackers target, and where identity verification has historically been weakest.
Traditional methods such as answering knowledge-based questions (“What was your first car?”) or confirming a mother’s maiden name are increasingly ineffective. Not only can this data often be found online, but generative AI makes it easy for attackers to simulate the style and tone of an employee asking for help.
One participant put it bluntly: “What got us here won’t get us there.” The tools we relied on for the past decade (voice recognition, personal trivia, security tokens on their own) are no longer strong enough. Several organizations are experimenting with new approaches, from one-time passcodes tied to mobile devices to secret keywords or PINs that employees must share during resets.
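As a concrete illustration of the one-time-passcode idea, the sketch below implements standard time-based codes (TOTP, RFC 6238) using only Python's standard library: a help desk agent would compare the code read out by the caller against the one derived from a secret enrolled on the employee's phone. Secret storage and enrollment are simplified here, and the function names are placeholders rather than any specific product's API.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, timestep: int = 30, digits: int = 6, at: float | None = None) -> str:
    """Compute an RFC 6238 time-based one-time passcode (SHA-1, 30-second steps)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((at if at is not None else time.time()) // timestep)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation per the RFC
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

def verify_reset_request(employee_secret_b32: str, submitted_code: str, drift_steps: int = 1) -> bool:
    """Accept the code for the current window, allowing one step of clock drift either way."""
    now = time.time()
    for step in range(-drift_steps, drift_steps + 1):
        expected = totp(employee_secret_b32, at=now + step * 30)
        if hmac.compare_digest(expected, submitted_code):
            return True
    return False
```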
But even these solutions present challenges. Employees sometimes struggle to remember additional codes, and attackers are quick to exploit confusion or urgency. The consensus was that organizations need layered controls: combining automated technical checks with human intuition, supported by consistent training and clear escalation paths when something feels “off.”
Smishing, Vishing, and CEO Fraud
Despite the hype around deepfakes, CISOs agreed that more traditional social engineering, particularly smishing (SMS phishing) and vishing (voice phishing), continues to do serious damage.
I shared this example: within 60 days of our new CEO starting, attackers began impersonating him in text messages, urging staff to click on malicious links or approve transactions. The attackers were monitoring corporate announcements and LinkedIn updates in real time, tailoring their scams to the company’s leadership changes.
The group noted that these attacks are especially dangerous because they exploit trust in leadership and the urgency of executive requests. Employees, wanting to be responsive, often act before verifying. For many organizations, the only defense is relentless awareness campaigns, teaching employees to pause, verify, and never act on urgent requests without a second confirmation channel.
Education and Awareness
Awareness training has long been a staple of security programs, but CISOs stressed that the game has changed. Employees must not only recognize phishing emails but also understand the possibility of voice, video, and live impersonation attempts.
Several leaders shared how they have integrated live demonstrations into training programs. Showing employees how quickly a free tool can generate a fake voice or video of their CEO had a powerful effect. It transformed abstract warnings into concrete, memorable experiences.
Training is also evolving toward empowerment rather than blame. Instead of scolding employees for clicking a link, programs now emphasize curiosity and caution: it’s acceptable to stop a call, ask for additional verification, or escalate concerns to IT. As one participant noted, “We want employees to feel like part of the defense team, not like they’re constantly being tested to fail.”
Fraudulent Hiring and Insider Threats
The roundtable also spotlighted a lesser-discussed but growing issue: fraudulent hiring. Several organizations discovered that individuals hired for remote roles were not who they claimed to be. In some cases, fake identities were supported by falsified documents, deepfake interviews, or stolen credentials.
One leader described uncovering hundreds of fraudulent applicants during a single audit. Others have caught individuals outsourcing their roles to third parties, effectively turning corporate jobs into contract work without approval.
To combat this, CISOs are working closely with HR and legal teams to strengthen background verification. Tactics include requiring live video interviews with interactive components, validating government IDs through scanning tools, and even mandating occasional in-person onboarding for high-risk roles. Some organizations are also investing in insider threat teams tasked with monitoring unusual behavior patterns that may indicate fraudulent activity.
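What “unusual behavior patterns” means will vary by organization, but a minimal starting point, sketched below with assumed field names and thresholds, is flagging logins from countries a remote hire never declared, or two distant countries appearing within a short window.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Login:
    employee_id: str
    country: str          # country resolved from the source IP
    timestamp: datetime

# Hypothetical policy data: countries each remote hire declared they would work from.
DECLARED_COUNTRIES = {"emp-1042": {"US"}, "emp-2187": {"DE", "PL"}}

def flag_suspicious_logins(logins: list[Login], window: timedelta = timedelta(hours=2)) -> list[str]:
    """Return simple flags: undeclared countries, or two countries inside one short window."""
    flags: list[str] = []
    by_employee: dict[str, list[Login]] = {}
    for login in sorted(logins, key=lambda l: l.timestamp):
        by_employee.setdefault(login.employee_id, []).append(login)
        allowed = DECLARED_COUNTRIES.get(login.employee_id, set())
        if allowed and login.country not in allowed:
            flags.append(f"{login.employee_id}: login from undeclared country {login.country}")
    for emp, history in by_employee.items():
        for earlier, later in zip(history, history[1:]):
            if earlier.country != later.country and later.timestamp - earlier.timestamp < window:
                flags.append(f"{emp}: {earlier.country} and {later.country} within {window}")
    return flags
```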
The broader takeaway: identity challenges extend beyond customers and employees to the very process of hiring. As remote work continues, the attack surface of the workforce itself must be secured.
Biometrics, Privacy, and the Future of Identity
The conversation naturally turned toward biometrics and digital identity solutions. Many CISOs see promise in technologies like facial recognition, fingerprint scans, and government ID validation apps that can be embedded into workflows such as help desk resets or onboarding.
But adoption is not straightforward. Privacy regulations vary widely, and employees may resist biometric tracking over fears of surveillance or misuse. Several leaders suggested that the most effective future solutions will balance assurance with minimal data retention, verifying identity without permanently storing sensitive information.
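One pattern consistent with that “verify without retaining” goal, at least for document numbers rather than biometric templates, is to store only a keyed fingerprint of the value. The sketch below is a simplified assumption of how that could look; the key handling and environment variable name are placeholders, not a recommendation from the roundtable.

```python
import hashlib
import hmac
import os

# Placeholder: in practice the key would live in a secrets manager,
# never alongside the stored fingerprints.
VERIFICATION_KEY = os.environ.get("ID_VERIFICATION_KEY", "").encode()

def fingerprint_id_number(id_number: str) -> str:
    """Keep only a keyed HMAC of the document number, not the number itself."""
    return hmac.new(VERIFICATION_KEY, id_number.encode(), hashlib.sha256).hexdigest()

def matches_on_record(submitted_id_number: str, stored_fingerprint: str) -> bool:
    """Re-verify later (e.g., during onboarding checks) without retaining the raw value."""
    return hmac.compare_digest(fingerprint_id_number(submitted_id_number), stored_fingerprint)
```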
Some are exploring digital wallets that allow employees to prove their identity through a secure, third-party service. Others are looking at liveness detection technologies that can distinguish between a real human and a synthetic clone. While none of these approaches are perfect, CISOs agreed that identity is fast becoming the new perimeter, and investment in this area is essential.
The Road Ahead
The roundtable ended with a sobering but unifying conclusion: cybercriminals are innovating faster than most organizations can adapt. Deepfakes, fraudulent hiring, and advanced social engineering are no longer niche problems—they are mainstream threats that cut across industries.
To keep pace, organizations must rethink not just their tools, but their trust models. Identity assurance, employee education, and proactive detection of fraudulent activity must become as central to security as firewalls and endpoint protection once were.
As one CISO summarized, “We’re entering an era where seeing is no longer believing. The organizations that thrive will be the ones that learn to verify trust at every layer, whether it’s a help desk call, a hiring interview, or a text from the CEO.”