AI deepfakes raise global alarm, from politics to cybersecurity threats
Written July 28, 2025 – 16:45 EDT
The rise of realistic deepfakes — synthetic audio and video generated using artificial intelligence — is challenging the foundations of trust in government, business, and everyday digital communication. With tools more accessible than ever, bad actors are using deepfakes to impersonate public officials, deceive voters, and penetrate corporate networks, prompting experts to call for urgent countermeasures.
From the halls of Washington to the boardrooms of global finance, AI-generated deception is becoming a real-world threat. Combating it may require a multi-layered approach involving regulation, public awareness, and AI-powered detection tools.
Deepfakes impersonate officials, targeting national security
This summer, an alarming incident exposed just how realistic and dangerous deepfakes can be. Someone used AI to impersonate Secretary of State Marco Rubio, contacting foreign officials through voicemails, texts, and the encrypted messaging app Signal.
In a separate case, White House chief of staff Susie Wiles was also mimicked by AI. Earlier in the year, another deepfake video depicted Rubio threatening to cut off Ukraine’s access to Elon Musk’s Starlink satellite service — a claim later refuted by Ukraine’s government.
Cybersecurity experts warn that such impersonations are not just technical pranks. They pose real threats by creating confusion and potentially leaking sensitive diplomatic or military information.
“You’re either trying to extract sensitive secrets or competitive information or you’re going after access — to an email server or other sensitive network,” said Kinny Chan, CEO of cybersecurity firm QiD.
These attacks are part of a growing pattern where synthetic media is used by foreign adversaries — including Russia, China, and North Korea — to undermine trust in democratic institutions and disrupt international cooperation.
AI-generated disinformation enters U.S. elections
AI deepfakes are also beginning to influence domestic politics. In one notable case last year, Democratic voters in New Hampshire received robocalls mimicking President Joe Biden’s voice, urging them not to vote in the state’s primary. The audio was generated using AI voice cloning.
Political consultant Steven Kramer later admitted to creating and distributing the calls to highlight the dangers of deepfake technology. Although he was acquitted of criminal charges, the incident served as a stark warning of how easily voters can be misled by synthetic media.
“I did what I did for $500,” Kramer said in court. “Can you imagine what would happen if the Chinese government decided to do this?”
The case underlines a critical point: deepfakes don’t just pose a technological challenge — they represent a broader threat to civic trust and democratic systems.
Financial industry under attack from deepfake scams
While governments face deepfake impersonations at the diplomatic level, businesses — particularly in the financial sector — are being targeted for fraud and cyber intrusion.
“The financial industry is right in the crosshairs,” said Jennifer Ewbank, a former CIA deputy director focused on digital threats. “Even individuals who know each other have been convinced to transfer vast sums of money.”
In one common scheme, criminals use deepfakes to impersonate company executives. Employees may receive fake video calls or emails from what appears to be their CEO, requesting sensitive financial information or login credentials. Some schemes have successfully convinced employees to transfer large sums of money or grant backdoor access to corporate networks.
The threat doesn’t stop there. Deepfakes are now being used to apply for — and even hold — remote jobs under fake identities. In these cases, attackers may gain access to internal systems and later install ransomware or steal proprietary data.
North Korea reportedly behind deepfake job schemes
U.S. authorities have raised concerns about North Korea’s growing use of deepfakes in cyber operations. According to intelligence reports, thousands of North Korean IT workers have been dispatched abroad using stolen identities to apply for jobs at foreign tech companies.
These operatives reportedly use deepfakes to pass job interviews and conceal their true identities, gaining access to sensitive data and critical networks. In many cases, they also generate income for the North Korean regime — and in some instances, they install ransomware to be activated later.
The schemes have reportedly generated billions of dollars for Pyongyang, fueling its weapons development and defying international sanctions.
Cybersecurity company Adaptive Security estimates that by 2027, 1 in 4 job applications may involve some form of synthetic identity or deepfake manipulation.
“We’ve entered an era where anyone with a laptop and access to an open-source model can convincingly impersonate a real person,” said Adaptive CEO Brian Long. “It’s no longer about hacking systems — it’s about hacking trust.”
Fighting deepfakes with smarter technology and policy
Recognizing the growing danger, public and private sectors are developing countermeasures. These include:
- AI-powered detection tools that analyze speech and video patterns to spot deepfakes
- Regulatory proposals requiring tech platforms to detect and label synthetic content
- Public education campaigns focused on media literacy and online deception
One such detection system, developed by Pindrop Security, analyzes millions of data points from a person’s voice during real-time conversations to detect irregularities that suggest voice cloning. These tools are already being used in hiring processes and financial transactions.
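The article does not describe Pindrop’s actual method, but the general idea behind such detectors — flagging audio whose statistical “texture” is implausible for a live human voice — can be illustrated with a toy sketch. The example below is purely hypothetical: it uses a single spectral-flatness measure and a made-up threshold to separate an idealized, overly clean synthetic tone from one carrying natural-sounding noise; production systems analyze far richer feature sets.

```python
import numpy as np

def spectral_flatness(signal: np.ndarray) -> float:
    """Ratio of geometric to arithmetic mean of the power spectrum.
    Values near 1.0 mean noise-like audio; values near 0.0 mean
    purely tonal, 'too clean' audio."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    spectrum = spectrum[spectrum > 0]  # drop exact zeros before log
    geometric_mean = np.exp(np.mean(np.log(spectrum)))
    arithmetic_mean = np.mean(spectrum)
    return float(geometric_mean / arithmetic_mean)

def looks_synthetic(signal: np.ndarray, threshold: float = 1e-3) -> bool:
    """Hypothetical heuristic: flag audio whose spectrum is
    implausibly tonal for live speech. The threshold is illustrative."""
    return spectral_flatness(signal) < threshold

# One second of 440 Hz audio sampled at 16 kHz.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 16000, endpoint=False)
pure_tone = np.sin(2 * np.pi * 440 * t)                      # idealized "clean" signal
noisy_tone = pure_tone + 0.1 * rng.standard_normal(t.size)   # natural-ish signal

print(looks_synthetic(pure_tone))   # True
print(looks_synthetic(noisy_tone))  # False
```

The design point is that detection is statistical: no single feature proves an audio clip is fake, so real systems combine many weak signals like this one.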
“You can take the defeatist view and say we’re going to be subservient to disinformation,” said Vijay Balasubramaniyan, CEO of Pindrop. “But that’s not going to happen.”
Experts compare this fight to earlier battles against email spam — once thought unmanageable, now largely mitigated through filters and authentication protocols.
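Part of what tamed email spam was cryptographic authentication such as DKIM, which lets a receiver verify that a message really came from the claimed sender. A comparable idea for media provenance can be sketched in a few lines. This is an illustrative toy, not any real standard: it uses a shared secret and HMAC from the Python standard library, whereas real provenance schemes (such as C2PA) use public-key signatures.

```python
import hashlib
import hmac

# Hypothetical shared key for illustration only; real provenance
# systems use public-key signatures, not shared secrets.
SIGNING_KEY = b"newsroom-signing-key"

def sign_media(media_bytes: bytes) -> str:
    """Produce a tag binding the publisher's key to this exact content."""
    return hmac.new(SIGNING_KEY, media_bytes, hashlib.sha256).hexdigest()

def verify_media(media_bytes: bytes, tag: str) -> bool:
    """Recompute the tag; any alteration of the bytes fails verification."""
    return hmac.compare_digest(sign_media(media_bytes), tag)

original = b"official statement video bytes"
tag = sign_media(original)

print(verify_media(original, tag))                  # True
print(verify_media(b"deepfaked replacement", tag))  # False
```

As with spam, authentication does not detect fakes directly; it makes authentic content verifiable, so unverifiable content earns less trust by default.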
Future of trust in the age of synthetic media
The growing sophistication and accessibility of AI tools make deepfakes a lasting concern for governments, corporations, and the public alike. While technological tools offer hope, they must be paired with updated laws and global cooperation to address the cross-border nature of these threats.
The digital age has introduced a new currency — trust — and deepfakes are eroding its value. How societies respond in the next few years may determine whether fact or fiction governs the global narrative.
Source: AP News – Creating realistic deepfakes is getting easier than ever. Fighting back may take even more AI