Washington (AP) – The phone rings. It's the secretary of state calling. Or is it?
For Washington insiders, seeing and hearing are no longer believing, thanks to a spate of recent incidents involving deepfakes impersonating top officials in President Donald Trump's administration.
As crime gangs and hackers embrace digital fakes, corporate America is taking notice too. Adversaries including North Korea are using synthetic video and audio to impersonate CEOs and low-level job seekers alike, seeking access to critical systems and business secrets.
Thanks to advances in artificial intelligence, creating realistic deepfakes is easier than ever, posing security problems for governments, businesses and individuals and making trust the most valuable currency of the digital age.
Meeting the challenge will require laws, better digital literacy and technical solutions that fight AI with AI.
“As humans, we are remarkably susceptible to deception,” said Vijay Balasubramaniyan, CEO and founder of the tech firm Pindrop Security. But he believes a solution to the deepfake challenge may be within reach: “We are going to fight back.”
AI deepfakes pose national security threats
This summer, someone used AI to create a deepfake of Secretary of State Marco Rubio in an attempt to reach out to foreign ministers, U.S. senators and governors over text, voicemail and the Signal messaging app.
In May, someone impersonated Trump's chief of staff, Susie Wiles.
Another fake Rubio turned up in a deepfake earlier this year, claiming he wanted to cut off Ukraine's access to Elon Musk's Starlink internet service. Ukraine's government later rebutted the false claim.
The national security implications are huge: People who believe they are chatting with Rubio or Wiles, for instance, might discuss sensitive information about diplomatic negotiations or military strategy.
“You're either trying to extract sensitive or competitive information, or you're going after access,” said Kinny Chan, CEO of the cybersecurity firm QiD, of the possible motivations.
Synthetic media can also aim to change behavior. Last year, Democratic voters in New Hampshire received a robocall urging them not to vote in the state's upcoming primary. The voice on the call sounded suspiciously like then-President Joe Biden but was actually created using AI.
That ability to deceive makes AI deepfakes a potent weapon for foreign actors. Both Russia and China have used disinformation and propaganda directed at Americans as a way of undermining trust in democratic alliances and institutions.
Steven Kramer, the political consultant who admitted to sending the fake Biden robocalls, said he wanted to send a message to the American political system about the dangers deepfakes pose. Kramer was acquitted last month of charges of voter suppression and impersonating a candidate.
“I did what I did for $500,” Kramer said. “Can you imagine what would happen if the Chinese government decided to do this?”
Scammers are targeting the financial industry with deepfakes
The increased availability and sophistication of the programs mean deepfakes are increasingly being used for corporate espionage and garden-variety scams.
“The financial industry is right in the crosshairs,” said Jennifer Ewbank, a former deputy director of the CIA who worked on cybersecurity and digital threats. “Even individuals who know each other have been convinced to transfer vast sums of money.”
In the context of corporate espionage, scammers can impersonate CEOs and ask employees to hand over passwords or routing numbers.
Deepfakes also allow fraudsters to apply for jobs, and even perform them, under an assumed or fake identity. For some, this is a way to access sensitive networks, steal secrets or install ransomware. Others just want the work and may be juggling several similar jobs at different companies at the same time.
U.S. authorities have said thousands of North Koreans with information technology skills have been dispatched to live abroad, using stolen identities to obtain jobs at tech companies in the U.S. and elsewhere. The workers get access to company networks as well as a paycheck. In some cases, they install ransomware that can later be used to extort even more money.
The schemes have generated billions of dollars for the North Korean government.
Research by cybersecurity firm Adaptive Security predicts that within three years, one in four job applications will be fake.
“We’ve entered an era where anyone with a laptop and access to an open-source model can convincingly impersonate a real person,” said Brian Long, CEO of Adaptive. “It’s no longer about hacking systems. It’s about hacking trust.”
Experts deploy AI to fight back against AI
Researchers, public policy experts and technology companies are currently investigating the best ways to address the economic, political and social challenges posed by deepfakes.
New regulations could require tech companies to do more to identify, label and potentially remove deepfakes on their platforms. Lawmakers could also impose greater penalties on those caught using digital technology to deceive others.
Bigger investments in digital literacy could also boost people's immunity to online deception by teaching them how to spot fake media and avoid falling prey to scammers.
The best tool for catching AI may be another AI program, one trained to sniff out the tiny flaws in deepfakes that would go unnoticed by a person.
Systems like Pindrop's analyze millions of data points in any person's speech to quickly identify irregularities. The system can be used during job interviews and other video conferences to detect, for example, when a person is using voice-cloning software.
Similar programs may one day be commonplace, running in the background as people chat online with colleagues and loved ones. Someday, deepfakes may go the way of email spam, a technological challenge that once threatened to upend the usefulness of email, said Balasubramaniyan, Pindrop's CEO.
“You can take the defeatist view and say we're going to be subservient to disinformation,” he said. “But that's not going to happen.”