An impersonator used artificial intelligence (AI) to pose as U.S. Secretary of State Marco Rubio, deceiving U.S. and foreign officials. The scheme involved cloning his voice and mimicking his writing style in an apparent attempt to access sensitive information. Although authorities have yet to identify the perpetrator, the incident has raised serious concerns within the State Department.
Advanced Impersonation Tactics: AI-Generated Voice and Tailored Messages
According to an internal State Department memo, dated July 3 and obtained by The Washington Post, the impersonator contacted at least five individuals outside the Department. Among the targets were three foreign ministers from unidentified countries, a U.S. governor, and a member of Congress. The impersonator used the encrypted messaging app Signal, which government officials have favored for its security features, particularly since the start of the Trump administration.
The most alarming aspect of the case is the use of AI tools to replicate both Rubio's voice and writing style. "The individual left voice messages on Signal for at least two targets and, on one occasion, sent a text message inviting them to communicate via Signal," the memo stated. The impersonator created a fraudulent account under the display name Marco.Rubio@state.gov, a Signal handle styled to look like an official government email address, which it is not.
Motivation: Accessing Confidential Information
While the exact information the impersonator sought remains unknown, authorities suspect the primary motivation was to gain access to sensitive accounts or data. The attempt was part of a meticulously planned campaign that began in mid-June, State Department sources told The Post. The memo also warns that other State Department officials were impersonated via fraudulent emails, indicating a broader, organized operation with potential national security implications.
Accessible Technology, Increasing Risks
Although the perpetrator's identity remains unknown, experts agree that the attack did not require exceptional technical skill. "You only need 15 to 20 seconds of someone's audio, which is easy in Marco Rubio's case. You upload it to a service, click a button saying 'I have permission to use this person's voice,' and then type what you want them to say," explained Hany Farid, a digital forensics expert at the University of California, Berkeley. "Leaving voice messages is particularly effective because it's non-interactive," he added.
The use of AI to create fake voices is becoming a widespread issue, raising alarms among security agencies globally. The ease with which a public figure's identity can be replicated through such tools presents a significant challenge to authenticity in government communications.
Institutional Response and Legal Gaps
The State Department has announced a thorough investigation. "The Department takes its responsibility to protect its information very seriously and continues to enhance its cybersecurity measures to prevent future incidents," a senior official told The Independent. However, the contents of the intercepted messages and the names of the involved officials have not been disclosed.
The diplomatic memo also advises State Department employees and external officials to report any impersonation attempts to the FBI and the Diplomatic Security Service. In the U.S., impersonating a government official for deceptive purposes is a federal crime, carrying a penalty of up to three years in prison.
Congressional Reactions: A Call for Legislative Action
Cuban-American Congresswoman María Elvira Salazar responded strongly to the incident, renewing her call for regulation of such practices. "That's why we need the NO FAKES Act now. It would establish the first federal protection for your voice and image. Let's do it and defend all Americans from exploitation," she posted on social media platform X.
The NO FAKES Act (Nurture Originals, Foster Art, and Keep Entertainment Safe Act), currently under discussion, aims to create a legal framework for protecting individuals' images and voices from unauthorized use by technologies like AI.
Structural Cybersecurity Flaws in U.S. State Communications?
The use of Signal for official discussions has faced heavy criticism. Despite its end-to-end encryption, several recent incidents have exposed how easily it can be misused. One notable episode was "Signalgate" in March, when a journalist was mistakenly added to a Signal group in which senior officials, including Rubio, discussed military plans for Yemen.
Such incidents are not new: the Department of Defense had already banned the use of Signal, WhatsApp, and iMessage for non-public official matters in 2023.
A Disturbing Pattern
This is not the only recent case of impersonation shaking the Trump administration. In May, the phone of Susie Wiles, White House Chief of Staff, was compromised. The impersonator accessed her contact list and posed as her to communicate with senators, governors, and executives. The White House and FBI launched an investigation, although President Trump downplayed the matter: "No one can impersonate Susie. There's only one Susie."
The FBI reports that such campaigns, leveraging AI-generated voices, are part of a growing pattern aimed at stealing information or funds. In May, the agency warned: "If you receive a message claiming to be from a senior U.S. official, do not assume it is authentic."
Similar cases have been reported in other countries. In June, Ukraine's Security Service reported Russian agents posing as authorities to recruit civilians for sabotage missions. Canada also reported similar fraudulent call campaigns using synthetic voices of high-ranking officials to steal sensitive information or embed malware in networks.
As of this article's publication, Secretary of State Marco Rubio has not commented on the leak reported by The Post.
Understanding AI Impersonation Threats
How did the impersonator clone Marco Rubio's voice?
The impersonator used AI voice-cloning tools that require only a short audio sample, roughly 15 to 20 seconds, to replicate someone's voice. Because Rubio is a public figure, such samples are readily available, and the technology itself is widely accessible.
What are the potential consequences of AI impersonation?
AI impersonation can lead to unauthorized access to sensitive information, posing significant national security risks. It also highlights vulnerabilities in communication systems and the need for stronger cybersecurity measures.
What steps is the State Department taking in response?
The State Department is conducting a thorough investigation and implementing enhanced cybersecurity protocols to prevent future incidents of impersonation and data breaches.
How does the NO FAKES Act aim to address AI impersonation?
The NO FAKES Act seeks to establish legal protections for individuals' voices and images, preventing unauthorized use by AI technologies and safeguarding personal identity from exploitation.