
Artificial Intelligence in the Service of Deception: Deepfakes and Vishing

Isbel

At the intersection of technological innovation and criminal tactics, concerning phenomena such as the use of deepfakes and vishing (voice phishing) have emerged.

The New Era of Social Engineering?

A 2020 report by the AI and Robotics Centre of the United Nations Interregional Crime and Justice Research Institute identified trends in underground forums related to the abuse of artificial intelligence (AI) and machine learning that could gain significant momentum in the near future. Among these trends, the report highlighted human impersonation on social media platforms. Today that future is a reality: what was anticipated is not only occurring but has exceeded expectations.

Both methods are modern versions of impostor scams. These are tactics in which the scammer impersonates someone, typically a person close to the victim, in order to deceive them into providing money.

In 2022, scams of this type were the second most reported fraud category in the United States, with losses amounting to $2.6 billion.

We could consider these scams to be modern applications of social engineering. According to KnowBe4, the world's leading security awareness training and phishing platform, social engineering is defined as "the art of manipulating, influencing, or deceiving the user to take control of their system."

What role does AI play in this scheme? In recent years, cybercriminals have refined their use of AI to build more credible and agile attacks, increasing their chances of turning a profit in less time. AI also enables them to reach new targets and develop more innovative criminal approaches while minimizing the probability of detection.

Creating False Realities: Deepfakes and Vishing

The term "deepfake" encapsulates the fusion of two concepts: "deep learning" and "fake." It refers to a sophisticated AI-driven technique that enables the creation of multimedia content that appears authentic but is fabricated.

Using deep learning algorithms, deepfakes can superimpose faces onto videos, alter a person's speech in audio, and even generate realistic images of individuals who never existed.

This concept dates back to the 2010s. One of the first deepfake videos to circulate on the internet was "Face2Face," published in 2016 by a group of researchers from Stanford University who sought to demonstrate the technology's capability to manipulate facial expressions.

Using two resources -- the facial recording of a source actor (a role filled by the researchers) and that of a target actor (presidents such as Vladimir Putin or Donald Trump) -- the researchers successfully reconstructed the facial expressions of the target actors using the expressions of the source actors, in real time and maintaining synchronization between voice and lip movement.

Another highly significant deepfake was a video of former President Obama in which he is heard saying: "We are entering an era where our enemies can make anyone say anything at any point in time." In reality, it was not Obama speaking those words but rather his deepfake.

For its part, "vishing," an abbreviation of "voice phishing," represents an intriguing and dangerous variant of classic phishing. Instead of sending deceptive emails, scammers call by phone to deceive their victims. Using AI-based voice generation software, criminals can replicate the tone, timbre, and resonance of a voice from an audio sample of just 30 seconds -- something they can easily access on social media.

Two Cases That Raised Alarms

Since these early examples, deepfake technology has experienced rapid advancement and widespread adoption in recent years, to the point where it has attracted the attention of the FBI.

In early 2023, the U.S. agency issued an alert after noticing an increase in reports of fabricated adult videos "featuring" victims, created from images or videos that criminals obtained from their social media accounts.

Live Deepfake Through a Video Call

In this context, over the past year, Chinese authorities have intensified surveillance and strengthened enforcement measures following the revelation of an AI-perpetrated fraud.

The incident took place on April 20, 2023, in the city of Baotou, in the Inner Mongolia region. A man surnamed Guo, an executive at a technology company in Fuzhou, Fujian province, received a video call through WeChat -- a highly popular messaging service in China -- from a friend requesting assistance.

The perpetrator used AI-powered face-swapping technology to impersonate Guo's friend. The "friend" mentioned that he was participating in a bidding process in another city and needed to use the company's account to submit a bid of 4.3 million yuan (approximately USD 622,000). During the video call, he promised to make the payment immediately and provided a bank account number for Guo to execute the transfer.

Without suspecting anything, Guo transferred the full amount and then called his real friend to confirm that the transfer had gone through. It was then that he received an unpleasant surprise: his friend denied having made any video call with him, much less having requested money.

Voices Fabricated with AI

Regarding vishing cases, in 2019 a media outlet reported for the first time on a case of AI-powered voice fraud. The Wall Street Journal published the story of a scam that targeted the British CEO of an energy company for the sum of 220,000 euros.

The perpetrators managed to create a voice so similar to that of the head of the German parent company that none of his colleagues in the United Kingdom could detect the fraud. According to the company's insurance firm, the caller stated that the request was urgent and instructed the CEO to make the payment within one hour. The CEO, hearing the familiar subtle German accent and vocal patterns of his superior, did not suspect anything.

It is presumed that the attackers used commercial voice-generation software to carry out the attack. The British executive followed the instructions and transferred the money to a Hungarian account, from which the criminals swiftly moved the funds to various locations.

How Can We Improve Our Online Security?

The evolution of technology in recent years challenges the authenticity of images, audio, and video. As a result, it has become essential to strengthen precautions regarding how we communicate remotely, through any modality.

As we have seen, social engineering primarily targets people. For this reason, the main security measure to avoid becoming a victim of this type of attack should focus on user behavior.

When receiving calls or messages with requests that seem unusual, even if they come from close contacts and present a credible story (whether through a frequently used communication channel or not), we must question and exercise caution. A recommended practice is to ask personal questions on the spot that only the real person would be able to answer.

However, users are not the only ones who can be deceived through vishing or deepfakes: facial or voice authentication systems can also be compromised. For several years, the ISO/IEC 30107 standard has established principles and methods for evaluating Presentation Attack Detection (PAD) mechanisms -- those designed to detect attempts to spoof biometric data (such as voice or facial features).
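Part 3 of the standard (ISO/IEC 30107-3) defines the two core error rates used to evaluate PAD mechanisms: APCER, the fraction of attack presentations (e.g., a deepfake replayed to a camera) wrongly accepted as genuine, and BPCER, the fraction of genuine presentations wrongly rejected as attacks. As a rough illustration of what those rates measure, here is a minimal sketch; the decision labels and toy numbers are invented for the example, not drawn from any real system:

```python
def apcer(attack_decisions):
    """Attack Presentation Classification Error Rate:
    share of attack presentations wrongly accepted as bona fide."""
    accepted = sum(1 for d in attack_decisions if d == "accept")
    return accepted / len(attack_decisions)

def bpcer(bona_fide_decisions):
    """Bona fide Presentation Classification Error Rate:
    share of genuine presentations wrongly rejected as attacks."""
    rejected = sum(1 for d in bona_fide_decisions if d == "reject")
    return rejected / len(bona_fide_decisions)

# Toy evaluation: 100 deepfake attempts, 100 genuine presentations
attacks = ["reject"] * 95 + ["accept"] * 5
genuine = ["accept"] * 98 + ["reject"] * 2
print(f"APCER: {apcer(attacks):.2%}")  # 5.00%
print(f"BPCER: {bpcer(genuine):.2%}")  # 2.00%
```

The trade-off between the two rates is the point: tightening a detector to reject more deepfakes (lower APCER) tends to reject more legitimate users as well (higher BPCER), which is why certification under the standard reports both.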

Daniel Alano, an information security management specialist at Isbel, emphasized that one way to improve our online security is to use applications certified under ISO/IEC 30107. Alano explained that "it is the standard used to measure whether one is vulnerable to identity impersonation attacks," although he cautioned that it is not infallible.

If you would like to explore cybercrime stories further, we invite you to listen to Malicioso, our podcast about the cyberattacks that brought the world to a standstill.


© 2025 Isbel S.A., a Quantik® brand
