Beware! Global firm almost lost $500K to deepfake AI scam

The scammers used audio and video deepfakes along with fake legal documents to orchestrate the heist.
It’s a case that reads like a cybercrime thriller: a finance director at a multinational corporation narrowly avoided losing US$499,000 in a business impersonation scam powered by deepfake technology. Thanks to swift action and international cooperation, the money was successfully traced and withheld before it vanished into the digital ether.
The incident, which occurred in late March, highlights the rising threat of AI-driven deception in the corporate world, where scams are no longer clumsy phishing emails but sophisticated simulations of senior leadership – right down to their voices and video likenesses.
Deepfakes are AI-generated media – such as videos, images, or audio – that convincingly mimic real people. They are typically created using techniques like Generative Adversarial Networks (GANs), in which two neural networks compete – one generating fake content, the other trying to detect it – until the output becomes difficult to distinguish from authentic material.
For example, videos may replace one person’s face with another’s, making them appear as though the person said or did something they never did.
Audio deepfakes, meanwhile, replicate a person’s voice to fabricate conversations or verbal approvals.
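The adversarial loop behind GANs can be illustrated with a deliberately tiny sketch. The model below is not a deepfake generator – real systems use deep networks over images or audio – but it shows the same principle on one-dimensional data: a generator learns to produce samples that a discriminator can no longer separate from the "real" distribution. All names, distributions, and learning rates here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# "Authentic" data: samples from a normal distribution centred at 4.
def sample_real(n):
    return rng.normal(4.0, 1.0, n)

# Generator: maps random noise z to a sample via one affine transform.
g_w, g_b = 0.1, 0.0
# Discriminator: scores a sample as real (near 1) or fake (near 0).
d_w, d_b = 0.1, 0.0

lr = 0.01
for step in range(2000):
    z = rng.normal(0.0, 1.0, 32)
    fake = g_w * z + g_b
    real = sample_real(32)

    # Discriminator update: push D(real) toward 1 and D(fake) toward 0
    # (gradients of the binary cross-entropy loss).
    p_real = sigmoid(d_w * real + d_b)
    p_fake = sigmoid(d_w * fake + d_b)
    grad_w = np.mean((p_real - 1) * real) + np.mean(p_fake * fake)
    grad_b = np.mean(p_real - 1) + np.mean(p_fake)
    d_w -= lr * grad_w
    d_b -= lr * grad_b

    # Generator update: push D(fake) toward 1, i.e. fool the discriminator
    # (gradient flows through the discriminator into the generator).
    p_fake = sigmoid(d_w * fake + d_b)
    g_grad = (p_fake - 1) * d_w
    g_w -= lr * np.mean(g_grad * z)
    g_b -= lr * np.mean(g_grad)
```

After training, the generator's output has drifted from its starting point near zero toward the real distribution – the same pressure that, at vastly larger scale, makes deepfake faces and voices converge on their targets.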
Anatomy of a deepfake scam
The scam began innocuously enough. On 24 March, the finance director received a WhatsApp message from someone posing as the company’s chief financial officer. The scammer claimed there would be a video conference in two days’ time to discuss a regional business restructuring and advised the director to speak with a legal partner involved in the matter.
Soon after, the director received a call from a so-called lawyer, who stressed the confidentiality of the project and had him sign a non-disclosure agreement – a classic move to lend legitimacy and silence scepticism.
Then came a sudden change: the Zoom call was brought forward to the very next day, 25 March.
On the call were individuals the director believed to be the company’s chief executive officer and other senior executives. In reality, these were deepfake avatars – meticulously engineered digital doppelgängers. The scam was reinforced with further exchanges with the fake lawyer, who instructed the director to transfer over $499,000 from the company’s HSBC account to a local corporate bank account.
The transaction was completed on 26 March. Unknown to the victim, the recipient account was a money mule account, set up to launder stolen funds. By the time he grew suspicious – when the scammers had the audacity to request another $1.4 million – it was already 27 March.
Damage control and recovery
Reacting promptly, the finance director contacted HSBC, which flagged the case to the Singapore Police Force’s Anti-Scam Centre. The ASC, in turn, engaged its Hong Kong counterpart, the Anti-Deception Coordination Centre, a fellow member of the regional FRONTIER+ initiative. Together, they tracked down the majority of the siphoned funds – over $494,000 – which had already been rerouted to bank accounts in Hong Kong.
By 28 March, just four days after the scam began, the authorities had successfully frozen the overseas accounts. The ASC also secured the remaining $5,000 still sitting in the local mule account. An active investigation is now under way.
This near-miss would have been a devastating loss for any business, and it underscores just how quickly trust can be manipulated when AI-driven tools are weaponised. The use of deepfake technology to convincingly impersonate executives marks a chilling evolution in scam sophistication.
A rising tide of fraud in Singapore
Unfortunately, this case is not an isolated incident – it is part of a wider pattern. Between 2023 and 2024, Singapore saw a staggering 207% increase in identity fraud cases, the highest year-on-year rise in the Asia-Pacific region. Deepfake-related fraud in particular surged by 240%, placing Singapore among the region’s most affected nations.
More than half (56%) of businesses in Singapore reported encountering audio-based deepfake scams, where artificial intelligence is used to mimic the voices of senior executives to authorise fake transactions or request sensitive information. The voice may be familiar, the tone reassuring – but it’s all smoke and mirrors.
Public figures, too, have become prime targets. In April 2024, several Members of Parliament, including Foreign Affairs Minister Vivian Balakrishnan, received blackmail letters containing manipulated, obscene images. Even former Prime Minister Lee Hsien Loong was not spared: his likeness was used in a deepfake video falsely promoting a cryptocurrency platform, prompting a swift rebuttal from the Prime Minister’s Office.
Scams in the age of simulations
Scammers are exploiting not just human trust, but the power of generative AI. Live Zoom calls are now a vehicle for impersonation. Deepfake videos and voice clones are used to replicate executives with uncanny accuracy. Legal documents like board memos and NDAs are fabricated to add an air of legitimacy. The result? A convincing mirage of authority that can lead companies to part with millions.
The playbook, however, is becoming predictable. There’s usually an urgent business reason – a restructuring, a new investment, or a legal matter. The employee is isolated through confidentiality clauses. Pressure is applied swiftly. The money changes hands. And by the time reality dawns, it may already be too late.
Strengthening defences against deepfakes
Given the rising threat, the Singapore Police Force and cybersecurity experts have issued urgent recommendations to help businesses shore up their defences:
- Establish clear protocols to verify the identity of senior leadership on video or voice calls, especially when large financial transactions are involved.
- Cross-check urgent instructions through secure, alternative communication channels. Don’t rely solely on WhatsApp or video conferencing tools.
- Train staff to spot red flags such as unexpected requests, high confidentiality demands, or unfamiliar legal contacts.
- Implement multi-factor authentication and adopt real-time deepfake detection tools where possible.
- Conduct regular awareness briefings on the latest scam tactics, including audio and visual manipulation techniques.
- If a scam is suspected, immediately notify the company’s bank and file a police report to contain the damage.
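The first two recommendations – out-of-band verification for large or unusual instructions – can be expressed as a simple policy rule. The sketch below is purely illustrative (the channel names, threshold, and function are hypothetical, not any bank's real API): the point is that an instruction arriving over chat or video conferencing is never sufficient on its own, and large transfers always require confirmation through an independent channel.

```python
from dataclasses import dataclass

# Illustrative policy parameters -- every organisation would set its own.
TRUSTED_CHANNELS = {"approved_banking_portal", "in_person"}
CALLBACK_THRESHOLD = 10_000  # transfers above this always need a callback

@dataclass
class TransferRequest:
    amount: float
    channel: str              # where the instruction arrived
    callback_confirmed: bool  # confirmed via a second, independent channel?

def may_execute(req: TransferRequest) -> bool:
    """Return True only if the request passes out-of-band verification."""
    # Instructions over chat apps or video calls are never enough by
    # themselves -- a deepfake can occupy either end of those channels.
    if req.channel not in TRUSTED_CHANNELS:
        return req.callback_confirmed
    # Even trusted channels require a callback above the threshold.
    if req.amount > CALLBACK_THRESHOLD:
        return req.callback_confirmed
    return True
```

Under such a rule, the $499,000 instruction in this case – delivered via WhatsApp and a Zoom call, with no independent confirmation – would have been blocked automatically, regardless of how convincing the faces and voices on the call appeared.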
In Singapore, the ScamShield Helpline (1799) is available 24/7 to assist those who believe they may have fallen victim.
The incident involving the global firm serves as a stark reminder that in the age of AI, not everything we see – or hear – can be trusted. The old adage “trust but verify” needs a digital upgrade.
For business and HR leaders alike, this is not just an IT problem or a finance risk – it’s a boardroom issue that affects brand trust, employee safety, and operational resilience.
In short, staying ahead of deepfake scams means adopting a mindset of digital vigilance. Because in a world where faces and voices can be faked with frightening ease, only a well-trained eye and a cautious hand can separate fact from fiction.