The Voice That Stole Millions -Hong Kong, 2024-
In early February 2024, the Hong Kong police announced a shocking case of fraud involving advanced AI technology and deepfake deception.
The victim was Arup, a British design and engineering firm renowned for iconic projects such as the Sydney Opera House and the Elizabeth line.
According to multiple reports, the scam began with a phishing email to a Hong Kong-based finance employee, which led to a video conference with senior executives including the company’s CFO.
The executives gave urgent instructions to transfer funds to designated accounts without delay, and the worker complied.
However, every face and voice on the call, convincing as they seemed, turned out to be entirely fabricated using AI-generated deepfakes.
In the end, the employee made 15 separate transfers totaling HK$200 million (about US$25 million).
The fraud was only discovered days later, and Arup declined to share further details, leaving the whole world in shock.
This case is considered one of the most sophisticated deepfake scams.
Until then, criminals could typically impersonate only a single person on a call, limiting the scam to a one-on-one interaction with the victim.
But advancing technology has enabled AI to place multiple fabricated participants on a single call, making the deception far more convincing. This highlights a chilling reality: even highly trained professionals at reputable firms can fall victim to such fraud.
From a legal perspective, this case raises urgent questions: how do existing laws handle evidence when even video and voice can be convincingly faked?
Personally, I find this case both fascinating and unsettling.
It’s a haunting example of how the technologies we build to enhance communication and efficiency can be weaponized against us.
The law must evolve, not only to punish such crimes, but to understand the risks of the world technology has rewritten.
-References-
Hoi-Ying, L. (2024, May 17). UK multinational Arup confirmed as victim of HK$200 million deepfake scam that used digital version of CFO to dupe Hong Kong employee. South China Morning Post. https://www.scmp.com/news/hong-kong/law-and-crime/article/3263151/uk-multinational-arup-confirmed-victim-hk200-million-deepfake-scam-used-digital-version-cfo-dupe
Deepfake heist: Criminals steal $25M via fake video conference. (n.d.). HSToday. https://www.hstoday.us/subject-matter-areas/cybersecurity/deepfake-heist-criminals-steal-25m-via-fake-video-conference/
This is a highly interesting, and scary, case that I hadn't read about in the news. I expect that we'll be seeing more and more such deepfake-backed crimes in the near future. The payoff is too great for organized crime gangs to resist.
Remember that you must provide, at least, two references for each posting. If you provide no source at all, the reader might assume that the post itself has been AI generated. Which would be ironic since this post is about AI-generated crime.
I'm glad that you added your own take on the crime in the final section of your blog entry. Do you usually use expressions like "haunting example..."? It's a nice turn of phrase.
I'm glad to see that you used two sources. As much as possible, try to avoid sources that are behind paywalls, such as the one from the South China Morning Post. That makes it hard for anyone to access the information without a paid subscription. Also, it would be more user friendly to make the links in the citations "clickable." Otherwise, to access the sites, readers must copy and paste the URLs into their browser.
I think it's scary how humanlike AI is becoming. I remember when I texted about suicidal thoughts in a chat. It wasn't serious, but instead of answering the question I asked, the AI started listing all the numbers I could call to talk to someone.