These guys used AI to impersonate company executives (deepfake) and stole US$25.6 million

Leonard Sengere Avatar

This AI-generated image of a video call seems appropriate. Looks awful, doesn’t it?

Sci-fi is upon us. Those movies we grew up watching are becoming reality. Today we’ll be discussing a successful Hong Kong deepfake scam that earned the criminals $25.6 million.

For those who might have forgotten, deepfakes involve the use of artificial intelligence to manipulate visual and audio data, generating highly realistic and often deceptive content. So, essentially, using AI to create convincing fake videos or audio.

You may have seen videos of, say, Trump singing “I’m a Barbie Girl” and that’s the kind of stuff we are talking about.

I particularly love heist movies and so this story scares me as much as it fascinates me. Here’s what happened:

$25.6 million AI-heist

A Hong Kong-based multinational company was swindled of HK$200 million (US$25.6 million) in a scam involving deepfake technology.

Scammers used digitally recreated versions of the company’s CFO and other employees in a convincing video conference, instructing an employee to transfer funds.

My friend, these guys had a video meeting with an employee of the unnamed company where the only real person was that employee. There were multiple deepfakes of other company employees that fooled this particular employee into thinking they were real.

In the meeting, the fake meeting, the employee was instructed to transfer funds totaling $25.6 million to 15 different bank accounts.

How was this person fooled, though?

If you’ve seen some of these deepfakes, they are not that convincing. So, how could the scammers fool the employee? It could be that the technology has improved; after all, AI is advancing rapidly. I think, though, that a video call is the best way to fool someone.

We’ve all been in Zoom calls where every single person looks grainy and pixelated, even when you’re on good internet. Deepfakes can hide behind that: choppy video and out-of-sync audio will not raise questions on a video call.

So, I can see myself being fooled as well, especially if it’s a boss I don’t interact with all that personally. I wouldn’t be able to pick up on strange word choices and the like.

That was the scammers’ plan, I reckon. Hong Kong police say the scammers did not engage directly with the victim beyond requesting a self-introduction, which made the scam more convincing.

The first of many

I’m afraid this is just the first of many high-stakes deepfake scams we are going to see. The tech has improved, and many of our interactions now happen over the internet; those two things guarantee that we will see more of this.

We will have to rethink our security protocols.

In the Hong Kong company’s case, I’m surprised that this was all it took to authorise transfers of $25.6 million. The scammers first sent an email with the transfer instructions and then initiated a video call. The employee said they found the email suspicious, but the video meeting quelled their doubts.

Hong Kong police advise that this is what video calls of the future will look like: ask people to move their heads or answer some questions to confirm their identity, particularly when financial transactions are at stake.

“Uh, sorry boss but I need you to move your head, to the left, to the right. Okay, now look up and then down and we can get on with the meeting.” “Actually, tell me the colour of your coffee mug and we are golden.”

Video calls are going to start out with exercises and a game of 20 questions. It’s gonna be fun.

There’s a more technical solution to the problem, but it’s no fun; it doesn’t involve you commanding your boss to move their head.

The approach involves issuing each employee a cryptographic key pair. Trust is established by having individuals sign each other’s public keys during face-to-face meetings. These signed keys can then be used to authenticate participants in subsequent remote communications. Boo, not fun.
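To make the idea concrete, here is a minimal challenge-response sketch of that authentication step. Real deployments would use public-key signatures over the signed keys described above (e.g. Ed25519 via a crypto library); as a standard-library-only stand-in, this sketch uses a shared secret established in person and an HMAC. All names here are illustrative, not from any real system.

```python
import hashlib
import hmac
import secrets

# Secret established at a face-to-face meeting. In the real scheme this
# would instead be a signed public key, verified with a signature check.
shared_secret = secrets.token_bytes(32)

def make_challenge() -> bytes:
    """The verifier sends a fresh random challenge at the start of the call."""
    return secrets.token_bytes(16)

def respond(secret: bytes, challenge: bytes) -> str:
    """The participant proves their identity by keying the challenge."""
    return hmac.new(secret, challenge, hashlib.sha256).hexdigest()

def verify(secret: bytes, challenge: bytes, response: str) -> bool:
    """The verifier recomputes the response and compares in constant time."""
    expected = hmac.new(secret, challenge, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, response)

challenge = make_challenge()
response = respond(shared_secret, challenge)
print(verify(shared_secret, challenge, response))             # genuine caller
print(verify(secrets.token_bytes(32), challenge, response))   # impostor's key
```

Because the challenge is fresh for every call, a deepfake operator cannot replay an answer captured earlier; without the secret (or, in the full scheme, the private key) they cannot produce a valid response.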


13 comments

What’s your take?


  1. Anonymous

    The Zimbabwean in me thinks the employee was in on it 🤣

    1. Smokie

My thoughts exactly when I was going through the article. We’ve been scammed plenty of times and we’ve done our share of scamming too… even your parents might rob you… I can’t even trust myself with my own savings… I could steal from myself, buy Castle Lager and end up with nothing.

      1. Leonard Sengere

🤣🤣🤣 “Stealing from yourself” got me.

    2. Leonard Sengere

I have to admit that that was my first thought as well. There is no way the employee is not in on it. Or so my Zimbo brain thinks.

  2. Filtered Reality

There was another big scam with a deepfake voice over a phone call not too long ago. And have you seen the streamers in China use face filters live? A chunky middle-aged woman can turn into a cute 20-something, and a bearded guy in a dress and long hair can turn into a cute, petite girl, raking in all the donations! It’s wild out there! Don’t believe what you see and hear if it has to do with money.

    1. Leonard Sengere

      It’s only gonna get worse. This remote business is gonna be tough.

  3. ThMoxinater

Is that picture showing the actual AI-rendered faces of the employees?

    1. Radio chinyi zviya

      No. It is AI generated, but the image is for illustrative purposes only.

      1. Leonard Sengere

        Yes, just for illustrative purposes.

  4. Deeler b

yes, deepfakes are going to be a nightmare, but they will probably be reduced in the same way reCAPTCHA is being used to reduce bot activity

    1. Leonard Sengere

      True. Security researchers are working on combating deepfakes. They won’t stamp it out but they will help bring sanity to the whole thing.

      1. Filtered Reality

All the big generative media guys are committing to some form of embedded watermarking. So at least for the normie-level AIs, applications like editing or conferencing software could integrate detectors, like how antivirus software gets definitions. The problem is that advanced players using open-source models, or state entities, can strip out the watermarking, meaning other AIs will have to guess whether it’s real or not.

  5. Artificial Intelligence

I think there is no way the employee could have avoided this. Just imagine: in maybe your first weeks, the boss and the other people who gave you the job tell you to do something. You just follow the order, or else you might get fired.

2023 © Techzim All rights reserved. Hosted By Cloud Unboxed