
Can you spot deepfakes and fake videos? London Mayor says fake AI audio of him nearly caused ‘disorder’

Does this look real to you? It’s actually an AI-generated image (SamMino/Pixabay)

London Mayor Sadiq Khan said an AI-generated audio clip of him making inflammatory remarks before Armistice Day nearly caused “serious disorder”.

Mr Khan warned the existing laws covering deepfakes are not “fit for purpose”.

The fake audio clip of the mayor making disparaging comments about Remembrance Weekend went viral last November. In the clip, he can be heard saying the pro-Palestine march planned on the same weekend should be prioritised.

Imitating Mr Khan, the AI-generated voice said: “What’s important and paramount is the one-million-man Palestinian march takes place on Saturday.”

The voice can be heard saying: “I control the Met Police, they will do as the Mayor of London tells them,” and “the British public need to get a grip”.

Mr Khan told BBC Radio 4’s Why Do You Hate Me? podcast the “deeply upsetting” audio sounded a lot like him.

He said: “You know, we did get concerned very quickly about what impression it may create. I've got to be honest, it did sound a lot like me.

“When you've got friends and family who see this stuff, it's deeply upsetting. I mean, I've got two daughters, a wife, I've got, you know, siblings. I've got a mum,” he added.

The clip quickly went viral among far-right groups, triggering abusive comments aimed at the mayor.

One user who shared the clip on social media told the BBC he regretted it and that he had “made a big mistake”.

Fake images and AI-generated deepfakes, in which celebrities’ heads are superimposed onto other people’s bodies, are spreading on social media. The most recent case to spark international outrage involved Taylor Swift, after explicit deepfakes of the singer were circulated to millions online.

There are rising concerns this will lead to an epidemic of misinformation on social media.

There are lighter and darker sides to the phenomenon. Recently, Nicki Minaj reacted on Twitter to an episode of ITVX’s comedy Deep Fake Neighbour Wars, in which Minaj and Tom Holland live together and have Mark Zuckerberg as a neighbour.

In the show, the stars’ faces are deepfaked, but their voices are performed by somewhat over-the-top impersonators. “HELP!!! What in the AI shapeshifting cloning conspiracy theory is this?!?!! I hope the whole internet get deleted!!!” Minaj wrote on Twitter.

In more serious news, MoneySavingExpert founder Martin Lewis has warned deepfake videos are being used as part of investment scams.

Deepfake versions of Lewis’s own voice and face appear in these scam videos, conning less tech-savvy viewers into believing the scheme is real. “Govt & regulators must step up to stop big tech publishing such dangerous fakes. People’ll lose money and it’ll ruin lives,” Lewis wrote.

It’s time to get schooled up about deepfaked content. Here’s our guide on how you can spot and verify both AI-generated images and deepfake videos.

How to detect an AI-generated image

Twins, or just one face that has been distorted and replicated to make an image that is rather creepy on second look? (1tamara2/Pixabay)

If you see an image on the internet of an unusual event or a person in a compromising position, here are some steps you should take:

1. Verify the source

First, check out the source of the information you have received. Is it a news organisation, a government organisation or the verified account of a celebrity? Or is it just impersonating a legitimate organisation?

And is this something the famous figure would usually do? This image of Pope Francis wearing a Balenciaga jacket looks real, but press images have never shown him wearing anything other than papal robes.

2. Turn on the news

If something major has happened in the world, chances are the international news organisations will know about it first, to say nothing of the local channels near you. So you should be able to see breaking news alerts on TV, radio, news websites and news aggregator websites like Reddit.

But if the only people talking about the issue are on social media and there’s no live video footage, it could be a scam, like these fake photos that went viral in March, depicting Donald Trump being chased and arrested.

If you’re ever not sure about a viral image or the news story connected to it, check Snopes – the oldest and largest fact-checking website, which publishes investigative journalism debunking hoaxes, urban legends and false political claims.

3. Look at the image critically

An AI-generated image of a rabbit and some carrots that looks more noticeably fake, if you look closely (Susan Cipriano/Pixabay)

It’s also a good idea to look at the image critically. With a Photoshopped photo, a random stray arm, leg or hand is a telltale sign the image has been doctored.

Although AI-generated images are usually more convincing than that, there’s always a certain quality about the image that looks rather unreal, if you look at it closely. Take this rabbit, for instance.

Look up “rabbit” on Google Images and you will see many different breeds of bunnies. Take another look at this AI-generated image of the rabbit, and you will notice that its face and body have been artificially generated from the faces of several rabbits.

The carrots next to it are also artificial-looking and each has an almost identical shape. If you’ve ever had to chop up a bag of carrots, you’d know that they are all different from each other.

4. Use an AI image detector

When in doubt, always run the image through an AI image detector. You can do this on the PC by right-clicking the image on Twitter and clicking “Save image as...” on the menu that appears.

Then go to one of these free AI image detector services: Illuminarty, Optic’s AI or Not, or Everypixel Aesthetics.

This photo of a new-born infant, which looks quite believable, is actually an AI-generated image (Vicki Hamilton/Pixabay)

These services use a type of AI called a neural network, which analyses the image you upload and compares it against the known traits, patterns and characteristics of images produced by various AI models, as well as of typical human-made photos, to determine the content’s origin.

The AI image detectors will scan the image you upload and, within seconds, give you an estimate of how likely it is that the image is AI-generated. You can play around with the services by uploading one of your own photos and comparing how it rates against an AI-generated picture – there should be quite a big percentage difference between the real photo and the fake one.
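The “how likely is this fake” score these sites report can be pictured with a toy sketch. The code below is purely illustrative – the function, the smoothness heuristic and the thresholds are all made up for this example, and real detectors use trained neural networks on far richer features. It simply measures how much neighbouring pixel values vary and squashes that statistic through a logistic function into a 0-to-1 score, on the rough intuition that AI-generated images are often unnaturally smooth.

```python
import math

def toy_fake_probability(pixels):
    """Toy illustration only: maps a crude pixel-smoothness statistic
    to a 0-1 'probability of being AI-generated'. Real detectors use
    trained neural networks, not a hand-picked heuristic like this."""
    # Average absolute difference between neighbouring pixel values (0-255)
    diffs = [abs(a - b) for a, b in zip(pixels, pixels[1:])]
    mean_diff = sum(diffs) / len(diffs)
    # Logistic squash: smoother images (small mean_diff) score closer to 1
    return 1 / (1 + math.exp((mean_diff - 20) / 5))

# Synthetic examples: a very smooth gradient vs a noisy pattern
smooth = list(range(0, 200, 2))               # neighbours differ by only 2
noisy = [(i * 97) % 256 for i in range(100)]  # large pseudo-random jumps

print(round(toy_fake_probability(smooth), 2))  # smooth: score near 1
print(round(toy_fake_probability(noisy), 2))   # noisy: score near 0
```

The point of the sketch is only the shape of the output: one number between 0 and 1, which is what the real services show you after analysing an upload.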

How to spot deepfake videos

It is always worth using your common sense.

If a celebrity is dead, like John Lennon from the Beatles, he’s unlikely to be talking to a high-definition camera in a modern TV studio, like this example shared by generative AI video platform HeyGen.

It’s a bit less obvious when people use deepfake technology to create videos of politicians, but it’s still possible to tell that it’s fake.

First, check to see whether the lips sync up with what the figure on screen is saying. Then listen out for the voice. Do the accent and cadence sound like a real video of the person? Usually, there will be a slight difference.

You should also look closely at the video to see if the lighting and shadows look strange, if the head movement looks like it belongs to someone else’s head or if there are unusual skin tones.

Although some people have used deepfake technology for trickery, other content creators are now using AI to help them create new entertaining content.

A YouTube channel called Fantasy Images, which has almost 59,000 subscribers, primarily makes humorous parody videos showing characters from popular fantasy films, reimagined as fitness-mad gym addicts.

The channel’s Harry Spotter – The Boy Who Lifted video, which has been viewed 3.8 million times in the last month, is made using the Midjourney generative AI art service, together with speech generated by the voice-cloning software ElevenLabs.