Over the years, a trend known as “deepfakes” has slowly crossed the line from comical and entertaining to downright scary. But researchers appear to have created a new technology that might come close to putting a stop to them.

What Is It?

If you’ve never heard of it, a deepfake is basically a manipulated video created with high-tech artificial intelligence to both sound and look real, even though it’s not. You can easily find compilations of funny deepfakes on YouTube. One in particular, of Bill Hader channeling Tom Cruise, is definitely creepy to watch, but it remains good entertainment. Still, that video shows that as the technology has become more advanced, deepfakes have become more and more convincing, and that is a scary thought.

An MIT Technology Review report commented on deepfakes, saying, “A machine designed to create realistic fakes is a perfect weapon for purveyors of fake news who want to influence everything from stock prices to elections.”

There was an example of this in a BuzzFeed video created back in April 2018. It showcased how real deepfakes can look by mapping actor Jordan Peele’s mouth onto President Obama’s, making it appear as though Obama is saying things you wouldn’t normally hear from a former president. Many other popular deepfakes have been created using high-profile figures, such as politicians like Donald Trump and Hillary Clinton, and it’s frightening how real they can look.

Recognizing Deepfakes

There is some good news, though, when it comes to putting a stop to these deepfakes. Researchers at the University of California, Riverside have developed a computer system trained to recognize altered images at the pixel level with extreme precision, a way to combat these very real-looking deepfakes.

The research is led by Amit Roy-Chowdhury’s Video Computing Group, and Roy-Chowdhury told Science Daily that they “trained the system to distinguish between manipulated and nonmanipulated images, and now if you give it a new image it is able to provide a probability that that image is manipulated or not, and to localize the region of the image where the manipulation occurred.”
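
To give a rough sense of what a detector like that does, here is a minimal, hypothetical sketch: a small neural network that outputs a per-pixel probability of manipulation and averages it into an image-level score. This is not the UC Riverside team's actual model; the framework (PyTorch), architecture, and layer sizes below are illustrative assumptions.

import torch
import torch.nn as nn

class ManipulationDetector(nn.Module):
    """Toy detector: per-pixel manipulation map plus an image-level score."""

    def __init__(self):
        super().__init__()
        # Tiny encoder; a real system would be far deeper and trained on a
        # large corpus of manipulated and original images.
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, kernel_size=1),  # one "manipulated?" logit per pixel
        )

    def forward(self, image):
        # image: (batch, 3, H, W) RGB tensor with values in [0, 1]
        pixel_logits = self.net(image)
        pixel_probs = torch.sigmoid(pixel_logits)     # per-pixel manipulation probability
        image_prob = pixel_probs.mean(dim=(1, 2, 3))  # crude image-level score
        return image_prob, pixel_probs

if __name__ == "__main__":
    model = ManipulationDetector()
    sample = torch.rand(1, 3, 256, 256)  # stand-in for a real photo
    score, heatmap = model(sample)
    print(f"probability image is manipulated: {score.item():.2f}")
    # heatmap highlights which regions the model considers altered

In a real system, a network along these lines would be trained on pairs of original and manipulated images so that the per-pixel map learns to light up around the regions, such as object boundaries, where the manipulation occurred.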

Though the team is currently working with still images, Roy-Chowdhury said the technology can still sometimes be used to identify deepfakes. The hope is that it can stop deepfakes from being used as political weapons, but Roy-Chowdhury says there is still a lot of work to be done before all deepfakes can be recognized by automated tools. “This is kind of a cat and mouse game. This whole area of cybersecurity is in some ways trying to find better defense mechanisms, but then the attacker also finds better mechanisms,” Roy-Chowdhury says.

Given how quickly the technology has advanced over the past decade, you can only expect deepfakes to become even more convincing in the years to come. Roy-Chowdhury hopes that the technology his team is creating can soon be used to detect these deepfakes at a more advanced level.