Who is working to end the threat of deepfakes created by artificial intelligence


A diagram showing the same photo run through an AI image generator twice: one output shows two men dancing, while the other, produced from the protected version of the photo, is an unrealistic, distorted image.

The above images of Trevor Noah and Michael Kosta show what happens when they are run through an AI image generator with the prompt “two men dancing,” both before and after the photo was modified to resist AI image manipulation.
Image: Aleksander Madry

Like many of the world’s best and worst ideas, MIT researchers’ plan to combat AI-generated deepfakes came about while someone was watching their favorite news show.

On the October 25 episode of The Daily Show with Trevor Noah, OpenAI’s chief technology officer, Mira Murati, talked about AI-generated images. While she could likely discuss OpenAI’s DALL-E 2 image generator in great detail, it wasn’t a very in-depth interview; after all, it was pitched at viewers who likely understand little or nothing about AI art. Still, it offered some nuggets of thought. Noah asked Murati whether there was a way to ensure that AI programs don’t lead us into a world where “nothing is real, and everything is real, right?”

Last week, researchers at the Massachusetts Institute of Technology said they wanted to answer that question. They have created a relatively simple program that uses data poisoning techniques to subtly perturb the pixels within an image, creating invisible noise that renders AI art generators unable to produce realistic deepfakes based on the images they are fed. Aleksander Madry, a computer science professor at MIT, worked with the team of researchers to develop the program, and they posted their findings to Twitter and the lab’s blog.

Using photos of Noah with Daily Show comedian Michael Kosta, they showed how this imperceptible noising disrupts an AI image generator’s ability to create a new image from the original. The researchers suggested that anyone planning to upload an image to the internet could first run it through their program, essentially immunizing it against AI image generators.

Hadi Salman, a PhD student at MIT whose work revolves around machine learning models, told Gizmodo in a phone interview that the system he helped develop takes only a few seconds to introduce noise into a photo. Higher-resolution images work even better, he said, since they include more pixels that can be minutely perturbed.
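The key constraint is that the added noise must be imperceptible: each pixel can only move by a tiny amount. As a rough, hypothetical sketch of that idea (not the MIT team’s actual code, which optimizes the noise adversarially against a diffusion model), consider a bounded-perturbation function; all names and parameters here are illustrative:

```python
import numpy as np

# Hypothetical sketch of "immunizing" a photo. The real system chooses the
# noise via gradient-based optimization to maximally disrupt a diffusion
# model; here we only illustrate the imperceptibility constraint: every
# pixel moves by at most epsilon (an L-infinity bound).

def immunize(image: np.ndarray, epsilon: float = 4 / 255, seed: int = 0) -> np.ndarray:
    """Return a copy of `image` (values in [0, 1]) with bounded noise added."""
    rng = np.random.default_rng(seed)
    # Random noise stands in for the adversarially optimized perturbation.
    noise = rng.uniform(-epsilon, epsilon, size=image.shape)
    return np.clip(image + noise, 0.0, 1.0)

photo = np.full((256, 256, 3), 0.5)  # stand-in for an uploaded photo
protected = immunize(photo)
max_change = float(np.abs(protected - photo).max())
```

Because the per-pixel change is bounded at a few 255ths, the protected photo is visually indistinguishable from the original to a human viewer, even though every pixel has been touched.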

Google is creating its own AI image generator called Imagen, though few people have been able to put that system through its paces. The company is also working on an AI video system. Salman said they haven’t tested their system on video, but in theory it should still work, although MIT’s software would need to perturb each frame of the video individually, which could mean tens of thousands of frames for any video longer than a few minutes.

Can data poisoning be applied to AI generators on a large scale?

Salman said he could imagine a future where companies, even those that produce AI models, offer ways to certify that uploaded photos are immunized against AI models. Of course, this wouldn’t help the millions of images already uploaded to open source libraries like LAION, but it could make a difference for images uploaded in the future.

Madry also told Gizmodo over the phone that this system, while successful in many of their tests, is more of a proof of concept than a finished product. The researchers’ program proves that there are ways to defeat deepfakes before they happen.

He said companies need to learn about this technology and implement it in their own systems to make them more tamper-resistant. Moreover, companies would need to ensure that future releases of their diffusion models, or any other kind of AI image generator, cannot simply ignore the noise and create new fake images.

Above left is the original photo of Trevor Noah and Michael Kosta. At top right is an image created with an AI image generator, and at bottom right is what happened when the researchers tried the same thing after inserting imperceptible noise into the original image.

Image: MIT/Aleksander Madry/Gizmodo

“What really has to happen going forward is that all of the companies that develop diffusion models have to provide the capability for healthy, robust immunization,” Madry said.

Other machine learning experts have found points to criticize in the MIT researchers’ work.

Florian Tramèr, a computer science professor at ETH Zurich in Switzerland, tweeted that the main difficulty is that you are essentially trying to deceive all future attempts to create a deepfake of an image. Tramèr co-authored a 2021 paper published at the International Conference on Learning Representations which essentially found that data poisoning, like what the MIT system does with its image noise, won’t prevent future systems from finding ways around it. What’s more, creating such data poisoning systems would create an “arms race” between commercial AI image generators and those trying to prevent deepfakes.

There have been other data poisoning programs intended to deal with AI-based surveillance, such as Fawkes (yes, like the fifth of November), developed by researchers at the University of Chicago. Fawkes also distorts pixels in photos in a way that hinders companies like Clearview AI from achieving accurate facial recognition. Other researchers from the University of Melbourne in Australia and Peking University in China have also explored systems that can create “unlearnable examples” that AI image generators cannot use.

The problem, as Fawkes developer Emily Wenger pointed out in an interview with MIT Technology Review, is that programs like Microsoft Azure have been able to beat Fawkes and detect faces despite its adversarial techniques.

Gautam Kamath, a professor of computer science at the University of Waterloo in Ontario, Canada, told Gizmodo in a Zoom interview that in the “cat-and-mouse game” between those who build AI models and those who find ways to defeat them, the people building the new AI systems seem to have the advantage, because once an image appears on the internet, it never goes away. Therefore, if an AI system can bypass the attempts to prevent it from being deepfaked, there is no real way to remedy it.

“It is possible, if not likely, that in the future we will be able to evade any defenses you place on that particular image,” Kamath said. “And once it’s there, you can’t get it back.”

Of course, there are some AI systems that can detect deepfake videos, and there are ways of training people to spot the small inconsistencies that reveal a falsified video. The question is: will there ever come a time when neither a human nor a machine can discern whether an image or video has been manipulated?

What about the largest AI generator companies?

For Madry and Salman, the answer lies in getting AI companies to play ball. Madry said they are looking forward to reaching out to some of the major AI generator companies to see if they are interested in adopting their proposed system, though of course it is still early days, and the MIT team is still working on a public API that would let users immunize their own photos (the code is available here).

In this way, it all depends on the people who make the AI image platforms. While OpenAI’s Murati told Noah on that October episode that they have “some guardrails” for their system, she further claimed that they don’t allow people to create images based on public figures (a somewhat ambiguous term in the age of social media, when nearly everyone has a public face). The team is also working on more filters that will restrict the system from creating violent or sexual images.

Back in September, OpenAI announced users could once again upload human faces to its system, claiming it had built ways to prevent users from showing faces in violent or sexual contexts. It also asked users not to upload photos of people without their consent, but asking the public internet to keep its promises is a lot to hope for.

However, that doesn’t mean other AI generators, and the people who made them, are as game to moderate user-generated content. Stability AI, the company behind Stable Diffusion, has proven more reluctant to introduce any barriers to people creating porn or derivative artwork using its system. And while OpenAI has been relatively open about trying to prevent its system from displaying bias in the images it generates, Stability AI has kept silent.

Emad Mostaque, CEO of Stability AI, has advocated for a system free of governmental or institutional influence, and so far has pushed back against calls to place more constraints on his AI model. He has said he believes image generation will be “solved within a year,” allowing users to create “anything you can dream of.” Of course, this is just hype talk, but it shows that Mostaque is not willing to back down from pushing the technology further and further.

Still, the MIT researchers remain steadfast.

“I think there are a lot of uncomfortable questions about a world where this type of technology is accessible and, really, is becoming ever more user-friendly,” Madry said. “We’re really happy, and we’re really excited, that we can now do something about this consensually.”

