Facebook develops a method to detect deepfakes and trace their origin

Deepfakes have been around for a while now, but they have recently become so realistic that it is difficult to tell a deepfake from a legitimate video. For those unfamiliar with the term, a deepfake takes the face and voice of a famous person and creates a video of them saying or doing things they never said or did. Deepfakes worry many people most when used during elections, because they can influence voters by making it appear that a candidate did or said something they really did not.

Facebook has announced that it has worked with researchers at Michigan State University (MSU) to develop a method for detecting deepfakes and tracing their origin. Facebook says the new technique relies on reverse engineering, working back from an AI-generated image to uncover the generative model used to produce it. Most work on deepfakes focuses on detection: determining whether an image is original or generated.
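To make the detection task concrete, it can be framed as binary classification over images. The sketch below is a minimal, hypothetical PyTorch detector; the architecture and names are illustrative assumptions, not the actual Facebook/MSU model.

```python
import torch
import torch.nn as nn

class DeepfakeDetector(nn.Module):
    """Toy real-vs-fake classifier (illustrative, not the MSU model)."""
    def __init__(self):
        super().__init__()
        # Small convolutional backbone; a production detector would
        # typically start from a pretrained network such as a ResNet.
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, 1)  # single logit: fake vs. real

    def forward(self, x):
        h = self.features(x).flatten(1)
        return self.classifier(h)

# Usage: estimated probability that a 224x224 RGB image is a deepfake.
detector = DeepfakeDetector()
image = torch.rand(1, 3, 224, 224)  # placeholder input
p_fake = torch.sigmoid(detector(image))
```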

In addition to detecting deepfakes, Facebook says researchers can also perform image attribution, determining which generative model was used to produce a deepfake. The limiting factor in image attribution, however, is that most deepfakes are made with models that were not seen during training, so during attribution those images can only be marked as made by an unknown model.
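Attribution can be framed as multi-class classification over a closed set of known generative models, with low-confidence predictions falling back to the "unknown model" label the article describes. The following sketch is an assumption about that general setup, not the paper's exact method; the model list and threshold are hypothetical.

```python
import torch
import torch.nn as nn

KNOWN_MODELS = ["StyleGAN2", "ProGAN", "CycleGAN"]  # hypothetical closed set

class Attributor(nn.Module):
    """Classify a fake image to one of the known generative models."""
    def __init__(self, num_models: int):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(32, num_models)

    def forward(self, x):
        return self.head(self.backbone(x))

def attribute(logits: torch.Tensor, threshold: float = 0.9) -> str:
    # If no known model gets a confident score, report "unknown model",
    # mirroring the limitation described in the article.
    probs = torch.softmax(logits, dim=-1)
    conf, idx = probs.max(dim=-1)
    return KNOWN_MODELS[int(idx)] if conf.item() >= threshold else "unknown model"

logits = Attributor(len(KNOWN_MODELS))(torch.rand(1, 3, 128, 128))
print(attribute(logits[0]))
```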

Facebook and MSU researchers have taken image attribution further by inferring information about a generative model from the deepfakes it has produced. The study marks the first time it has been possible to identify properties of the model used to make a deepfake without prior knowledge of that specific model.

The new model parsing technique allows researchers to learn more about the model used to make a deepfake, which is very useful in real-world settings, where often the only information deepfake researchers have is the deepfake itself. The ability to tell that deepfakes were produced by the same AI model is useful for uncovering instances of coordinated disinformation or malicious attacks that rely on deepfakes. The system starts by running a deepfake image through a fingerprint estimation network, which reveals the details left behind by the generative model.
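One common way to expose such traces, offered here as an assumption rather than the exact network Facebook and MSU trained, is to isolate the high-frequency residual that remains after smoothing the image, since generative models tend to leave faint, repeated patterns there.

```python
import torch
import torch.nn.functional as F

def estimate_fingerprint(image: torch.Tensor) -> torch.Tensor:
    """Crude fingerprint estimate: the image minus a blurred copy.

    A learned fingerprint estimation network would be trained to do
    this separation; a fixed 5x5 box blur stands in for it here.
    """
    c = image.shape[1]
    kernel = torch.full((c, 1, 5, 5), 1.0 / 25.0)        # per-channel low-pass
    blurred = F.conv2d(image, kernel, padding=2, groups=c)
    return image - blurred  # high-frequency residual, the "fingerprint"

fake = torch.rand(1, 3, 128, 128)  # placeholder deepfake image
fingerprint = estimate_fingerprint(fake)
```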

A fingerprint is a unique pattern that a generative model leaves in the images it creates, and it can be used to identify where an image originated. To evaluate the approach, the team assembled a fake-image dataset of 100,000 synthetic images produced by 100 publicly available generative models. The test results show that the new approach performs better than previous systems, and the team was able to map fingerprints back to the original image content.
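Model parsing then takes the estimated fingerprint and predicts properties of the generative network itself. The sketch below is a hypothetical regression head over the fingerprint; the three predicted properties (network depth, block count, loss type) are illustrative stand-ins, not the paper's exact targets.

```python
import torch
import torch.nn as nn

class ModelParser(nn.Module):
    """Predict generative-model properties from an estimated fingerprint."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.depth_head = nn.Linear(32, 1)    # e.g. layer count (regression)
        self.blocks_head = nn.Linear(32, 1)   # e.g. block count (regression)
        self.loss_head = nn.Linear(32, 4)     # e.g. loss type (classification)

    def forward(self, fingerprint):
        h = self.encoder(fingerprint)
        return self.depth_head(h), self.blocks_head(h), self.loss_head(h)

parser = ModelParser()
depth, blocks, loss_logits = parser(torch.rand(1, 3, 128, 128))
```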
