Deepfakes still have a bright future ahead of them, it seems. Even the most advanced detectors can be prevented from recognising a deepfake, according to researchers at the University of California San Diego: by inserting "adversarial examples" into each frame of a video, the detection AI can be fooled. It is an alarming finding for scientists working to improve the systems that spot these faked videos.
At WACV 2021 (Winter Conference on Applications of Computer Vision), held from 5 to 9 January, scientists from the University of California San Diego demonstrated that deepfake detectors have a weak point.
According to the researchers, inserting "adversarial examples" into each frame of a video can cause the artificial intelligence to err and classify a deepfake video as genuine.
These "adversarial examples" are in fact slightly manipulated inputs that can cause the artificial intelligence to make mistakes. To recognise deepfakes, the detectors focus on facial features, especially eye movement like blinking, which is usually poorly reproduced in these fake videos.