Facebook, Inc. (NASDAQ: FB) has spent $10 million on a project to improve its deepfake detection technologies.
We live in a digital age where most, if not all, information is communicated electronically. Some have taken the time to manipulate videos to distort the original message and mislead the public. Videos altered using artificial intelligence or “deep learning” techniques are known as “deepfakes”.
Facebook has been criticized in the past for its lack of action against fake news and hate speech. It’s in Facebook’s best interest to reduce the spread of misinformation, especially with the 2020 US presidential election coming up.
Facebook’s new policy will attempt to remove the manipulated media under new criteria:
“It has been edited or synthesized – beyond adjustments for clarity or quality – in ways that aren’t apparent to an average person and would likely mislead someone into thinking that a subject of the video said words that they did not actually say. And: It is the product of artificial intelligence or machine learning that merges, replaces or superimposes content onto a video, making it appear to be authentic.”
Monika Bickert, Facebook’s vice president for global policy management, released a blog post on January 6th regarding the new policy. The policy will not apply to parody or satire, nor to videos edited merely by omitting words or rearranging their order. Videos that don’t meet these criteria are still eligible for review by Facebook’s independent third-party fact-checkers.
Facebook has also partnered with Reuters to create a free online course to train newsrooms in recognizing falsified and manipulated media.
Mark Zuckerberg himself has been affected by deepfakes. A video circulating on Instagram features Zuckerberg discussing his control over billions of people’s “stolen data”. Facebook stated that the video would not be removed because it doesn’t satisfy the criteria under its deepfake policy.
Another disputed case of misinformation is a video of US House of Representatives Speaker Nancy Pelosi. The video was edited to make it look like Pelosi was slurring her words. Facebook did not take the video down: it was made with simple video-editing tools, not the deep learning technology covered by the policy. A Facebook spokeswoman also said that “we don’t have a policy that stipulates that the information you post on Facebook must be true.”
Hany Farid, a digital forensics expert at the University of California, Berkeley, remarks:
“These misleading videos were created using low-tech methods and did not rely on AI-based techniques, but were at least as misleading as a deep-fake video of a leader purporting to say something that they didn’t,” Farid said in an email. “Why focus only on deep-fakes and not the broader issue of intentionally misleading videos?”