Facebook is strengthening its policy on misleading manipulated videos identified as deepfakes. In a blog post, the company said that, going forward, it will remove manipulated media that has been edited or synthesized – beyond adjustments for clarity or quality – in ways that aren't apparent to an average person and would likely mislead someone into thinking a subject of the video said words they did not actually say, or that is the product of artificial intelligence or machine learning that merges, replaces or superimposes content onto a video, making it appear authentic.
The company added that the policy does not extend to content that is parody or satire, or video that has been edited solely to omit or change the order of words.
Facebook said the move had been made as a result of partnerships and discussions with more than 50 global experts with technical, policy, media, legal, civic and academic backgrounds. These discussions, it said, have informed its policy development and improved the science of detecting manipulated media.
Last September, Facebook launched the Deepfake Detection Challenge, designed to encourage people from all over the world to produce research and open-source tools for detecting deepfakes. The project, supported by $10 million in grants, includes a cross-sector coalition of organizations including the Partnership on AI, Cornell Tech, the University of California, Berkeley, MIT, WITNESS, Microsoft, the BBC and AWS.