An earlier version of Mark Zuckerberg's 6,000-word manifesto for Facebook revealed his belief that artificial intelligence could one day be used to monitor private messages for terrorists scheming an attack.
The text eventually published by Zuckerberg on Thursday did detail how Facebook is using AI today to flag terrorist propaganda in public posts. However, as spotted by Mashable, Zuckerberg cut a key line outlining his vision that AI could one day prevent terrorist attacks and online bullying by monitoring private channels.
In a version of his manifesto given to the media before it was posted on Facebook, Zuckerberg wrote: "The long-term promise of AI is that in addition to identifying risks more quickly and accurately than would have already happened, it may also identify risks that nobody would have flagged at all, including terrorists planning attacks using private channels, people bullying someone too afraid to report it themselves, and other issues both local and global. It will take many years to develop these systems."
Facebook confirmed to Mashable that the statement in the earlier version had been revised to: "Looking ahead, one of our greatest opportunities to keep people safe is building artificial intelligence to understand more quickly and accurately what is happening across our community."
One possible explanation for removing the passage is that it conflicts with a later passage outlining how Facebook's safety measures needn't compromise privacy.
"As we discuss keeping our community safe, it is important to emphasize that part of keeping people safe is protecting individual security and liberty. We are strong advocates of encryption and have built it into the largest messaging platforms in the world, WhatsApp and Messenger. Keeping our community safe does not require compromising privacy," Zuckerberg wrote.
The removed line raises questions about Zuckerberg's long-term vision for user privacy, but he may also have been thinking about how Facebook already uses analytics to weed out spam from end-to-end encrypted messages on WhatsApp.
Zuckerberg points out that since bringing end-to-end encryption to WhatsApp, Facebook has cut spam and malicious content by more than 75 percent. But how can WhatsApp determine that a message is spam when end-to-end encryption means no one but the sender and receiver is meant to be able to read it?
A WhatsApp engineer said earlier this month that WhatsApp looks for spammer behavior in signals from data that isn't encrypted. As reported by NetworkWorld, these include reputation scores for the internet and mobile providers carrying the spammer's messages, the sender's location, and whether certain phone numbers have been linked to spam previously.
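In rough terms, this kind of metadata-only filtering could look like the sketch below: the message body stays encrypted and is never inspected, and only unencrypted signals of the sort named in the report (provider reputation, sender location, phone-number history) feed a spam score. All names, weights, and thresholds here are illustrative assumptions, not WhatsApp's actual system.

```python
from dataclasses import dataclass


@dataclass
class MessageMetadata:
    """Unencrypted signals available without reading the message body.

    Field names and types are hypothetical, chosen to mirror the
    signals described in the article.
    """
    provider_reputation: float        # 0.0 (untrusted) .. 1.0 (trusted) carrier/ISP score
    sender_in_unusual_location: bool  # location inconsistent with the account's history
    number_previously_flagged: bool   # phone number previously linked to spam


def spam_score(meta: MessageMetadata) -> float:
    """Combine metadata signals into a 0..1 spam likelihood (illustrative weights)."""
    score = (1.0 - meta.provider_reputation) * 0.5
    if meta.sender_in_unusual_location:
        score += 0.2
    if meta.number_previously_flagged:
        score += 0.3
    return min(score, 1.0)


def is_spam(meta: MessageMetadata, threshold: float = 0.6) -> bool:
    """Flag the message as spam when the combined score crosses a threshold."""
    return spam_score(meta) >= threshold
```

The point of the design is that every input to `spam_score` is routing- or account-level metadata, so the filter can run server-side even though the message payload is unreadable to the server.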