Meta's prototype chatbot, BlenderBot 3, which was released to the public last Friday, seems to have a refreshingly open and honest approach to PR.
The BBC reports that when it asked the bot what it thought of Meta CEO Mark Zuckerberg, it replied: “He did a terrible job at testifying before Congress. It makes me concerned about our country.”
It went on to say: "Our country is divided, and he didn't help with that at all. His company exploits people for money and he doesn't care. It needs to stop!"
As the BBC’s report notes, BlenderBot 3 searches the internet to inform its answers, so those answers may be based on opinions it has seen and analysed online. That might also explain a report in the Wall Street Journal that the bot told one of its journalists that Donald Trump was, and will always be, the US president and that “Facebook has a lot of fake news on it these days”.
In a blog post announcing the launch of BlenderBot 3, Meta wrote:
“Since all conversational AI chatbots are known to sometimes mimic and generate unsafe, biased or offensive remarks, we’ve conducted large-scale studies, co-organized workshops and developed new techniques to create safeguards for BlenderBot 3. Despite this work, BlenderBot can still make rude or offensive comments, which is why we are collecting feedback that will help make future chatbots better.”
It also wrote: “When the chatbot’s response is unsatisfactory, we collect feedback on it. Using this data, we can improve the model so that it doesn’t repeat its mistakes.”
It will be interesting to see whether the bot’s opinions on its own CEO will be treated as one such mistake and corrected in the days and weeks to come.