Launched on Friday, BlenderBot is a prototype of Meta's conversational AI, which, according to Facebook's parent company, can converse on nearly any topic. On the demo website, members of the public are invited to chat with the tool and share feedback with developers. The results thus far, writers at BuzzFeed and Vice have pointed out, have been rather interesting.

Asked about Mark Zuckerberg, the bot told BuzzFeed's Max Woolf that "he is a good businessman, but his business practices are not always ethical. It is funny that he has all this money and still wears the same clothes!"

The bot has also made clear that it's not a Facebook user, telling Vice's Janus Rose that it had deleted its account after learning about the company's privacy scandals. "Since deleting Facebook my life has been much better," it said.

The bot repeats material it finds on the internet, and it's very transparent about this: you can click on its responses to learn where it picked up whatever claims it is making (though it is not always specific).

This means that along with uncomfortable truths about its parent company, BlenderBot has been spouting predictable falsehoods. In conversation with Jeff Horwitz of the Wall Street Journal, it insisted Donald Trump was still president and would continue to be "even after his second term ends in 2024". (It added another dig at Meta, saying Facebook "has a lot of fake news on it these days".) Users have also recorded it making antisemitic claims.

"Good morning to everyone, especially the Facebook researchers who are going to have to rein in their Facebook-hating, election denying chatbot today." — Jeff Horwitz, August 7, 2022

BlenderBot's remarks were foreseeable based on the behavior of older chatbots such as Microsoft's Tay, which Twitter users quickly taught to be a racist conspiracy theorist, forcing the company to apologize for its "wildly inappropriate and reprehensible words and images". GPT-3, another AI system, has also delivered racist, misogynist and homophobic remarks. A South Korean startup's chatbot, designed to resemble a 20-year-old college student, had to be suspended after it rattled off racial slurs and anti-LGBTQ+ remarks.

Given the cesspool that is the internet, Meta appears to have expected similar things from its own service. Before logging on, users must acknowledge that BlenderBot is "likely to make untrue or offensive statements". As Vice notes, Meta researchers have described the AI tech behind the bot as having "a high propensity to generate toxic language and reinforce harmful stereotypes, even when provided with a relatively innocuous prompt".

"Everyone who uses Blender Bot is required to acknowledge they understand it's for research and entertainment purposes only, that it can make untrue or offensive statements, and that they agree to not intentionally trigger the bot to make offensive statements," said a Meta spokesperson in a statement.

My own conversation with BlenderBot did not veer into that territory, but it did highlight another flaw in the service: its utter inanity.

The bot began by asking me what subject I liked in school. The bot is open about which "persona" it is using in the conversation; our discussion involved a persona it described as "I like to read. I went to college." When I asked it tough questions, such as which colleges accepted AI bot students, it offered nonsensical replies ("in some ways, aren't we all chatbots?" No) and offered further reading by saying, "Here's the link", without providing a link.

It did, however, maintain its firm stance against its own creators. It also kept steering the conversation back to chatbots.