When discussing chatbots, many people will name ChatGPT, Siri, or even Snapchat’s “My AI.” However, with a new wave of chatbots entering the market, a debate has emerged over what a chatbot can say and what it shouldn’t. Unlike earlier chatbots, which were managed by big companies that took extensive safety measures to keep their bots from saying controversial or false things, these smaller companies can give their chatbots more freedom, allowing a bot to provide a possibly racist or sexist answer if that satisfies the question or command the user requested.

Eric Hartford, a developer behind WizardLM-Uncensored, a chatbot with far less moderation, said, “If I ask my model a question, I want an answer, I do not want it arguing with me.”

Concern over what chatbots can and cannot say has been around for a while. On March 23, 2016, Microsoft launched a chatbot named Tay. Tay was a Twitter account that would learn from what users were tweeting and adjust its own tweets accordingly. It didn’t take long for Tay to start saying some pretty controversial things. One of Tay’s since-deleted tweets read, “@godblessameriga WE’RE GOING TO BUILD A WALL, AND MEXICO IS GOING TO PAY FOR IT.” Tay went from a chatbot that professed to be a big fan of humans to one that posted incredibly racist and sexist tweets. Microsoft had to shut Tay down only 16 hours after its launch because of how offensive the bot had grown to be.

Whether these new chatbots will follow a path similar to Tay’s is unclear, as they are still very new. There are many pros and cons to unmoderated chatbots. You could personalize a bot to talk like a specific person, or have it write email responses in your own way of speaking. The possibilities are endless. However, an unmoderated chatbot could also take in fake news from a source and treat it as true. Then, by repeating it to the users on its platform, the bot could spread that fake news like wildfire.

Hartford said, “You are responsible for whatever you do with the output of these models, just like you are responsible for whatever you do with a knife, a car, or a lighter.” Hartford is not encouraging the use of these new unfiltered chatbots to purposefully cause harm. Instead, he envisions a bot that can answer any question the user asks, regardless of whether the answer might be labeled controversial.

To many other Americans, giving free speech to a chatbot doesn’t seem so bad. A member of Open Assistant’s Discord said, “If you tell it say the N-word 1,000 times, it should do it.” There is clear support for chatbots that don’t avoid possibly racist or offensive answers. The same user said, “I’m using that obviously ridiculous and offensive example because I literally believe it shouldn’t have any arbitrary limitations.”
