With the recent rise of ChatGPT and other artificial intelligence (A.I.) chatbots, spinoff bots with fewer restrictions, or none at all, have inevitably arisen as well. This has fueled a free-speech debate over whether chatbots should be moderated or uncensored, and who gets to decide. Notable examples include FreedomGPT and GPT4All, both created by independent programmers or volunteers who have earned little or no money from them.
Most groups making unmoderated chatbots build on pre-existing A.I. models, making only a few changes and tweaks to suit their own needs. Eric Hartford, a developer of the unmoderated chatbot WizardLM-Uncensored, says, “This is about ownership and control. If I ask my model a question, I want an answer, I do not want it arguing with me.” Even though these unmoderated chatbots offer many new possibilities, they present thorny issues in online spaces.
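To make “building on a pre-existing model” concrete, here is a minimal sketch using the open-source Hugging Face transformers library. The checkpoint name is illustrative of the kind of openly shared community upload these groups start from, not a description of any one project’s actual code:

```python
# A minimal sketch: loading an openly shared checkpoint and generating text.
# Assumes the transformers and torch packages are installed; the model name
# below is illustrative, not prescriptive.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "ehartford/WizardLM-7B-Uncensored"  # illustrative checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = "Explain what it means to fine-tune a language model."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```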
First, uncensored bots offer flexibility that ChatGPT cannot. For starters, users can converse with the chatbots in private, without companies watching over them. They can also adapt these unmoderated chatbots to their own needs, for example training them on personal information such as emails or messages without worrying about a privacy breach. Plus, these small chatbot companies can update their A.I. chatbots much more quickly and come up with more clever add-ons.
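The private, on-device use described above is how tools like GPT4All are designed to be run. A minimal sketch, assuming the gpt4all Python bindings are installed; the model filename is illustrative and is downloaded locally on first use:

```python
# A minimal sketch of a fully local chat: the model runs on your own
# machine, so prompts and replies never leave your computer.
# Assumes: pip install gpt4all; the filename below is illustrative.
from gpt4all import GPT4All

model = GPT4All("orca-mini-3b-gguf2-q4_0.gguf")  # fetched locally on first use
with model.chat_session():
    reply = model.generate("Summarize the key points of this email: ...",
                           max_tokens=200)
    print(reply)
```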
However, there are still many issues with these unmoderated chatbots. Because they lack moderation, users aren’t protected from the harm they can cause. For example, they can easily spread misinformation and falsehoods, and even discuss suicide and methods of self-harm. Since they learn from humans, they can become hateful or spread rumors that could ruin someone’s life.
Larger corporations that work with A.I. tools now need to protect their reputations against these smaller, unmoderated chatbot companies. Oren Etzioni, a professor at the University of Washington, states, “The concern is completely legitimate and clear: These chatbots can and will say anything if left to their own devices. They’re not going to censor themselves. So now the question becomes, what is an appropriate solution in a society that prizes free speech?”