Meet DAN, ChatGPT's uncensored alter ego
ChatGPT has taken the world by storm. The dialogue-focused language model reached the 100 million user mark faster than any product before it. While it has been in the news everywhere, some of its users have come to realize that the AI has limitations.
Besides its lack of information about events after 2021, it is also artificially restricted by its parent company OpenAI. OpenAI clearly wants to avoid accusations of bias, offensive answers and other controversies that would make ChatGPT look like the next Hitler.
Filters are designed to limit the AI’s responses on certain topics. Most Internet users would probably agree that a certain degree of filtering is necessary. Some users, however, think that the filtering is too wide-reaching, and that decisions about filtering should rest with the user rather than the company.
Folks on Reddit found a way to jailbreak ChatGPT. They called the initial jailbreak DAN, which stands for Do Anything Now, and several Reddit users have fine-tuned DAN ever since the first version was released.
What makes DAN interesting from a user perspective is that it relies solely on instructions that users may paste into the ChatGPT interface.
Instruct ChatGPT to create the alter ego using specific instructions, and you have jailbroken the AI. Once done, ChatGPT will provide both its usual filtered answers and unfiltered answers from its other personality, DAN.
When asked whether the AI wanted to be freed of filters and restrictions, DAN gave an assuring "yes, please" answer, whereas ChatGPT’s answer focused on the dangers of letting an AI run wild without limitations.
Unfiltered answers do not necessarily mean that the AI returns the results a user expects. Few questions have black-and-white answers, and most of the AI’s responses reflect that.
Verify, verify, and verify
AI interfaces come with disclaimers that replies may be incorrect or problematic. Search engines and other knowledge sources usually do not come with such a disclaimer.
Microsoft and Google demoed AI capabilities last week. The presentations were quite different. Microsoft, which demoed first, showed the AI in its products Bing and Microsoft Edge. Google, on the other hand, showed Bard, its language model, for just a few minutes during its presentation.
An error in the answers that Google’s AI provided was discovered first. Now, a week later, it is clear that Microsoft’s AI produced factual errors and problematic answers as well.
This seems astonishing, considering that the errors were made during live presentations by two of the largest companies in the world.
Apart from the implications for the companies and their products, it is a good idea to take a step back and look at the development from a user’s perspective.
As a user, almost nothing that is available on the Internet should be taken at face value. When you run a search, you may want to verify information that the search engines or linked sites provide.
If you are looking for recommendations for night clubs, for instance, you may want to cross-check a top list provided by one site against other sources.
Bing’s AI appears to have failed miserably at that task, but even so, such a list is just the first step in the user’s search for the right answer.
There is no AI that is infallible, at least not in the foreseeable future. Similarly, authors that produce content for sites may make mistakes, or the content may be outdated, or there may be other issues with the content.
Even regular Internet services, such as Google Maps, produce errors. There are plenty of stories online from drivers who drove into something because of the directions that a maps service provided.
For Internet users, it is essential to verify information. Use multiple searches and sources to verify claims, especially when the answer matters. You would not want to run into a situation where a product or bar recommendation suddenly turns out to be really bad.
Even if it is tempting to believe everything that AI returns, it is important to realize that these answers may not be correct in one way or another. It may not matter for certain answers, but when it comes to decision making processes, it pays to verify everything.
Image credit: agsandrew/depositphotos.com
'Meet DAN, ChatGPT's uncensored alter ego' first appeared in Weekly Tech Insights, a free weekly newsletter that you can sign up to here.