ChatGPT's role in the fake news phenomenon

Since its explosion onto the scene in November 2022, ChatGPT has been hard to ignore. With the ability to answer questions, solve problems, and create content -- to name just a few of its competencies -- the artificial intelligence (AI) chatbot can be hugely beneficial to businesses and employees. Whether it is used to avoid trawling the internet for the answer to a question, to write a blog post, or simply to inspire an idea for a new product, it can certainly help cut costs and save time and resources.

Yet the use of ChatGPT has caused a lot of debate and controversy. One of the main areas of concern is employment: if AI can do the same job as humans, if not a better one, for a fraction of the cost, are business leaders likely to replace people with this technology? Goldman Sachs has predicted that as many as 300 million full-time jobs could be diminished or lost to AI and automation technology. However, it is not as straightforward as some of the most pessimistic outlooks make it seem.

The generation of fake news

Both the genius and the downfall of ChatGPT lie in the fact that it does not work like a search engine. Whilst both start from vast collections of information, a search engine retrieves sources and relays them exactly as it finds them, whereas ChatGPT creates an answer by making a series of guesses -- predicting, piece by piece, what is most likely to come next -- based on the information it has drawn on. The answers it generates are clear and appear to be accurate -- in fact, some sources say it reaches an accuracy rate of 99 percent -- so how can we be sure they are correct? After all, if you have asked ChatGPT, it is most likely because you do not know the answer yourself.

The fact is that information can get misinterpreted or miscommunicated in the series of guesses that ChatGPT makes to create a conversational answer to your question. ChatGPT is not yet at a stage where it can understand the complexity of language 100 percent of the time. Therefore, we are at risk of falling for mistakes, such as the 'Let’s eat Grandma!' vs. 'Let’s eat, Grandma!' debacle.
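To make the "series of guesses" concrete, here is a minimal sketch in Python. It is a deliberately toy illustration, not ChatGPT's actual model: the NEXT_WORD table, its vocabulary, and its probabilities are all invented for this example. It simply picks each next word by how likely it is to follow the previous ones, which is why the output always reads fluently whether or not the intended comma, and the intended meaning, survives.

```python
import random

# Toy next-word probabilities, invented purely for illustration -- a real
# large language model learns billions of such patterns from its training text.
NEXT_WORD = {
    ("let's", "eat"):   {"grandma": 0.55, ",": 0.35, "out": 0.10},
    ("eat", ","):       {"grandma": 0.90, "everyone": 0.10},
    (",", "grandma"):   {"!": 1.0},
    ("eat", "grandma"): {"!": 1.0},
}

def generate(words, max_steps=3):
    """Extend the prompt one 'guess' at a time, always choosing by probability."""
    words = list(words)
    for _ in range(max_steps):
        options = NEXT_WORD.get(tuple(words[-2:]))
        if not options:
            break
        # The model predicts what is *likely* to come next,
        # not what is true or what the user actually meant.
        words.append(random.choices(list(options), weights=list(options.values()))[0])
    return " ".join(words)

print(generate(["let's", "eat"]))  # e.g. "let's eat grandma !" or "let's eat , grandma !"
```

Both continuations read as perfectly fluent English; nothing in the prediction step checks whether the sentence is true or says what was intended, and that is exactly where misinterpretation creeps in.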

Whilst I hope that most users would pick up on mistakes that involve eating grandmas, other errors may not be as obvious -- especially if you are less knowledgeable about a topic. The answer sounds correct and is therefore taken as fact. If it is then published online or used in a piece of academic work, it becomes cemented as fact and contributes to the fake news that currently makes up 62 percent of all internet information.

This large proportion of fake news is also a concern when it comes to the sources ChatGPT draws on to answer a question. If over half of the internet's information is false, it is highly likely that ChatGPT will have consumed some of those false sources in the data it draws on. Therefore, even if it interprets the information correctly, it can still provide a false answer.

In addition, with over 100 million users across the world, the likelihood that many of them are asking the same questions is high. Whilst ChatGPT does not give the exact same answer every time, drawing on the same underlying data produces similar answers, so if these are inaccurate, the same wrong information could end up on the internet many times over. Once a statistic or statement has been repeated several times, it becomes trusted by others and is considered fact rather than fiction.

So, what does this mean?

Some businesses, and even whole countries, take these concerns as evidence that there should be a blanket ban on the use of ChatGPT. This is not necessarily the best course of action. With some additional expertise applied to double check the information it has provided, ChatGPT can have huge benefits for businesses, especially in the current economic climate where everyone is having to do more with less. ChatGPT can take some of the burden off by coming up with creative ideas or taking on the manual, repetitive tasks that take up so much time.

This is also not to say that ChatGPT will take over everyone's jobs. Until the accuracy of ChatGPT improves, at least, our jobs are safe. Consider it more of an 'AI coworker' that can take on some of your workload and assist you in your job, but ultimately your expertise, knowledge, and reasoning will always be valuable and needed to give the final sign-off. Even once AI has been trained to a higher level, a chatbot or other automation technology that cannot reason is unlikely to complete high-level tasks well enough to justify replacing humans completely.

So, to whatever extent ChatGPT or similar applications are introduced into your work, or your life in general, proceed with caution, and always apply your own expertise or seek it elsewhere to avoid being led astray by the chatbots.

Jason Gerrard is Senior Director of International Systems Engineering at Commvault.
