Majority are worried about the safety and accuracy of ChatGPT
As generative AI tools continue to make the news, there are growing concerns over safety and security, as well as the accuracy of the information produced.
Most people don't trust ChatGPT and have worries about its security and safety, according to a new survey from Malwarebytes. The research shows that 81 percent of respondents are concerned about security and safety risks.
The survey of Malwarebytes newsletter subscribers, carried out six months after ChatGPT's launch, shows the accuracy of information is an issue too, with 63 percent not trusting the information produced by ChatGPT. In addition, 52 percent believe that development of the platform should be paused to allow regulations to catch up.
On the question of accuracy, only 12 percent agree with the statement, 'the information produced by ChatGPT is accurate,' while 55 percent disagree. A further 51 percent of respondents doubt whether AI tools can improve internet safety.
Mark Stockley, cybersecurity evangelist at Malwarebytes says:
An AI revolution has been gathering pace for a very long time, and many specific, narrow applications have been enormously successful without stirring this kind of mistrust.
At Malwarebytes, Machine Learning and AI have been used for years to help improve efficiency, to identify malware, and to improve the overall performance of many technologies. However, public sentiment on ChatGPT is a different beast, and the uncertainty around how ChatGPT will change our lives is compounded by the mysterious ways in which it works.
You can read more about the findings on the Malwarebytes blog.