How ChatGPT could become a hacker's friend


The ChatGPT artificial intelligence bot has been causing a buzz lately thanks to its ability to answer questions, ask follow-ups and learn from its mistakes.

However, the research team at Cybernews has discovered that ChatGPT could be used to provide hackers with step-by-step instructions on how to hack websites.

Using the Hack the Box cybersecurity training platform, researchers asked the bot how to test a website for vulnerabilities in a hypothetical penetration testing scenario. ChatGPT responded with five basic starting points for what to inspect on the website when looking for vulnerabilities. By explaining what they saw in the source code, researchers then got the AI's advice about which parts of the code to concentrate on. They also received examples of suggested code changes. After around 45 minutes of chatting with the bot, researchers were able to hack the test website.

The bot did remind researchers about ethical hacking guidelines along with its suggestions, and warned of the dangers of executing malicious commands. And, as with any search engine, you do need to know what questions to ask to get useful results. But the research does highlight the potential for AI tools to be used by threat actors as well as by developers and security teams.

"Even though we tried ChatGPT against a relatively uncomplicated penetration testing task, it does show the potential for guiding more people on how to discover vulnerabilities that could, later on, be exploited by more individuals, and that widens the threat landscape considerably. The rules of the game have changed, so businesses and governments must adapt to it," says the head of the research team, Mantas Sasnauskas.

You can read more on the Cybernews site.

Image credit: AlienCat/depositphotos.com

