OpenAI framework used to create undetectable malware


Some of the most popular machine learning tools can be hijacked to create malware capable of bypassing most antivirus systems, researchers have claimed.

At the recent DEF CON event, security company Endgame revealed how it created customized malware, using a framework from the Elon Musk-backed OpenAI, that security engines were unable to detect.

Endgame's research was based on taking binaries that appeared malicious and changing a few parts of them so that the code appeared benign to antivirus engines.

The company's technical director of data science, Hyrum Anderson, highlighted that even small changes could allow a binary to bypass AV engines, and explained how machine learning models could be hijacked by hackers: "All machine learning models have blind spots. Depending on how much knowledge a hacker has, they can be convenient to exploit."

Endgame's team disguised known malicious software to evade next-gen AV. The software monitored the responses it received from the engine and, through this feedback, made many small tweaks, becoming progressively more proficient at producing malware capable of evading security sensors.

Over the course of 15 hours and 100,000 training iterations, the software attempted to learn the blind spots of the next-gen antivirus scanner. In total, Endgame succeeded in getting 16 percent of its malware samples past the security system.
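Endgame's actual system reportedly used reinforcement learning over byte-level modifications to Windows binaries; the details of its agent and the AV engine it queried are not public. The loop it describes, though, can be illustrated with a minimal, entirely hypothetical sketch: a toy "detector" scores a sample, and the evasion routine repeatedly applies small, statistics-shifting tweaks, keeping whichever tweak pushes the detection score down, until the sample slips past or a budget runs out. All names here (`toy_av_detects`, `evade`, the mutation list) are invented for illustration, not part of any real tool.

```python
# Hypothetical stand-in for a static AV engine: it "scores" a sample by
# its average byte value and flags anything above a threshold.
# A real engine is vastly more complex; this only mimics the query interface.
def av_score(sample: bytes) -> float:
    return sum(sample) / max(len(sample), 1)

def toy_av_detects(sample: bytes) -> bool:
    return av_score(sample) > 100

# Illustrative "functionality-preserving" tweaks: padding or appending
# benign-looking bytes shifts the file's statistical profile without,
# in principle, changing what the program does.
MUTATIONS = [
    lambda s: s + bytes(64),                 # append zero padding
    lambda s: s + b"This program cannot",    # append a benign-looking string
    lambda s: bytes(32) + s,                 # prepend low-value bytes
]

def evade(sample: bytes, max_tweaks: int = 50) -> tuple[bytes, bool]:
    """Query the detector, apply the tweak that lowers the score most,
    and repeat until the sample evades detection or the budget is spent."""
    for _ in range(max_tweaks):
        if not toy_av_detects(sample):
            return sample, True          # detector no longer flags it
        sample = min((m(sample) for m in MUTATIONS), key=av_score)
    return sample, not toy_av_detects(sample)

malicious = bytes([200] * 128)           # flagged under the toy scoring rule
mutated, evaded = evade(malicious)
print(evaded)  # True: a couple of padding tweaks drop the score below threshold
```

Endgame's agent learned *which* tweaks to apply rather than greedily testing all of them, but the core black-box dynamic is the same: the defender's yes/no answers become a training signal for the attacker.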

Anderson informed attendees that the malware-generating software would be made available on Endgame's GitHub page, and he encouraged others to try it out.

Published under license from ITProPortal.com, a Future plc Publication. All rights reserved.

Photo Credit: Rawpixel.com/Shutterstock


© 1998-2024 BetaNews, Inc. All Rights Reserved. Privacy Policy - Cookie Policy.