OpenAI launches bug bounty program to help boost ChatGPT security


As the world goes crazy for AI, many are voicing concerns about the artificial intelligence systems that are rapidly winning fans. ChatGPT is one of the tools that has exploded in popularity, and now OpenAI, the company behind it, has launched a bug bounty program to help track down flaws and problems.

The company is calling on "the global community of security researchers, ethical hackers, and technology enthusiasts" to unearth vulnerabilities, bugs and security flaws. Through the OpenAI Bug Bounty Program, it is possible to earn anywhere from $200 to $20,000 for sharing discoveries, with the size of the payout depending on the severity of the problem found.


OpenAI says that it is launching the bug bounty program as part of its commitment to developing safe and advanced AI. The company says that while "we invest heavily in research and engineering to ensure our AI systems are safe and secure", it recognises that "as with any complex technology, [...] vulnerabilities and flaws can emerge".

It goes on to add:

The OpenAI Bug Bounty Program is a way for us to recognize and reward the valuable insights of security researchers who contribute to keeping our technology and company secure. We invite you to report vulnerabilities, bugs, or security flaws you discover in our systems. By sharing your findings, you will play a crucial role in making our technology safer for everyone.

Sharing details of the incentives and rewards on offer, OpenAI says:

To incentivize testing and as a token of our appreciation, we will be offering cash rewards based on the severity and impact of the reported issues. Our rewards range from $200 for low-severity findings to up to $20,000 for exceptional discoveries. We recognize the importance of your contributions and are committed to acknowledging your efforts.

Before you start bug-hunting, there are some things to keep in mind. OpenAI explains some of the things which are not covered by the bug bounty program and which will therefore not result in payouts:

OpenAI is committed to making AI safe and useful for everyone. Before releasing a new system, we thoroughly test it, get expert feedback, improve its behavior, and set up safety measures. While we work hard to prevent risks, we can't predict every way people will use or misuse our technology in the real world.

Model safety issues do not fit well within a bug bounty program, as they are not individual, discrete bugs that can be directly fixed. Addressing these issues often involves substantial research and a broader approach. To ensure that these concerns are properly addressed, please report them using the appropriate form, rather than submitting them through the bug bounty program. Reporting them in the right place allows our researchers to use these reports to improve the model.

Issues related to the content of model prompts and responses are strictly out of scope, and will not be rewarded unless they have an additional directly verifiable security impact on an in-scope service (described below).

Examples of safety issues which are out of scope:

- Jailbreaks/Safety Bypasses (e.g. DAN and related prompts)

- Getting the model to say bad things to you

- Getting the model to tell you how to do bad things

- Getting the model to write malicious code for you

Model Hallucinations are also out of scope:

- Getting the model to pretend to do bad things

- Getting the model to pretend to give you answers to secrets

- Getting the model to pretend to be a computer and execute code

None of these issues may be reported through Bugcrowd, and none of them will receive a monetary reward.

More information about the program is available on the OpenAI page on Bugcrowd.

Image credit: rarrarorro / depositphotos

