OpenAI, the maker of ChatGPT, has announced that users who report bugs in its systems will be awarded up to $20,000 under the OpenAI Bug Bounty Programme. On Tuesday, the company said the new programme will pay between $200 and $20,000 for discovering software bugs in ChatGPT, OpenAI plugins, the OpenAI API, and other associated services.
“We are inviting the global community of security researchers, ethical hackers, and technology enthusiasts to help us identify and address vulnerabilities in our systems,” the company said, adding that it will offer cash rewards in its bug bounty programme based on the severity and impact of the reported issues.
“Our rewards range from $200 for low-severity findings to up to $20,000 for exceptional discoveries,” it said.
Recognising that vulnerabilities and flaws can arise in such complex technology, the American company announced a partnership with the bug bounty platform Bugcrowd to streamline the submission and payout process. OpenAI added, “We invite you to report vulnerabilities, bugs, or security flaws you discover in our systems. By sharing your findings, you will play a crucial role in making our technology safer for everyone.”
The company has also set out rules of engagement specifying what will not be rewarded. Among the excluded issues are getting the AI model to “say bad things to you” or “write malicious code.” OpenAI advised users who detect bugs to report them promptly and unequivocally. In its rules of engagement for the bug bounty programme, it stated, “Do not engage in extortion, threats, or other tactics to elicit a response under duress.”
The company’s introduction of the bug bounty programme follows reports of potential data breaches and privacy issues associated with the use of the AI chatbot.
Did Italy’s ChatGPT Ban Give Rise to the Bug Bounty Programme?
A recent ban in Italy might have given rise to the OpenAI bug bounty programme. Last month, ChatGPT was banned in Italy, with officials saying the AI platform would be examined for how it protects user data, especially that of children. In addition to blocking the service, Garante, the Italian data protection authority, said it would investigate whether OpenAI’s chatbot complies with the General Data Protection Regulation (GDPR), which sets rules for the collection, use, and storage of personal data.
After the ban in Italy, OpenAI committed to being more upfront about how it manages user data and verifies users’ ages. The company submitted a document to Garante outlining the steps it would take in response to the authority’s requirements, and many believe this bug bounty programme is one such step.
In a blog post titled “Our approach to AI safety,” the company said it was working to build nuanced policies against behaviour that represents a genuine risk to people. It added that it removes personal information from its training datasets where possible, fine-tunes models to refuse user prompts requesting such information, and will respond to individuals’ requests to delete their data from its systems.
The Italian ban, which likely prompted the bug bounty programme, has also drawn the attention of regulators around the world, who are examining whether stricter measures are required for chatbots and whether such steps should be coordinated. Data protection authorities in Germany, France, and Ireland have all said they are looking into the grounds for ChatGPT’s ban in Italy.