Amazon Bolsters AI Security with Private Bug Bounty Program for Nova Models
Amazon has launched a new private bug bounty program specifically targeting its advanced AI models, including the foundational Nova models. This initiative aims to proactively identify and fix potential security vulnerabilities by collaborating with security researchers from leading universities. The program complements Amazon's existing public bug bounty efforts, which have already led to the discovery and remediation of numerous issues.
Key Takeaways
- Amazon introduces a private, invite-only AI bug bounty program for its Nova models.
- The program seeks to enhance AI security through collaboration with academic and professional security researchers.
- Focus areas include prompt injection, model vulnerabilities, and preventing unintentional assistance in harmful activities.
- Monetary rewards ranging from $200 to $25,000 are available for qualified participants.
Strengthening AI Defenses
Amazon's new private AI bug bounty program is designed to foster a collaborative approach to AI security. By engaging with external security experts, Amazon aims to uncover vulnerabilities that might be missed during internal testing. Rohit Prasad, SVP of Artificial General Intelligence at Amazon, emphasized the importance of community partnership in making AI models stronger and more secure, highlighting a commitment to safety and transparency.
Collaboration and Real-World Testing
The program kicked off with a live event at Amazon's Austin office, bringing together university teams and professional researchers. This event served as a platform for collaboration on real-world AI security challenges. Hudson Thrift, CISO of Amazon Stores, described security researchers as crucial "real-world validators" who rigorously test AI systems.
Program Focus Areas
Participants in the bug bounty program are encouraged to investigate several critical areas:
- Prompt injection and jailbreaks with security implications.
- Model vulnerabilities that have the potential for real-world exploitation.
- Identifying ways models might inadvertently aid in harmful activities, including security threats and Chemical, Biological, Radiological, and Nuclear (CBRN) events.
Eligibility and Future Participation
While the initial event was held in November, broader participation in the continuous private program will be available by invitation in early 2026. Security researchers and select academic teams will be eligible. For those outside the private program, Amazon's public bug bounty program remains open for reporting potential security issues in Amazon AI applications, including "Gen AI Apps."
The Importance of AI Safety
As Amazon's Nova models are integrated into various products and services, such as Alexa and AWS via Amazon Bedrock, ensuring their security is paramount. This initiative underscores Amazon's belief that AI safety progresses most effectively through the combined efforts of academia and the professional security community. By providing hands-on learning opportunities, Amazon also aims to cultivate the next generation of AI security researchers.