Meta Patches Bug Exposing User AI Prompts and Responses

Meta has patched a security vulnerability in its AI chatbot service that could have allowed unauthorized users to access other users' private prompts and AI-generated responses. The discovery and subsequent fix highlight the ongoing challenge of securing AI products amid rapid development and deployment.
Discovery of the Vulnerability
The bug was identified by Sandeep Hodkasia, founder of the security testing firm AppSecure, who privately disclosed it to Meta on December 26, 2024. Meta awarded him a $10,000 bug bounty for identifying and reporting the flaw.
How the Bug Worked
Hodkasia's investigation into Meta AI revealed that when users edited their prompts to regenerate text or images, Meta's backend servers assigned a unique numeric identifier to each prompt and its corresponding AI-generated response. By analyzing the network traffic generated while editing a prompt, Hodkasia found he could change that identifier in his requests, and the servers would return prompts and responses belonging to entirely different users.
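This is a well-known pattern often called an insecure direct object reference (IDOR). Meta's actual endpoints are not public, so the Python sketch below is a hypothetical reconstruction: the host, URL path, and identifier scheme are assumptions used only to illustrate how changing a sequential ID in an otherwise authenticated request could return another user's data.

```python
import requests

# Hypothetical reconstruction of the probing Hodkasia described. The host,
# endpoint, and identifier layout below are assumptions for illustration;
# Meta's actual API is not public.
BASE_URL = "https://ai-service.example/api/prompts"

session = requests.Session()
session.headers["Authorization"] = "Bearer <the-tester's-own-valid-token>"

def fetch_prompt(prompt_id: int) -> dict | None:
    """Request the prompt/response pair stored under a numeric ID."""
    resp = session.get(f"{BASE_URL}/{prompt_id}")
    return resp.json() if resp.status_code == 200 else None

# Because the identifiers were sequential and "easily guessable," simply
# incrementing the number returned records belonging to other users; the
# server never checked whether the requester owned them.
for candidate_id in range(100000, 100010):
    record = fetch_prompt(candidate_id)
    if record is not None:
        print(candidate_id, record)
```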
In essence, Meta's servers were not verifying that the user requesting a given prompt-and-response pair was authorized to view it. Hodkasia also noted that the identifiers were "easily guessable," meaning a malicious actor could have enumerated them with automated tools and scraped users' original prompts at scale. A sketch of the server-side defenses follows.
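Fixing this class of bug typically involves two server-side changes: verify ownership before returning a record, and use random identifiers instead of sequential ones. The Flask sketch below is a minimal illustration under those assumptions; it is not Meta's actual fix or stack.

```python
import uuid

from flask import Flask, abort, jsonify, session

app = Flask(__name__)
app.secret_key = "replace-with-a-real-secret-key"

# In-memory stand-in for a prompt store. Keys are random UUIDs rather than
# sequential integers, so identifiers are not guessable in the first place.
PROMPTS: dict[str, dict] = {}

def save_prompt(owner_id: str, prompt: str, response: str) -> str:
    """Store a prompt/response pair under a random, non-sequential ID."""
    prompt_id = uuid.uuid4().hex
    PROMPTS[prompt_id] = {
        "owner_id": owner_id,
        "prompt": prompt,
        "response": response,
    }
    return prompt_id

@app.route("/api/prompts/<prompt_id>")
def get_prompt(prompt_id: str):
    record = PROMPTS.get(prompt_id)
    if record is None:
        abort(404)
    # The authorization check that was reportedly missing: confirm the
    # logged-in user actually owns this record before returning it.
    if record["owner_id"] != session.get("user_id"):
        abort(403)
    return jsonify({"prompt": record["prompt"], "response": record["response"]})
```

Either change alone narrows the attack surface; together, an attacker can neither guess a valid identifier nor retrieve someone else's record with a stolen one.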
Meta's Response and Fix
Meta confirmed the existence of the bug and its subsequent resolution. A spokesperson for Meta, Ryan Daniels, stated that the company fixed the issue in January 2025. Crucially, Meta reported finding "no evidence of abuse" related to the vulnerability and acknowledged rewarding the researcher who discovered it.
Broader Context: AI Security and Privacy Risks
This incident comes as major technology companies race to launch and expand their AI offerings, and those advances frequently bring significant security and privacy concerns with them. The vulnerability underscores how difficult it is to protect user data in the rapidly evolving field of artificial intelligence.
Meta AI's standalone app, launched earlier in 2025 to compete with platforms like ChatGPT, has faced its own privacy problems. Previously, some users inadvertently published conversations with the chatbot that they believed were private, underscoring the broader need for robust privacy controls and user education in AI applications.
Implications for the AI Industry
The Meta AI bug serves as a stark reminder of the critical importance of security testing and responsible disclosure in the AI domain. As AI technologies become more integrated into daily life, ensuring the privacy and security of user data is paramount. The incident highlights the need for continuous vigilance, rigorous security audits, and prompt remediation of vulnerabilities to maintain user trust and prevent potential misuse of sensitive information.
Key Takeaways:
- Vulnerability: A bug in Meta AI allowed users to view others' private prompts and AI-generated content.
- Discovery: Found by security researcher Sandeep Hodkasia.
- Reward: Hodkasia received a $10,000 bug bounty from Meta.
- Fix: Meta deployed a patch in January 2025.
- Exploitation: Meta found no evidence of malicious exploitation.
- Mechanism: The bug involved easily guessable unique identifiers for prompts and responses.
- Context: Highlights ongoing AI security and privacy challenges.
This event emphasizes the ongoing need for robust security measures and transparent practices as AI technology continues to advance.
Original article available at: https://techcrunch.com/2025/07/15/meta-fixes-bug-that-could-leak-users-ai-prompts-and-generated-content/