Experts Criticize Google's Gemini 2.5 Pro AI Safety Report for Lacking Details

On Thursday, April 17, 2025, weeks after launching its most powerful AI model yet, Gemini 2.5 Pro, Google published a technical report detailing the results of its internal safety evaluations. However, experts have expressed disappointment, stating that the report is light on crucial details, making it difficult to ascertain the potential risks posed by the model.
The Importance of Technical Reports in AI
Technical reports are vital for the AI community, providing valuable, and sometimes unflattering, information about AI models that companies may not widely advertise. These reports are generally seen as good-faith efforts to support independent research and safety evaluations.
Google's Approach to AI Safety Reporting
Google employs a distinct safety reporting strategy compared to some of its AI competitors. The company typically publishes technical reports only after a model has progressed beyond the "experimental" stage. Furthermore, findings from "dangerous capability" evaluations are reserved for a separate audit, rather than being included in these technical write-ups.
Expert Concerns Regarding Gemini 2.5 Pro Report
Several experts interviewed by TechCrunch voiced their dissatisfaction with the sparsity of the Gemini 2.5 Pro report. A key point of contention is the report's limited mention of Google's proposed Frontier Safety Framework (FSF), introduced the previous year to identify future AI capabilities that could cause "severe harm."
Peter Wildeford, co-founder of the Institute for AI Policy and Strategy, said the report "is very sparse, contains minimal information, and came out weeks after the model was already made available to the public." He emphasized that this lack of detail makes it difficult to verify Google's public commitments and to assess the safety and security of its models.
Thomas Woodside, co-founder of the Secure AI Project, welcomed the report's release but questioned Google's commitment to publishing timely supplemental safety evaluations. He noted that Google last published dangerous capability test results in June 2024, for a model announced in February of that year.
Lack of Transparency for Gemini 2.5 Flash
Adding to the concerns, Google has not yet released a report for Gemini 2.5 Flash, a smaller and more efficient model announced the previous week. A spokesperson indicated that a report for Flash is "coming soon."
Industry-Wide Transparency Issues
Google is not alone in facing accusations of underdelivering on transparency. Meta's safety evaluation for its new Llama 4 open models was similarly described as "skimpy," and OpenAI opted not to publish a report at all for its GPT-4.1 series of coding-focused models.
Google's Commitments to Regulators
These transparency issues loom large for Google, especially given its assurances to regulators. Two years prior, Google informed the U.S. government that it would publish safety reports for all "significant" public AI models "within scope." Similar commitments were made to other countries, pledging "public transparency" around AI products.
Kevin Bankston, a senior advisor on AI governance at the Center for Democracy and Technology, characterized the trend of sporadic and vague reports as a "race to the bottom" on AI safety. He highlighted that competing labs are reducing safety testing times, making Google's "meager documentation" for its top AI model a "troubling story."
Google's Response
In response, Google has stated that, while not detailed in its technical reports, it conducts safety testing and "adversarial red teaming" on its models prior to release.
An April 22 update to the original article adjusted its language describing the technical report's reference to Google's FSF.
Key Takeaways:
- Lack of Detail: Experts criticize Google's Gemini 2.5 Pro safety report for its sparsity and lack of crucial details.
- Transparency Concerns: The report's release weeks after the model's availability raises questions about timely transparency.
- FSF Omission: The limited mention of Google's Frontier Safety Framework (FSF) is a point of concern.
- Industry Trend: Similar transparency issues are observed with other major AI companies like Meta and OpenAI.
- Regulatory Promises: Google's past commitments to regulators regarding AI safety reporting are being scrutinized.
- "Race to the Bottom": The trend towards less detailed reports is seen as a decline in AI safety and transparency standards.
- Google's Defense: Google maintains that it conducts rigorous safety testing and red teaming before model releases.
Original article available at: https://techcrunch.com/2025/04/17/googles-latest-ai-model-report-lacks-key-safety-details-experts-say/