After releasing its most powerful AI model to date, Gemini 2.5 Pro, Google published a technical report detailing its internal safety assessments. However, experts have expressed disappointment over the report's lack of detail, which makes it difficult to assess the model's potential risks. While technical reports are widely seen as essential for transparency, Google's approach has raised questions about its commitment to thorough safety testing.
The report omits key details, including any mention of Google's Frontier Safety Framework (FSF), which the company introduced last year to identify AI capabilities that could cause severe harm. This absence has troubled experts such as Peter Wildeford, co-founder of the Institute for AI Policy and Strategy, who questions the company's efforts to ensure the safety of its models. As Wildeford put it, "It's impossible to verify if Google is living up to its public commitments."
While some experts acknowledged that Google had at least released a report, they remained concerned about its timeliness. Google's safety evaluations have often been delayed, and many feel the company is not prioritizing the release of significant safety findings. Thomas Woodside, co-founder of the Secure AI Project, noted that Google's last dangerous capability tests were published months after the corresponding model was released, which raises concerns about transparency.
Adding to the frustration, Google has not yet published a safety report for Gemini 2.5 Flash, a smaller version of the model. A spokesperson said a report for Flash is "coming soon," but experts are eager to see more frequent updates, especially for models that have not yet been publicly deployed. These delays suggest that Google may be falling short of its promise to provide timely safety evaluations.
Ultimately, while Google has made progress in releasing safety reports for its AI models, the sparse details and repeated delays raise serious concerns about its commitment to comprehensive evaluation. Experts are calling for greater transparency and more timely updates to ensure the responsible development and deployment of AI technologies. As competition in AI intensifies, so does the pressure on companies like Google to prioritize safety and accountability.