Google Removes AI Bans on Weapons and Surveillance Tech in Principles Update
Google has significantly altered its AI Principles, a move that has drawn considerable attention and concern. The company has removed specific commitments that previously stated it would not "design or deploy" AI for use in weapons or surveillance technologies. This change, first reported by The Washington Post, marks a substantial shift from its original 2018 guidelines.
Key Changes to AI Principles:
- Removal of "Applications We Will Not Pursue": The section explicitly barring AI for weapons and surveillance has been removed from the current version of Google's AI Principles.
- Introduction of "Responsible Development and Deployment": The updated principles now emphasize a broader commitment to "appropriate human oversight, due diligence, and feedback mechanisms to align with user goals, social responsibility, and widely accepted principles of international law and human rights."
Historical Context and Previous Stances:
Previously, Google had made more specific commitments:
- Weapons: The company stated it would not design AI for use in "weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people."
- Surveillance: Google had pledged not to develop surveillance technology that violates "internationally accepted norms."
These earlier principles were established in the wake of the 2018 controversy over Project Maven, a Pentagon contract in which Google's AI was used to analyze drone footage. The project sparked significant employee backlash, including resignations and petitions, which prompted Google to publish its initial AI Principles.
Google's Rationale for the Change:
In a blog post, Google DeepMind CEO Demis Hassabis and James Manyika, Google's Senior Vice President of Research, Labs, Technology and Society, explained that AI's evolution into a "general-purpose technology" necessitated a policy update.
They stated, "We believe democracies should lead in AI development, guided by core values like freedom, equality, and respect for human rights. And we believe that companies, governments, and organizations sharing these values should work together to create AI that protects people, promotes global growth, and supports national security."
They further elaborated that Google will "continue to focus on AI research and applications that align with our mission, our scientific focus, and our areas of expertise, and stay consistent with widely accepted principles of international law and human rights, always evaluating specific work by carefully assessing whether the benefits substantially outweigh potential risks."
Evolution of Google's Military Contracts:
Despite the initial stance against military applications, Google's involvement in military contracts has evolved:
- 2021: The company reportedly made an "aggressive" bid for the Pentagon's Joint Warfighting Cloud Capability contract.
- Early 2025: Reports indicated that Google employees had collaborated with Israel's Defense Ministry to expand the government's use of AI tools.
Conclusion:
The revision of Google's AI Principles signifies a notable shift in the company's approach to AI development, particularly concerning its application in sensitive areas like defense and surveillance. The broader language aims to balance innovation with responsibility, though the specific implications of this change remain to be seen.
Original article available at: https://www.engadget.com/ai/google-now-thinks-its-ok-to-use-ai-for-weapons-and-surveillance-224824373.html