Base AI Policy on Evidence, Not Existential Angst

This article argues for a pragmatic and evidence-based approach to Artificial Intelligence (AI) regulation, moving away from fear-driven, existential anxieties. The author, Martin Casado, emphasizes the need for a simpler, more reasonable framework for AI policy, especially in light of numerous AI-focused bills circulating in U.S. statehouses and the evolving federal approach.
The Problem with Current AI Policy Discourse
The current discourse surrounding AI policy is characterized by a "free-for-all proxy battle" where various anxieties about AI and technology are aired. This often devolves into polarized debates involving organizations focused on existential risk, industry groups concerned with jobs and copyright, and policymakers trying to regulate AI effectively. This cacophony can overshadow legitimate concerns about regulatory overreach, potential for regulatory capture, and the negative impact on America's economy, innovation, and global competitiveness.
The Proposed Solution: Focus on Marginal Risk
Casado proposes a straightforward and reasonable policy position: focus on marginal risk. Marginal risk refers to the new types of risks introduced by a technology that necessitate a fundamental shift in policy. The author draws a parallel to the internet, where new threats like computer worms required a change in national security posture to address vulnerability asymmetry.
By concentrating on marginal risks, policymakers can avoid "spurious regulation" and improve security by addressing the most critical issues, rather than wasting resources on ineffective policies. This approach aligns with the idea that broader policies for governing information systems have evolved over decades, with each new technological epoch raising concerns that the industry must address.
Avoiding Ineffective Policies
The article highlights past policy failures, such as attempts to regulate mathematics or to mandate backdoors in phones and cryptographic systems. These approaches, it argues, are unlikely to succeed with AI absent a material change in marginal risk. Policies with limited but positive outcomes, such as export restrictions on computer chips, are also noted.
AI Policy Based on Reality and Evidence
Casado stresses that AI policy should be informed by lessons from previous technological eras and grounded in reality. He argues that significant policy departures should come only after the marginal risks of AI relative to existing computer systems are understood. For now, the discussion of AI's marginal risks remains largely a matter of research questions and hypotheticals, as a respected collection of AI experts has noted.
Addressing Concerns and Real-World Impact
The author critiques the disconnect between AI fears and observed reality, citing examples like OpenAI's GPT-2 model, which was initially deemed too dangerous to release but has since been surpassed by far more powerful, widely used models with minimal negative impact. Similarly, fears that deepfakes would significantly influence the U.S. presidential election did not materialize.
Instead, Casado points to the tangible benefits of AI, such as safer autonomous vehicles, more accurate medical diagnoses, and advancements in creative endeavors and biotechnology. He suggests that the most beneficial policy for human welfare might be aggressive investment in AI rather than encumbering it with restrictive regulations.
Conclusion
Until a clear understanding of AI's marginal risks is established through evidence-based research, the article advocates for recognizing AI's immense potential for positive global impact. The author concludes that AI is already delivering on its promise and encourages a balanced approach that fosters innovation while addressing genuine risks.
This article originally appeared on Fortune.com.
Contributor:
- Martin Casado is a general partner at Andreessen Horowitz, leading the firm's infrastructure practice. He is a prominent voice in the venture capital and technology space, focusing on the future of AI and its implications.
Original article available at: https://a16z.com/base-ai-policy-on-evidence-not-existential-angst/