Former OpenAI Engineer Reveals Culture, Chaos, and AI Safety Focus

A Former OpenAI Engineer's Candid Reflections on Life at the AI Giant
Calvin French-Owen, a former engineer at OpenAI, recently resigned after a year with the company, during which he worked on significant projects like Codex, OpenAI's AI coding agent. He published a detailed blog post reflecting on his experience, offering insights into the company's culture, rapid growth, operational challenges, and approach to AI safety.
Key Takeaways from French-Owen's Experience:
French-Owen clarified that his departure was not driven by internal "drama" but by a personal desire to return to founding startups. He previously co-founded the customer data startup Segment, which Twilio acquired for $3.2 billion in 2020.
1. Rapid Growth and Scaling Challenges:
- Exponential Expansion: OpenAI experienced massive growth during French-Owen's tenure, expanding from 1,000 to 3,000 employees in just one year.
- Product Success: This growth was fueled by the immense success of products like ChatGPT, which reportedly surpassed 500 million active users in a remarkably short time.
- Operational Chaos: Such rapid scaling inevitably strains operations. French-Owen noted that communication, reporting structures, product shipping, people management, organization, and hiring processes all break down or buckle under the pace.
2. Startup-like Culture Amidst Scale:
- Empowerment over Red Tape: Despite its size, OpenAI retains a startup-like environment where employees are empowered to act on their ideas with minimal bureaucracy.
- Duplicated Efforts: This autonomy, however, leads to duplicated efforts across teams, with French-Owen citing examples like multiple teams developing similar libraries for queue management or agent loops.
- Codebase Issues: The engineering environment faces challenges due to varying skill levels among engineers, from seasoned veterans to new PhDs. The central code repository, described as a "back-end monolith," is prone to breakages and slow performance, though management is aware and working on improvements.
3. The "Launching Spirit" and Product Development:
- Meta-like Pace: OpenAI operates with a "move-fast-and-break-things" mentality, reminiscent of early Facebook. Many hires also come from Meta.
- Codex Development: The senior team French-Owen worked on, comprising eight engineers, four researchers, two designers, two go-to-market staff, and a product manager, built and launched Codex in an intense seven-week sprint marked by significant sleep deprivation.
- Immediate Impact: Codex saw immediate user uptake at launch simply through its integration into ChatGPT's interface, a testament to the platform's distribution power.
4. Secrecy and External Scrutiny:
- Controlled Information: OpenAI maintains a culture of secrecy to prevent leaks, given the intense public scrutiny it faces.
- Social Media Monitoring: The company actively monitors social media platforms like X (formerly Twitter) for viral posts and potential responses.
- "Twitter Vibes": French-Owen humorously noted that the company's operations seem influenced by "twitter vibes."
5. AI Safety: Misconceptions vs. Reality:
- Debunking Misconceptions: French-Owen addressed the common misconception that OpenAI is not sufficiently concerned about AI safety.
- Practical Safety Focus: While acknowledging the theoretical long-term risks raised by doomsayers and former employees, the internal focus is primarily on practical safety issues such as hate speech, abuse, political bias, bio-weapon development, self-harm, and prompt injection.
- Awareness of Impact: OpenAI is acutely aware that hundreds of millions of people use its LLMs for consequential tasks, including medical advice and therapy.
- High Stakes Environment: The company operates under the awareness that governments, competitors, and former employees are closely watching its progress and safety practices, creating a high-stakes environment.
French-Owen's reflections provide a valuable, grounded perspective on the realities of working at a leading AI company navigating rapid growth, complex technical challenges, and the critical domain of AI safety.
Original article available at: https://techcrunch.com/2025/07/15/a-former-openai-engineer-describes-what-its-really-like-to-work-there/