According to Wired, at least two employees on OpenAI’s economic research team have departed in recent months, including Tom Cunningham, who left the company entirely in September. Cunningham reportedly wrote in an internal parting message that the team faced a growing tension between doing rigorous analysis and functioning as an advocacy arm for the company. The report, citing four anonymous sources, alleges that OpenAI has become more reluctant over the past year to release work highlighting economic downsides such as job displacement, favoring positive findings instead. OpenAI chief strategy officer Jason Kwon addressed the concerns in an internal memo, arguing the company must act as a “responsible leader” and not just raise problems but “build the solutions.” A company spokesperson pointed to the 2024 hiring of chief economist Aaron Chatterji and an expanded research scope aimed at understanding AI’s societal impacts.
The Inevitable Tension
Here’s the thing: this isn’t really a surprise, is it? OpenAI is no longer a plucky non-profit research lab. It’s a multi-billion-dollar corporation with massive partnerships with Microsoft, governments, and other enterprises. Its valuation is astronomical. So of course there’s going to be internal pressure to frame the AI narrative in a way that supports its business model and growth trajectory. The tension Tom Cunningham described—between being a rigorous research institution and a “de facto advocacy arm”—was basically inevitable the moment OpenAI took that first giant investment check. You can’t simultaneously be the world’s leading vendor of a disruptive technology and also be its most prominent, impartial critic. The incentives just don’t align.
The Subtle But Important Messaging Shift
Look at the language from Jason Kwon’s internal memo. He says that because OpenAI is “the leading actor” putting AI into the world, it’s “expected to take agency for the outcomes.” That sounds responsible. But it’s also a profound shift from a pure research mindset. It subtly moves the goalposts from “understand and disclose all potential impacts” to “understand and then manage the narrative around the impacts we are causing.” Building solutions is important, sure. But if you start soft-pedaling the problems in your public research—like, say, the potential for massive labor market disruption—then you’re not setting the stage for genuine solutions. You’re engaging in PR. And when the leading source of information on a technology’s risks is the company selling it, we’ve got a serious transparency problem.
Why This Isn’t Just Internal Drama
This matters because policymakers, academics, and the public rely on this research to understand what’s coming. OpenAI’s 2023 paper “GPTs Are GPTs” was widely cited because it put concrete numbers on which occupations were most exposed to automation. If that stream of clear-eyed analysis dries up or becomes overly sanitized, we’re all flying blind. We’re left with corporate talking points on one side and speculative doom-mongering on the other. The middle ground of credible, nuanced research gets hollowed out. And in a field moving this fast, that’s dangerous. How can we possibly craft sensible regulations or build effective safety nets if we’re not getting the full picture from the primary source?
Part of a Broader Pattern
Let’s be real, this fits a pattern we’ve seen across the tech industry for decades. A company starts with lofty, open ideals. Then, as the commercial stakes get higher, the walls go up. Research becomes R&D. Transparency becomes a liability. It happened in social media, it’s happened in other corners of tech, and now it’s happening in the most important technological shift in a generation. OpenAI’s promise of “broadly distributed” benefits is being tested against the hard reality of being a for-profit “leading actor.” The departure of researchers who call this out is a canary in the coal mine. It suggests the company’s internal culture is prioritizing advocacy over analysis, and that means the rest of us need to be more skeptical than ever about the information it chooses to release.
