A Conversation with Karen Hao
- Sara Arjomand
- Sep 13
- 16 min read
Karen Hao is an award-winning reporter and the best-selling author of Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI. Hao sits down with Sara Arjomand '26 to discuss the book, OpenAI's troubled history, and the race towards AGI.
This interview transcript has been edited for length and clarity.

Sara Arjomand: Karen Hao is a best-selling author and award-winning reporter. She writes about artificial intelligence and was the first journalist to profile OpenAI. Her book Empire of AI was published in May of this year. It's a fascinating look at the AI industry and OpenAI in particular, as well as at some of the industry's more eccentric characters. Karen Hao, welcome to the podcast!
Karen Hao: Thank you so much for having me.
Sara Arjomand: So, the title of your book is Empire of AI. Why “empire”?
Karen Hao: Yeah, so I make this argument in the book that we really need to start thinking of tech companies like OpenAI as new forms of empire, because they aren't just operating as businesses that are providing us products and services anymore. They're also actively terraforming our Earth, reshaping our politics and geopolitics. They're developing a controlling influence on our professional lives, our social lives, and many, many different facets of society.
And they, of course, have also consolidated an extraordinary amount of political and economic power. So to call them just a business is really insufficient for encapsulating every different way that they operate and that they are exerting influence on our lives.
I wanted people to grapple with just the sheer scope and scale of what these entities have become, and to realize that they're actually more powerful than pretty much any nation-state in the world now. If you recognize that they are empires, then you can also more clearly anticipate the consequences of continuing to allow them to operate in this way. Empires are antithetical to democracy, and allowing these companies to do whatever they want, with unchecked access to resources, will over time erode the foundations of our democracy.
Sara Arjomand: Okay, we'll talk more about that in a bit. But I suppose—you were the first person to profile OpenAI, the first journalist who was let inside its doors. Can you tell me about how that trip to San Francisco came to be?
Karen Hao: Yeah, so I was a reporter at MIT Technology Review at the time, which is a publication that specializes in emerging technologies and, back then, was very focused on fundamental research—like pre-commercialization technologies. I was covering AI, looking at the fundamental research coming mostly out of academia at the time, but also a little bit out of industry.
OpenAI came on my radar because it was conceived as a nonprofit fundamental-research lab, not as a company meant to create consumer products like ChatGPT. In 2019, OpenAI began pivoting away from its nonprofit roots toward a more commercial orientation. Sam Altman officially became CEO at that time, and Microsoft invested a billion dollars in the company. It felt to me that this organization, which already had some influence on the way AI was being developed, could one day also have a lot of influence on the way AI was introduced to the public and on the way the public would come to understand what this technology is, and that all of these changes would shape that.
So I just proposed to the company: you know my work, you know that I understand AI research really well, it seems like you're changing a lot as an organization, and you might want to re-introduce yourself to the public. They really liked that idea at the time, so they agreed to let me embed within the organization for three days and work on my profile.
But through the course of reporting the profile, I ended up coming to conclusions that they really didn't like, and ultimately they refused to talk to me for three years after that.
Sara Arjomand: Right. It's interesting that they kind of welcomed you in the first moment, knowing what they were doing behind closed doors. Why do you think the company decided to give you access, knowing ahead of time that you were bound by journalistic duty to call it like you see it?
Karen Hao: I think there's kind of two ways to answer that question. One is that a lot of people in the tech industry, surprisingly, do not really understand how journalism works. There are a lot of problems with access journalism and the games that the tech industry will play to dangle carrots in front of journalists to entice them into following more of the company narrative so that they can continue getting access moving forward.
But separately from that, a lot of companies in Silicon Valley—and OpenAI in particular—engage in a lot of self-delusion. They don't actually see themselves the way the average member of the public might see what they're doing. So I don't think they felt, to the same degree that I did, representing the public voice, that they were engaging in strange behavior and that there was a disconnect behind closed doors.
I won't say there was zero awareness, because, of course, at the end of the day there were people I interviewed who pointed this disconnect out, and that's part of the reason why I started noticing it more myself. But by and large, especially among the leadership—the ones who made the decision to let me in—I think they had a story they told themselves about how they were extremely mission-oriented and aligned with their mission.
Sara Arjomand: Yeah, let's talk a little bit about that leadership. There's this popular notion of a “tech bro,” a person within whom nerdiness and superciliousness are paradoxically consummated. What role does ego—and I'm thinking here of people like Sam Altman and Elon Musk—play in the OpenAI story?
Karen Hao: Yeah, I think to understand the AI industry today is really to understand it as a story of ego, profit, and ideology. It’s really a mix of these three things. In hindsight, it’s so obvious that when Sam Altman and Elon Musk first co-founded OpenAI, it was an egotistical project. But in the moment, it was a different time.
It was the end of 2015, when Cambridge Analytica hadn’t happened yet, and there hadn’t yet been a backlash against the tech industry. So people more naturally believed—or suspended their disbelief—around the possibility of these tech titans fundamentally being altruistic and doing things for good.
The reason why they initially started the organization was that they were upset that Google was creating a monopoly on AI research and therefore having a dominant influence on AI development. And it was Google, not them, that held that influence. So much of the way the industry has continued to operate since then is just that—it’s tech bros being frustrated or motivated by the idea of wanting to reshape what they see as a profoundly consequential technology in their own image.
Sara Arjomand: So you've written a lot about these AI luminaries, the big names—people like Altman and Musk—but I want to know, what is the median employee at a place like OpenAI like? Can you speak to their values? How does the average worker conceive of their place within the empire, and do they see it as an empire at all?
Karen Hao: It's a great question. Employees at OpenAI in particular—I wouldn’t say they’re total outliers, but there’s something quite interesting about the fact that OpenAI conceptualizes itself as a mission-driven nonprofit. That ends up attracting a certain kind of person, and it also creates a certain narrative that people tell themselves about why they’re doing what they’re doing.
When I used to cover Facebook, when I talked with employees, they were quite clear that they were working for a business, and that ultimately—even if they really wanted to do the right thing—they understood that the bottom line superseded the right thing. They knew that they were entering into that constrained environment and they were cognizant of their limitations and just trying to navigate within those limitations.
Whereas employees at OpenAI often do not feel that way. They don’t feel like they’re working at a pure business. They feel like there is still something different about OpenAI, because there’s still a nonprofit entity governing the for-profit, and they feel that there’s a more pure-hearted goal they can achieve by being at the company.
But, then again, it also really depends on when the employee arrived at OpenAI. Earlier employees, who arrived when it was still a nonprofit, I think, feel more strongly about that, whereas employees who’ve arrived more recently—now that OpenAI is going to potentially be valued at $500 billion—are no longer under that false pretense. They’re joining thinking that they’re just at a hyper-scaling startup, and they’re building products and changing the world in the way social media companies changed the world.
So the values of these employees are all across the board. But I think generally, they do see themselves as good people—no one ever sees themselves as the bad character. And so I don’t really think most of them see these companies as empires. Although I was surprised, after I published my book, that there were a number of people within the company and in the industry who reached out to me and said that after hearing the argument, it was hard for them to argue against it, but that they had never arrived at that conclusion themselves before.
Sara Arjomand: Hmm. I mean, I know there's some overlap between the tech space and the effective altruism community. And, you know, Oxford philosopher Nick Bostrom was among the first to sound the alarm about the risks of unaligned AI, and his 2014 book Superintelligence got Musk kind of obsessed with the issue—you talk about this in your book. So to what extent did EA-type fears of existential risk motivate that slide into the more profit-oriented, commercial model? Can you discuss the impact of “ends justify the means” reasoning?
Karen Hao: That's such a good question. Yeah, so the effective altruism community would say themselves that they have tried everything possible to prevent the slide from a more idealistic nonprofit to this profit-driven corporation. But exactly as you articulated in your question, I concluded by the end of my reporting that the EAs actually worked hand in hand, inadvertently, with more accelerationist-type people—the ones who were more clearly aligned with “Yes, we want to just build this technology as quickly as possible and release it”—to pave the way for that transformation.
And a lot of it was, I think, because EAs believe that AI is existentially risky—that this risk outweighs any other kind of challenge. And a faction of the EAs then concludes that the answer is to accelerate the development of the technology as quickly as possible, so that they can maintain control over it, instead of having a bad actor arrive at it first. Because if a bad actor gets there first, then everyone in humanity might die.
And so in a weird way, they twisted themselves into this logical pretzel where they did exactly what they said they shouldn't be doing, but always in the context of, to your point, the ends justifying the means.
Sara Arjomand: So, in 2019, around the time that you started covering OpenAI, they added OpenAI LP, this “capped-profit” arm that, according to a statement by OpenAI, would allow them to rapidly increase their investments in compute and talent. How would you explain that decision and their motivations for adding this for-profit arm?
Karen Hao: I think one way you could explain it is that, given that Musk and Altman were engaged from the very beginning in an ego-driven project, and there was this desire to compete and be the group that had the greatest influence on AI development, it was sort of a natural extension of that original goal. They realized quickly that in order to be number one, they had to take what I call an “intellectually cheap” approach. This approach of: “We're just going to take existing techniques from the field, pump a historic amount of data into training these models, and use supercomputers larger than anyone has ever seen before,” rather than doing actual, fundamental AI research and breakthroughs, which is much harder to control on a timescale.
It's harder to guarantee that you will be first when you're relying on research breakthroughs—you never know when a breakthrough is going to arrive—but you do know, if you have a lot of cash and can build really massive computers, how to pace out your research so that you end up crossing the finish line first. And so, once they made that decision, the bottleneck became: how do we get as much cash as possible so that we can build the biggest supercomputer possible? And from there, it became obvious that they should create a for-profit entity.
Sara Arjomand: So, we've heard a little bit about a few of the imbroglios in which OpenAI has found itself over the years. And there's another one that I'm kind of interested in—this Anthropic split. So, Anthropic was founded in 2021 by seven employees of OpenAI, including their former Vice President of Research. So, how do you think about these other corporations—competitors on the scene like Anthropic?
Karen Hao: Yeah, it's all egos that are determining that they are the ones that should be actually dominating AI development. And essentially, through the course of OpenAI’s history, almost every single senior leader has splintered off to form their own competitors. So, not just Anthropic. Obviously, Musk then left and formed xAI, Mira Murati left and formed Thinking Machines Lab, and Ilya Sutskever left to form Safe Superintelligence.
But with all of these different people, I think there were basically two things happening.
One was that they were explicitly frustrated with Sam Altman, so there were interpersonal clashes, where they didn't like his leadership and felt that he was really, really difficult to work with. And there were also visionary clashes: they disagreed on some fundamental level about what the most responsible way is to actually develop AI and introduce it into the world. And so the Anthropic split was exactly the same thing—a group within OpenAI that was more AI safety-oriented, more existential risk-oriented, just felt both that Sam was untrustworthy and that they would do it better. And so they left to found their own organization.
Sara Arjomand: Right. And so, OpenAI and Anthropic—these other corporations—I mean, many of them were essentially founded with this intention, like you say, of preventing dystopia. And of course, you can't look inside the hearts and minds of these individuals, but what do you suspect the heads of these companies think they're doing now?
Karen Hao: I think they are still engaged in an “ends justifying the means” argument. I don't think that you could successfully run one of these companies long term without that kind of delusional thinking, because you can't wake up every day thinking that you're somehow doing something fundamentally bad for the world.
You perpetuate yourself and motivate an organization by creating a coherent logic within your mind about why you are doing the best possible thing with your time and for the world.
And from an outside observer's perspective, you look at all these companies and think, “All you're doing is just accelerating this race for AI development in extremely dangerous ways.” Not in the X-risk sense, but in terms of the impacts we're seeing right now: on mental health, some leading to devastating consequences, along with environmental consequences and labor consequences.
And yet they still have this steadfast hold on that internal logic of: “But it's all worth it, because we're going to either reach utopia or prevent dystopia in the end.”
Sara Arjomand: Can you tell us a little bit more about some of those shorter-term consequences? I think when a lot of people hear anti-AI or pessimistic arguments, their head immediately goes to the kind of sci-fi, dystopian, existential-risk place. But you're interested in these nearer-term harms: environmental consequences, algorithmic bias. Can you tell us about those?
Karen Hao: Yeah. So, basically, the core critique I have of Silicon Valley's approach to AI development is that they take this “scaling at all costs” approach, where they're just throwing more and more data at models and building larger and larger supercomputers. And if you just look at the consequences that stem from the amount of data they're consolidating and the scale of the supercomputers they're building, you already have a giant list of consequences. At the level of data that they are accruing, they no longer care about data privacy.
They are trying to erode intellectual property rights and lobby away copyright law. And increasingly, we see substantial evidence of these companies following the same path as social media in developing an engagement-centric model of development, where, because they're running out of internet data to scrape, they are just trying to harvest it from users themselves, by getting users more and more hooked on these technologies.
You also have less and less of an understanding of what is in your data set when it gets that large. In fact, they pretty much have no understanding anymore of what's in their data. And as a result, you end up with all of these downstream consequences, like the mental health harms, where they don't even know how to patch the model, because they don't even know what is leading the model to have sycophantic or psychologically traumatizing behaviors in the first place.
And then you have labor consequences, where, during the development of the AI model, if you're working with very polluted data, you have to involve content moderators to moderate the model's behaviors, and they end up with psychological trauma as well, just the way content moderators in the social media era did. And that's just the harms of scaling data.
Then, when we talk about scaling compute, or supercomputers, you start getting into environmental harms: acceleration of the climate crisis; acceleration of air pollution, because the energy being used to power these systems comes primarily from fossil fuel sources; and acceleration of the clean water crisis around the world, because these systems have to be cooled with fresh water. There are many more harms from there, but those are the most obvious ones when you just look at the fundamentals of how these systems are developed in the first place.
Sara Arjomand: I mean, all of that is incredibly upsetting. How do you think about, kind of, the individual user's role—like, you know, you or me? I mean, is AI something that you use yourself? I'm thinking of products like ChatGPT, Claude…
Karen Hao: Yeah. So I don't use any generative AI because of the work that I've done to report on these issues.
I just—sometimes I use this analogy that AI is a little bit like transportation. There's lots of different modes of transportation, lots of different AI tools. But generative AI, this type of AI that's emerged from this massive scaling approach, you could say is a little bit like the rocket of transportation. And there are actually very few use cases in which it makes sense to use a rocket to fulfill a transportation need, because there are very few things in the world where the use of the rocket actually gives us more benefit than the cost of developing and deploying the rocket. And I mean cost broadly defined—it's not just the financial, but the environmental and everything.
With transportation, I feel like we understand that nuance. But with AI, it seems like we're okay with just giving everyone a rocket to use for everything, and that just doesn't make sense. So from an individual user perspective, the way I would encourage people to think about their own AI use is: think through what it is that you're doing with these AI tools, and think about the costs of developing them. Some AI tools actually have very little cost, and at that point you can maybe use them more freely. But with ChatGPT, with Claude, with all these other generative AI tools, the costs are so huge that it does pain me to see how a huge chunk of the use cases is just entertainment. You know—people just generating random photos for fun.
Like, generating an image is just so extraordinarily costly. I really wish we could have alternatives to generating those images that aren't so costly, but we don't have them right now. So ask yourself those questions.
But I think, more fundamentally, more effective than individual user action is collective action. Thinking about how you can, within your school environment, get together with peers, professors, administrators, to have an actual open debate about what the university's adoption of an AI tool should be. Or maybe within your classroom—what your class’s AI adoption policy should be. That is a form of collective governance that I think is more effective than just asking each individual to think for themselves about how they should use or not use certain AI tools.
And beyond the university environment, I think there's plenty of collective action that we're already seeing. Artists and writers are litigating against these companies because of the data and intellectual property they've taken. We’re seeing communities organizing to protest data center development. These are all different ways that people are contesting how AI has been developed and how it's being deployed, and they can actively use these different levers to shape the way AI continues to develop and get deployed in the future.
Sara Arjomand: If I put you in charge of the world, what changes would you put into effect at the level of corporations, or maybe at the level of the federal government with regard to AI? I mean, would you snap your fingers and pop OpenAI out of existence?
Karen Hao: Am I allowed to rewind time?
Sara Arjomand: In the hypothetical, yes.
Karen Hao: I think I would rewind the clock back to maybe early social media years and create strong data privacy laws. Create more publicly funded research on the limitations and the potential of digital technologies like social media.
I would create more transparency laws—transparency at all levels. Transparency of the amount of data that companies are collecting, the environmental costs of their infrastructure—and put all of that in place pre-generative AI boom.
And I think we would have a fundamentally different trajectory for AI as well, because the AI industry could only have manifested the way that it did on the back of sloppy legislation and regulation that just allowed these companies to do whatever they wanted during the social media era: accumulate the data that was necessary for training these models in the first place, and start learning how to build supercomputers at the scale they needed.
Yeah. So I think that's what I would do—just reset the clock a little bit.
Sara Arjomand: And if you couldn't reset the clock? If it's just today, September 8, 2025?
Karen Hao: Yeah. I mean, at that point, I would still do, kind of, a lot of the same things. But of course, we are in a tougher position. I still think it is very possible to figure out how to contain the harms of this technology.
But yeah, increasing transparency is a really huge one—just understanding what these companies are feeding into their models, what they're using, what kind of research they're actually doing, and what research they might be censoring internally and not allowing the public to see.
I would make sure that copyright law interpretation falls on the side of creators. I would strengthen labor laws to make sure that there is a basis for collective action to continue, especially in the context of economic opportunity—like having people be able to bargain for their rights without worry of being laid off. Having the ability to maybe even bargain for the longevity of their job in certain ways, and bargain against AI being used to automate certain aspects of their jobs.
And yeah, also vast public funding into AI research and other forms of research outside of the corporations.
Sara Arjomand: Awesome. Thank you so much, and thank you so much for speaking with me.
Karen Hao: Yeah, thank you so much for having me.