Why Reid Hoffman feels optimistic about our AI future
In Reid Hoffman’s new book Superagency: What Could Possibly Go Right With Our AI Future, the LinkedIn co-founder makes the case that AI can extend human agency — giving us more knowledge, better jobs, and improved lives — rather than reducing it.
That doesn’t mean he’s ignoring the technology’s potential downsides. In fact, Hoffman (who wrote the book with Greg Beato) describes his outlook on AI, and on technology more generally, as one focused on “smart risk taking” rather than blind optimism.
“Everyone, generally speaking, focuses way too much on what could go wrong, and insufficiently on what could go right,” Hoffman told me.
And while he said he supports “intelligent regulation,” he argued that an “iterative deployment” process that gets AI tools into everyone’s hands and then responds to their feedback is even more important for ensuring positive outcomes.
“Part of the reason why cars can go faster today than when they were first made is because … we figured out a bunch of different innovations around brakes and airbags and bumpers and seat belts,” Hoffman said. “Innovation isn’t just unsafe, it actually leads to safety.”
In our conversation about his book, we also discussed the benefits Hoffman (who’s also a former OpenAI board member, current Microsoft board member, and partner at Greylock) is already seeing from AI, the technology’s potential climate impact, and the difference between an AI doomer and an AI gloomer.
This interview has been edited for length and clarity.
You’d already written another book about AI, Impromptu. With Superagency, what did you want to say that you hadn’t already?
So Impromptu was mostly trying to show that AI could [provide] relatively easy amplification [of] intelligence, and was showing it as well as telling it across a set of vectors. Superagency is much more about the question around how, actually, our human agency gets greatly improved, not just by superpowers, which is obviously part of it, but by the transformation of our industries, our societies, as so many of us get these superpowers from these new technologies.
The general discourse around these things always starts with a heavy pessimism and then transforms into — call it a new elevated state of humanity and society. AI is just the latest disruptive technology in this. Impromptu didn’t really address the concerns as much … of getting to this more human future.
You open by dividing the different outlooks on AI into these categories — gloomers, doomers, zoomers, bloomers. We can dig into each of them, but we’ll start with a bloomer since that’s the one you classify yourself as. What is a bloomer, and why do you consider yourself one?
I think a bloomer is inherently technology optimistic and [believes] that building technologies can be very, very good for us as individuals, as groups, as societies, as humanity, but that [doesn’t mean] anything you can build is great.
So you should navigate with risk taking, but smart risk taking versus blind risk taking, and that you engage in dialogue and interaction to steer. It’s part of the reason why we talk about iterative deployment a lot in the book, because the idea is, part of how you engage in that conversation with many human beings is through iterative deployment. You’re engaging with that in order to steer it to say, “Oh, if it has this shape, it’s much, much better for everybody. And it makes these bad cases more limited, both in how prevalent they are, but also how much impact they can have.”
And when you talk about steering, there’s regulation, which we’ll get to, but you seem to think the most promise lies in this sort of iterative deployment, particularly at scale. Do you think the benefits are just built in — as in, if we put AI into the hands of the most people, it’s inherently small-d democratic? Or do you think the products need to be designed in a way where people can have input?
Well, I think it could depend on the different products. But one of the things [we’re] trying to illustrate in the book is to say that just being able to engage and to speak about the product — including use, don’t use, use in certain ways — that is actually, in fact, interacting and helping shape [it], right? Because the people building them are looking at that feedback. They’re looking at: Did you engage? Did you not engage? They’re listening to people online and the press and everything else, saying, “Hey, this is great.” Or, “Hey, this really sucks.” That is a huge amount of steering and feedback from a lot of people, separate from what you get from my data that might be included in iteration, or that I might be able to vote or somehow express direct, directional feedback.
I guess I’m trying to dig into how these mechanisms work because, as you note in the book, particularly with ChatGPT, it’s become so incredibly popular. So if I say, “Hey, I don’t like this thing about ChatGPT” or “I have this objection to it and I’m not going to use it,” that’s just going to be drowned out by so many people using it.
Part of it is, having hundreds of millions of people participate doesn’t mean that you’re going to answer every single person’s objections. Some people might say, “No car should go faster than 20 miles an hour.” Well, it’s nice that you think that.
It’s that aggregate of [the feedback]. And in the aggregate if, for example, you’re expressing something that’s a challenge or hesitancy or a shift, but then other people start expressing that, too, then it is more likely that it’ll be heard and changed.
And part of it is, OpenAI competes with Anthropic and vice versa. They’re listening pretty carefully to not only what they’re hearing now, but … steering towards valuable things that people want and also steering away from challenging things that people don’t want.
We may want to take advantage of these tools as consumers, but they may be potentially harmful in ways that are not necessarily visible to me as a consumer. Is that iterative deployment process something that is going to address other concerns, maybe societal concerns, that aren’t showing up for individual consumers?
Well, part of the reason I wrote a book on Superagency is so people actually [have] the dialogue on societal concerns, too. For example, people say, “Well, I think AI is going to cause people to give up their agency and [give up] making decisions about their lives.” And then people go and play with ChatGPT and say, “Well, I don’t have that experience.” And if very few of us are actually experiencing [that loss of agency], then that’s the quasi-argument against it, right?
You also talk about regulation. It sounds like you’re open to regulation in some contexts, but you’re worried about regulation potentially stifling innovation. Can you say more about what you think beneficial AI regulation might look like?
So, there’s a couple areas, because I actually am positive on intelligent regulation. One area is when you have really specific, very important things that you’re trying to prevent — terrorism, cybercrime, other kinds of things. You’re trying to, essentially, prevent this really bad thing, but allow a wide range of other things, so you can discuss: What are the things that are sufficiently narrowly targeted at those specific outcomes?
Beyond that, there’s a chapter on [how] innovation is safety, too, because as you innovate, you create new safety and alignment features. And it’s important to get there as well, because part of the reason why cars can go faster today than when they were first made is because we go, “Oh, we figured out a bunch of different innovations around brakes and airbags and bumpers and seat belts.” Innovation isn’t just unsafe, it actually leads to safety.
What I encourage people to do, especially in a fast-moving and iterative regulatory environment, is to articulate what your specific concern is as something you can measure, and start measuring it. Because then, if you start seeing that measurement grow in a strong way or an alarming way, you could say, “Okay, let’s explore that and see if there are things we can do.”
There’s another distinction you make, between the gloomers and the doomers — the doomers being people who are more concerned about the existential risk of superintelligence, the gloomers being more concerned about the short-term risks around jobs, copyright, any number of things. The parts of the book that I’ve read seem to be more focused on addressing the criticisms of the gloomers.
I’d say I’m trying to address the book to two groups. One group is anyone who’s anywhere from AI skeptical — which includes gloomers — to AI curious.
And then the other group is technologists and innovators saying, “Look, part of what really matters to people is human agency. So, let’s take that as a design lens in terms of what we’re building for the future. And by taking that as a design lens, we can also help build even better agency-enhancing technology.”
What are some current or future examples of how AI could extend human agency as opposed to reducing it?
Part of what the book was trying to do, part of Superagency, is [to show] that people tend to reduce this to, “What superpowers do I get?” But they don’t realize that superagency is when a lot of people get superpowers, because I also benefit from it.
A canonical example is cars. Oh, I can go other places, but, by the way, when other people go other places, a doctor can come to your house when you can’t leave, and do a house call. So you’re getting superagency, collectively, and that’s part of what’s valuable today.
I think we already have, with today’s AI tools, a bunch of superpowers, which can include abilities to learn. I don’t know if you’ve done this, but I went and said, “Explain quantum mechanics to a five-year-old, to a 12-year-old, to an 18-year-old.” It can be useful at — you point the camera at something and say, “What is that?” Like, identifying a mushroom or identifying a tree.
But then, obviously there’s a whole set of different language tasks. When I’m writing Superagency, I’m not a historian of technology, I’m a technologist and an inventor. But as I research and write these things, I then say, “Okay, what would a historian of technology say about what I’ve written here?”
When you talk about some of these examples in the book, you also say that when we get new technology, sometimes old skills fall away because we don’t need them anymore, and we develop new ones.
And in education, maybe it makes this information accessible to people who might otherwise never get it. On the other hand, you do hear these examples of people who have been trained and acclimated by ChatGPT to just accept an answer from a chatbot, as opposed to digging deeper into different sources or even realizing that ChatGPT could be wrong.
It is definitely one of the fears. And by the way, there were similar fears with Google and search and Wikipedia; it’s not a new dialogue. And just like any of those, the issue is, you have to learn where you can rely upon it, where you should cross-check it, how important cross-checking is, and all of those are good skills to pick up. We know where people have just quoted Wikipedia, or have quoted other things they found on the internet, right? And those are inaccurate, and it’s good to learn that.
Now, by the way, as we train these agents to be more and more useful, and have a higher degree of accuracy, you could have an agent who is cross-checking and says, “Hey, there’s a bunch of sources that challenge this content. Are you curious about it?” That kind of presentation of information enhances your agency, because it’s giving you a set of information to decide how deep you go into it, how much you research, what level of certainty you [have]. Those are all part of what we get when we do iterative deployment.
In the book, you talk about how people often ask, “What could go wrong?” And you say, “Well, what could go right? This is the question we need to be asking more often.” And it seems to me that both of those are valuable questions. You don’t want to preclude the good outcomes, but you want to guard against the bad outcomes.
Yeah, that’s part of what a bloomer is. You’re very bullish on what could go right, but it’s not that you’re not in dialogue with what could go wrong. The problem is, everyone, generally speaking, focuses way too much on what could go wrong, and insufficiently on what could go right.
Another issue that you’ve talked about in other interviews is climate, and I think you’ve said the climate impacts of AI are misunderstood or overstated. But do you think that widespread adoption of AI poses a risk to the climate?
Well, fundamentally, no, or de minimis, for a couple of reasons. First, you know, the AI data centers that are being built are all intensely focused on green energy, and one of the positive knock-on effects is … that folks like Microsoft and Google and Amazon are investing massively in the green energy sector in order to do that.
Then there’s the question of when AI is applied to these problems. For example, DeepMind found that they could save, I think it was a minimum of 15 percent of electricity in Google data centers, which the engineers didn’t think was possible.
And then the last thing is, people tend to over-describe it, because it’s the current sexy thing. But if you look at our energy usage and growth over the last few years, just a very small percentage is the data centers, and a smaller percentage of that is the AI.
But the concern is partly that the growth on the data center side and the AI side could be pretty significant in the next few years.
It could grow to be significant. But that’s part of the reason I started with the green energy point.
One of the most persuasive cases for the gloomer mindset, and one that you quote in the book, is an essay by Ted Chiang looking at how a lot of companies, when they talk about deploying AI, seem to have this McKinsey mindset that’s not about unlocking new potential, but about cutting costs and eliminating jobs. Is that something you’re worried about?
Well, I am — more in transition than an end state. I do think, as I describe in the book, that historically, we’ve navigated these transitions with a lot of pain and difficulty, and I suspect this one will also be with pain and difficulty. Part of the reason why I’m writing Superagency is to try to learn from both the lessons of the past and the tools we have to try to navigate the transition better, but it’s always challenging.
I do think we’ll have real difficulties with a bunch of different job transitions. You know, probably the starting one is customer service jobs. Businesses tend to — part of what makes them very good capital allocators is they tend to go, “How do we drive costs down in a variety of frames?”
But on the other hand, when you think about it, you say, “Well, these AI technologies are making people five times more effective, making the salespeople five times more effective. Am I going to hire fewer salespeople? No, I’ll probably hire more.” And if you go to the marketing people, marketing is competitive with other companies, and so forth. What about business operations or legal or finance? Well, all of those things tend to be [where] we pay for as much risk mitigation and management as possible.
Now, I do think things like customer service will go down in head count, but that’s the reason why I think it’s job transformation. One [piece of] good news about AI is it can help you learn the new skills, it can help you do the new skills, and it can help you find work that your skill set may more naturally fit with. Part of that human agency is making sure we’re building those tools in the transition as well.
And that’s not to say that it won’t be painful and difficult. It’s just to say, “Can we do it with more grace?”