
By Azeem Azhar
Published on [Insert Date]
This summary distills Azeem Azhar’s year-end synthesis of AI, energy, and geopolitics for builders moving from software to physical systems. It explains why the next growth phase depends more on grid capacity and process knowledge than raw model reasoning.

Hello everyone, it's Azeem here, and as the year comes to a close, I have been reflecting on why I started this podcast nearly a decade ago. It began as an attempt to understand how exponential technologies are going to shape the world. And in truth, I just wanted to have some great conversations. That lens and those discussions have taken us through many themes, including artificial intelligence, energy systems, biology, pandemics, geopolitics, and many of the other deep structural forces that influence our world. And yet the very first episode, back in November 2016, centered on a question that remains as relevant today as it was then: how is work evolving in the face of transformative technology? Back then I discussed this with Ryan Avent, who was then a writer at the Economist. The question has been core to my analysis this year, and it sits inside a wider set of concerns.

Artificial intelligence is starting to behave like a real general purpose technology, and we're watching an enormous industry and economic architecture form around it. It's raising the live question of whether this is sustainable progress or the hallmarks of an investment bubble. Work itself is being rewired, from labor markets to agentic workflows and the way teams are organized. In the US and the UK, graduates are bearing the brunt of changes in the labor market. But is the softening in demand for their skills a consequence of technology, or something else? The physical world has reasserted itself, as the buildout of compute and the struggle for energy and grid capacity become the binding constraints on the move to an AI economy. More than a decade ago, we used to say that software was eating the world. Today, we've realized that software is demanding a new one. All of this is unfolding within a more fractured geopolitical environment, shaped above all by the evolving relationship between the United States and China.
In this episode, I've pulled together highlights from the most interesting conversations I had on all these questions over the past year. You'll hear from Matthew Prince, Steve Hsu, and many more. These are just glimpses of the exceptional dialogues I was fortunate to have this year. Please go to Spotify, Apple Podcasts, YouTube, or the Exponential View Substack page for full access to all of them. Let's begin with the question that's dominated 2025: is all this progress real? How fast is AI improving? What's driving that progress? And how is the economy bending to accommodate it? We've looked closely at whether this is sustainable growth or the familiar shape of a speculative bubble. The clips that follow will give you a sense of the new development loop inside the frontier labs, the economic dynamics of the industry, and the strategic bets that will define the coming years. We've also built a data-driven live dashboard to track the most important developments in this investment cycle. You can find it at boomorbubble.ai.
Speaker 1: So let's imagine my son, 20 years old, and he wants to build a product on top of OpenAI. Where is a good place for him to go and build it?
Speaker 2: Sam said this one time and it stuck with me. He said, "If you're building a company at the frontier of the model capabilities, if you're building something that just barely works and you can't wait for our next model because it's going to make your product sing, then you're probably building in the right place. If instead you're building some sort of scaffolding that covers up the weaknesses of the current model, and you're actually afraid of our next model because it might not have those same weaknesses, that's a bad place to be building, because on average models are going to improve really fast, and what's a weakness of one model will not be a weakness of the next."
Speaker 3: We have this price discovery mechanism, we have this efficiency across the market, and then we have what I can only describe as a socialist suggestion from you, which is that smaller companies should pay pro rata, much, much less than bigger companies. Tell me how that works out.
Speaker 4: So I think it's actually very capitalist, not communist at all. What you as an AI company are really paying for is, on behalf of each of my users, do they get access to this content? It's almost like a subscription fee. If you have an AI that only one person uses, they're going to pay a relatively de minimis amount. Whereas if you have six billion users, like Google does, then of course they should be paying more, because the value is being spread across a much wider population.
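The mechanism described here can be sketched in a few lines. This is a toy illustration of the per-user, pro-rata idea only; the fee value and function names are my own hypothetical choices, not any company's actual pricing.

```python
# Toy sketch of pro-rata content licensing: each AI company pays a flat
# per-user fee for content access, so the total bill scales with audience
# size. The fee amount below is a made-up illustrative number.

PER_USER_FEE = 0.001  # hypothetical annual fee per end user, in dollars


def annual_content_fee(active_users: int, per_user_fee: float = PER_USER_FEE) -> float:
    """Total a company owes content creators for a year, scaled by audience."""
    return active_users * per_user_fee


# A single-user hobby agent pays a de minimis amount...
print(annual_content_fee(1))              # tiny: a fraction of a cent
# ...while a platform with billions of users pays proportionally more.
print(annual_content_fee(6_000_000_000))  # roughly $6M at this toy rate
```

The point of the design is that price discovery happens per user rather than per company, so a one-person startup and a six-billion-user platform face the same marginal rate.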
Speaker 5: There's a really interesting idea, which starts with the fact that LLMs don't get addicted to dopamine. They are this aggregate of human knowledge, and the training and the reinforcement learning get them to behave in particular ways, but it's quite hard to get them to be extreme by the time they reach us. There is also this really important point, which is that we can start to identify more clearly where there are gaps and opportunities.
For the first time in human history, we effectively have a mathematical model for the representation of all of human knowledge. You also then inherently get a pretty good model for where the holes in human knowledge are. I picture it like a block of Swiss cheese: there's a lot of cheese there, but there are a lot of different holes. And the business model of the web is going to change no matter what. It's going to change because answer engines are coming, and they're better, and as a result the business model is going to change. What I hope it changes to is one in which we reward content creators who create content that fills in the holes in that cheese, where they're actually doing things that make the world better.
Speaker 6: I heard a lot of people there talking about the prospect of AI driving 10, 20, 25% of economic growth, that there would be this bounty AI would bring us, and I don't really agree with that. What are those projections getting wrong?
Speaker 7: They're ignoring the all-critical role of human imperfections in systems and institutions. The more powerful AI becomes, and I'm not a pessimist on that, the more it bumps against people who don't want to adopt it, or institutional systems where it does not get incorporated into workflows. The general economic point is that as one thing in your economy gets better, the remaining imperfections become all the more important. When you look at healthcare, education, government, the nonprofit sector, and work in them as I have, you will see that the rate of improvement is going to be pretty slow.
Speaker 8: I've been on Substack for eight or nine years now. One of the things that struck me, and this does connect in a sense to our questions of trust and business models, is that Substack-funded pundits like me are being paid to be believed. Right? It's about my reputation. It is moving from a world of selling attention to advertisers to selling conviction to subscribers. And that is a different way of thinking about the relationship between an expert and the people in their network and their community.
Imagine if you could have a Substack where you got Anne Applebaum and Tom Nichols and David Frum and Jeff Goldberg, and it was $80 for just that Substack. Well, that's the Atlantic, and you get a hundred of them. So from a reader's perspective, subscribing to the Atlantic versus subscribing to a whole bundle of Substacks is really appealing.
Speaker 9: I don't think there's any doubt that power has moved. It's true in media; it's even true in the NBA, where you follow teams less than players. Look at the All-Star game: it's Team LeBron versus Team Giannis, not East versus West. You see it in all kinds of industries where power has moved from the center, from the brand, to the individual. So our challenge at the Atlantic is: how do you make the individual writers stay? How do you give them incentives? Why would you write for the Guardian or the Times? Well, you'd get a salary, but you'd also get read, or at least you'd get distributed.
Speaker 10: So one could argue that some aspect of the Western condition, any sense of malaise, may come from an excess of comfort, right? The appeal of being super-served by a network of thought sits counter to something that has also been inherently human: we have to struggle, and we often do struggle, and struggle forms part of the creation of our moral virtues.

I don't believe in utopias. I think they're a bad idea even if you can make one. I believe in protopia, this incremental move to slight betterment, even if it's only a 1% improvement over time, and that accumulates: if we can create 1% more than we destroy each year, we can accumulate a civilization. I think the struggles are never going to go away. The struggle is the process. We may change the kinds of things we are occupied and struggling with, but I see that, again, as progress.

I really like the fact that when I turn the tap on the faucet in my kitchen, I know clean drinking water will come out of it. Behind that is a whole load of really complicated stuff. And as our world gets more complicated, I think we actually do want to abstract away from that complexity.
Speaker 11: You mentioned complexity. I have other terms: I use the words options, possibilities, and opportunities. That's what we get with technology. It's not just complexity; it's the fact that we have more options and more choices about what we want to do in life. And by the way, if we want to be an Amish farmer, farming the old way, that's still a possibility. If you want to be a mathematician, a ballerina, a mortgage broker, or a web designer, you now have that possibility, and we'll have even newer ones in the future. That's what technology gives us. You come to the city, which is a possibility factory, and you come to modernity, which is increasing the options.
Speaker 12: One of the most significant shifts this year has been that AI is rolling out into our workplaces. It's altering tasks, roles, and organizational flows, and it's colliding with other long-running forces that shape the labor market, such as post-pandemic norms and all the economic uncertainty that surrounds us. The result is a complex and sometimes contradictory picture. I spoke with people who each hold a different part of that story, and together they help us understand how work is being rewired.
Speaker 13: We have been researching putting that brain into a small plush toy. So imagine a small dinosaur or bunny rabbit that your kid is carrying under her arm. Your kid is two or three years old, and this thing is teaching your child perfect French, or how to count, or telling it stories and singing little songs. And we find that kids really enjoy interacting with the systems we've built. Just imagine the learning possibilities: there's a window of neuroplasticity where you can acquire a foreign language at native-level fluency when you're quite young, if you just hear it. Maybe you talked to your nanny in a second language. Well, now every kid can have that.
Speaker 14: Isn't there a case that we've gone over the hill, past the point where the languages were high-level enough, abstracted enough, that we could understand them, to a point where we're going to be creating so much code so rapidly that it'll be less inspectable for the human?
Speaker 15: I believe we're already past that point. With or without AI, we passed it a few years ago, when it became clear that the majority of projects, commercial or not, are based roughly 90% on open-source libraries. It's very rare that a single person can navigate that whole codebase, right? So what do companies do? They divide and conquer, and they assign subsystems to individual teams.

You could say that AI seems to be affecting the execution of tasks, which young people do, rather than the orchestration, which favors more senior workers, because in the age of generative AI we're essentially managing these bots to go and execute on subtasks. I could see a world where even entry-level workers are middle managers in a sense: they have more orchestration of systems to do, and that is how we think about middle management. They're piecing things together to ultimately deliver something more abstract.
Speaker 16: Employers have a kind of high discount rate: they're optimizing for the short term more than the long term. And young workers, entry-level workers generally, are the more uncertain bets, whereas more experienced hires are the safer ones.
Speaker 17: That is such a fair summary. And taking on board a millennial who might need their kombucha and an early break on a Friday afternoon is higher risk than whipping the corporate wage slave in their 50s. How do you solve for that novelty risk?
Speaker 18: One of the reasons we had discussed that managers were slowing down their new hiring was that Jason from outside the firm, who's 24 and just finished a grad degree, is not known to the firm. Jason can get all his orchestration skills through Khan Academy and Exec and whatever else, but he's still outside the firm. So are there any mechanisms that allow that signaling to de-risk things for the manager?
Speaker 19: It's a tough part of labor economics, because signals have been eroding over time. Higher ed was always a great signal: you get a diploma, and that was always a good signal, and still is. That's part of education. The other part of education is actual education, actually increasing your skills, and that part is more and more suspect.
Speaker 20: We've often argued that technology is increasingly weightless, that computing is intangible, that information defies gravity. But what we've learned this year, and the year before, is that all of these technologies, this AI fabric, are fundamentally physical. The race to build compute, secure power, and upgrade grids has become one of the defining challenges of this period. At the same time, the energy system itself is changing, because energy is shifting from a simple commodity that we pull out of the ground to a technology that we can develop, iterate, and innovate on. In this section, we're going to hear from people who've been examining how these physical limits shape what's possible in the digital realm.
Speaker 21: This week has been pretty bleak for energy transitioners. A lot of people believe that we're going too fast, that this is happening helter-skelter, and you can understand why that is. We're halfway across the road. Now, if you've ever seen anyone hesitate crossing a multi-lane highway, that is a very dangerous thing to do. The best thing is to get right across as quickly as you can, and that's what we have to do in the transition now. And to do that, we need to take some hard decisions. The faster we get across the rest of the road, the safer we're going to be. Because at the moment we're trying to ride two horses at once, and that's never very effective.
Speaker 22: Certainly when I hear the minister in the UK, Ed Miliband, speak, what I hear is somebody who is first and foremost thinking about a political objective, which is net zero. A number of the advocates are promoting it in a way that is very ideologically driven, and that is probably not very persuasive, frankly, compared to the notion of "I'm not going to pay an energy bill." I was joking to someone recently that I'm not a good salesman, which is why we had to build Octopus to show it rather than say it. We are now building homes; we've signed deals with eight of the biggest housebuilders in the UK to build homes where there will never be an energy bill, because they have a combination of heat pumps and solar panels. We shouldn't be trying to convince people; we should be able to show it. The lived experience beats the rhetoric.

The point about power is really critical. We've seen the tremendous demands that the AI data centers of 2025 are already putting not just on the US grid but on the supply chain, and the regulations that slow things down. Maybe the US has a lead in reasoning models right now, but there are other big advantages that China can bring to the table, which involve much bigger aspects of hardware.
AI requires a lot of power, both to train and to actually run. China, by the end of this year, will have installed 500 gigawatts of new solar. The US will have about 50 gigawatts: a full order of magnitude of difference. China right now has 33 nuclear plants under construction. The US has zero.
Speaker 23: These shifts, both technological and economic, sit within a world shaped by rising geopolitical tension. The dynamic between the United States and China is becoming the central influence on how AI and its supporting industries develop. It affects supply chains, national strategies, and the emerging logic of sovereign AI. Let's hear from guests who've examined China's political economy, its manufacturing depth, and the strategic consequences of a splintering global system.
Speaker 24: There's the mRNA vaccine, the smartphone, or GPS. These are things that have emerged from the American innovation stack. Has the Chinese stack produced anything that is comparable?
Speaker 25: My view is that science is for the most part a public good. It does not matter whether the innovation comes from a university in Hangzhou or Stanford University in California; it's whoever can make the most use of it, and American firms have not made very good use of it, while the Chinese have done a much better job. So maybe this process of invention is less important. Maybe what matters most is just having most of the industry and being able to iterate on it. Much of what technology is consists of things that cannot possibly be written down. When you have a workforce that is endlessly working through all of these different products, producing new knowledge, training each other, that is a lot of what the US as well as the UK have lost, as US manufacturing employment keeps going down. Right now it stands at about 12 million people. For China, it's about 70 million. I think that really places the US at a disadvantage as the stock of process knowledge unwinds.
Speaker 26: There are different versions of techno-accelerationism. There's a techno-accelerationism that comes out of Silicon Valley, which is really about building AGI, but there's also a policy accelerationism. As a slightly distant observer of China, what I see is a type of techno-acceleration on the deployment side: get it out there, get it used, an acknowledgement that there are going to be job losses and labor-market disruptions, but a willingness to push past that.
Speaker 27: Yeah, that question of which societies will hold up better in a time of rapid employment turnover is one I find fascinating. At what point is a system going to feel like change is coming too fast? And then the anesthesiologist lobby, that sort of thing, to me is going to be the more relevant variable in which country gains the most from this.
Speaker 28: I read this great essay by Kaiser Kuo. He says China has become a principal architect of modernity. Kuo argues that legitimacy is actually moving from procedure to performance, and that the Chinese track record of the last 30 years is really about that performance. If we're calling China an architect of modernity, we also have to understand that there are parts of that modernity which I hope listeners are very uncomfortable with. But I agree with Kaiser that there are real lessons the US and other countries around the world should learn about cultivating strategic industries, giving policy support, and knowing when to pull that policy support away and let the companies run on their own.
Speaker 29: Thanks for listening. You can find every full conversation on YouTube, Apple Podcasts or Spotify. For deeper weekly analysis of these trends, join me at exponentialview.co. The coming year will test many of the assumptions we've discussed, and I look forward to exploring them with you. Let me know in the comments which podcast was your favorite and what you'd like to see more of in 2026. Happy New Year.
[music]