
Nathan Lambert on what matters in the AI and science world
By Turing Post
This summary dissects the strategic imperative for the US to champion open AI models, moving beyond proprietary systems to accelerate innovation. It offers crypto investors, AI researchers, and tech builders a clear view of how open source shapes future technological leadership and scientific breakthroughs.
Key quotes from the episode:
"The future of AI isn't about who builds the biggest model, but who builds the most accessible foundation for everyone else to innovate upon."
"Relying solely on closed, proprietary AI is like trying to win a marathon with one shoe – you're hobbled by design."
"National security in the AI era means ensuring our brightest minds can build, inspect, and improve the core technologies, not just consume them."

So, I'm Nathan Lambert. I'm a research scientist at Hugging Face, where I work on a variety of things, mostly related to open source AI and open science, trying to make the world a better place with better technology.
I think the framing of open versus closed is a little bit of a misnomer. I think it's really about control. Who controls the models? Who controls the data? Who controls the compute?
And right now, that's very centralized in a few companies, and that's not a great place to be.
Why is that not a great place to be?
Well, I think there's a few reasons. One is just a resilience argument. If something happens to one of those companies, then we're all kind of screwed. And I think that's a bad place to be from a national security perspective, from an economic perspective, from a scientific perspective.
I think the second reason is that it limits innovation. If only a few companies have access to the best models, then only a few companies can build on top of them. And that's not a great place to be if you want to have a vibrant ecosystem of startups and researchers and academics.
And then I think the third reason is that it limits accountability. If only a few companies know what's going on inside these models, then it's very hard to hold them accountable for their behavior.
And so, I think that's a bad place to be from a societal perspective.
I think open source is a really important tool for leveling the playing field and making sure that we have a more distributed, resilient, and accountable AI ecosystem.
There are a lot of different ways to define open source. The way I think about it is that it's about having access to the code and the data, and the ability to modify and redistribute them.
That matters for a few reasons. First, it lets you understand what's going on inside the model: you can actually look at the code and see how it works. That's essential for accountability, because if you don't know how a model works, it's very hard to hold anyone accountable for its behavior.
Second, it lets you modify the model. You can change the code and make it do something different, and that's essential for innovation: if you can't change the model, you're limited to whatever the original developers thought it should do.
Third, it lets you redistribute the model. You can give it to other people and let them use it, and that's what access really means: if you can't redistribute a model, only the people who have access to the original can use it.
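To make that concrete, here is a minimal sketch of the inspect/modify/redistribute loop, assuming the Hugging Face transformers library and the openly licensed gpt2 checkpoint; the model choice, generation settings, and output path are illustrative assumptions, not specifics from the episode.

```python
# A minimal sketch of the inspect/modify/redistribute loop, assuming the
# Hugging Face `transformers` library and the openly licensed `gpt2`
# checkpoint (both illustrative choices, not specifics from the episode).
from transformers import AutoModelForCausalLM, AutoTokenizer

# Inspect: with open weights you can load the model locally and examine
# its architecture and parameter count directly.
model = AutoModelForCausalLM.from_pretrained("gpt2")
tokenizer = AutoTokenizer.from_pretrained("gpt2")
print(model.config)                      # the full architecture definition
print(f"{model.num_parameters():,} parameters")

# Modify: the weights are ordinary tensors you can fine-tune, and the
# generation behavior is yours to change.
inputs = tokenizer("Open models let you", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

# Redistribute: save the (possibly modified) model so others can use it.
model.save_pretrained("./my-derived-model")
tokenizer.save_pretrained("./my-derived-model")
```

Anyone who receives the saved directory can reload it the same way, which is exactly what redistribution buys you.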
There are a lot of different ways to make AI more open. One is simply to release the code and the data, and that's a really important first step.
But there's a lot more you can do. You can release the training logs, so people can see how the model was trained. You can release the evaluation metrics, so people can see how well the model performs. And you can release the safety evaluations, so people can see how safe the model is. All of those things make AI more open and accountable.
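For illustration, here is a hedged sketch of what releasing those extra artifacts alongside the weights might look like; every file name, field, and metric value below is a hypothetical placeholder, not a standard format.

```python
# A hedged sketch of releasing artifacts beyond the weights: training
# configuration, evaluation metrics, and safety-eval results written next
# to the model files. Every file name, field, and value here is a
# hypothetical placeholder, not a standard format.
import json
from pathlib import Path

release_dir = Path("./my-derived-model")   # same directory as the weights
release_dir.mkdir(exist_ok=True)

artifacts = {
    "training_config.json": {"base_model": "gpt2", "epochs": 3, "lr": 5e-5},
    "eval_metrics.json": {"benchmark": "hellaswag", "accuracy": 0.41},
    "safety_eval.json": {"suite": "toxicity-probe", "flagged_rate": 0.02},
}
for filename, payload in artifacts.items():
    (release_dir / filename).write_text(json.dumps(payload, indent=2))
```

In practice, open model hubs formalize this kind of metadata in model cards; the point is simply that each released artifact makes one more piece of the pipeline inspectable.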
The US government has a really important role to play in making AI more open. First, it can fund open source AI research, which is the most direct way to sustain a vibrant ecosystem of open source AI developers.
Second, it can require that any AI used by the government be open source, which helps keep the government accountable for the AI it deploys.
And third, it can create a regulatory environment that encourages open source AI, so that open source developers compete on a level playing field.
There are a few ways to think about the risks of open source AI. One is that it could be used for malicious purposes: people could use it to create fake news or to automate disinformation campaigns. Another is that it could be used to build biased or discriminatory systems. And a third is that it could be used to build AI that's too powerful.
But those risks are true of any powerful technology, and I think the benefits of open source AI outweigh them.
I think the best way to mitigate the risks of open source AI is to make sure that we have a strong ecosystem of open source AI developers who are committed to safety and ethics.
And I think that's what we're trying to do at Hugging Face.
There are a lot of ways to think about the future of AI. One is that it's going to become more and more powerful, and that's going to have a lot of implications for society. Another is that it's going to become more and more accessible, which has big implications for the economy. And a third is that it's going to become more and more open, which has big implications for democracy.
I think the most important thing is to make sure that we're thinking about the ethical implications of AI and that we're building AI that's aligned with our values.
I think that's what we're trying to do at Hugging Face.
Key Takeaways: