
By Turing Post
Date: October 2023
This summary unpacks why predicting AI's future trajectory is harder than ever, offering a framework for crypto investors, AI researchers, and tech builders to navigate this new, unpredictable landscape. Understand the evolving definition of "singularity" and how multi-layered progress demands a new approach to strategy.
Reflecting on a 1984 Computer Chronicles episode, the host observes a stark contrast between past AI predictions and today's reality. While early AI pioneers accurately foresaw natural language interfaces, they operated in a world of clear bottlenecks and linear progress. Today, AI's advancement feels less like a single path and more like a simultaneous explosion across countless dimensions, making future narration nearly impossible.
"The timeline was wrong. It took decades longer than many people expected when we reach natural language conversations with machines. But the prediction itself was absolutely right."
"If I ask you today what comes next after large language models, there is no single answer."
"In the original sense, the singularity is not a machine. It's a limit of explanation."
Podcast Link: Click here to listen

Recently, my friend Raymond Petite sent me a link to an old video, an episode of the Computer Chronicles from 1984. That's the year I was born.
The episode is about artificial intelligence. I expected it to feel dated, to carry some nostalgia for the good old days. And of course, they were discussing chess-playing machines.
One of the guests was John McCarthy, and it was very interesting to see him at a time when artificial intelligence was at the height of its hype. He said: "Well, I see no limit short of human intelligence, and then with faster machines one could do in a short time the equivalent of what a human could do in a very long time."
But then there was a moment that really caught my attention and made me think about our current situation.
When Nils Nilsson, then director of the AI Center at SRI International, was asked about the future of AI, his answer was very clear: "Another very important application is computer programs that are able to converse with humans in English, everyday ordinary language, and that's going to make computers accessible to a much wider variety of people."
So the timeline was wrong. It took decades longer than many people expected before we reached natural language conversations with machines. But the prediction itself was absolutely right.
What makes this especially striking is the context. This was the symbolic era of AI. Deep learning was not on anyone's mind yet, at least not in any practical sense. Compute was limited. Data was limited. The dominant paradigm was rule-based systems and explicit knowledge representation.
And still he could name what came next. He was not guessing, and it was not a blind assumption; what matters is how he made this prediction. There was a sense that progress had a direction, that there was a bottleneck, that natural language was hard but clearly the next thing to solve.
In other words, progress felt linear enough to narrate. Even uncertainty had structure. And that's what got me thinking about how different things feel now.
If I ask you today what comes next after large language models, there is no single answer. Is it agents? Is it memory? Is it robotics? Is it multimodality? Is it world models? Is it better infrastructure? Economics, regulation, or all of it at once?
Natural language is no longer a frontier. It's the interface, but there is no single replacement frontier that everyone agrees on. Progress doesn't feel sequential anymore. It feels layered, interconnected, hard to summarize in one sentence.
And that's where the word "singularity" starts showing up more often. If you look at Elon Musk's Twitter, you will see that he repeatedly says the singularity is near, that we are at its event horizon. Most recently, he said that 2026 is the year of the singularity.
Every time I hear it, and every time I discuss it with my husband, who also thinks we are at the beginning of the singularity, I pause. It's not because I specifically agree or disagree; it's that I'm trying to understand what exactly they mean. So I try to unpack it.
To do that, it helps to go back even further, before AI as we know it today existed. In 1958, Stanislaw Ulam, reflecting on conversations with John von Neumann, wrote about accelerating technological progress and changes in the mode of human life giving the appearance of approaching a kind of singularity in human history, beyond which human affairs as we know them could not continue.
That sentence is often quoted, but it's also easy to misunderstand. Von Neumann was not talking about artificial intelligence becoming autonomous. He was not predicting machines redesigning themselves.
The phrase can be interpreted in different ways, but at its core, it seems to describe something simpler. Acceleration reaching a point where familiar ways of understanding stop working.
In the original sense, the singularity is not a machine. It's a limit of explanation. Decades later, the term takes on more specific meaning, especially through Ray Kurzweil and his book, The Singularity is Near.
In the original sense, the singularity is not a machine. It's a limit of explanation. Decades later, the term took on a more specific meaning, especially through Ray Kurzweil and his book, The Singularity Is Near. In that version, the singularity becomes a future threshold driven by compounding technological capability, where artificial intelligence improves itself autonomously and human ability to predict or steer outcomes collapses.
That's a definition most people implicitly have in mind today. And by that definition, it's hard to say where we are because current AI systems still depend on human decisions.
Humans remain in the loop: in data curation, in hardware, energy, capital, and institutions. Control hasn't disappeared at all, even if it has become more distributed and slower to act. Even in the recent situation with Malbot, a human was still in the loop; people were still creating the space for those robots to communicate.
So what's going on? Why does singularity language feel relevant now, even when the strict definition doesn't quite apply? This is where I come back to that moment in 1984, because back then, even without knowing timelines, people could correctly say what came next.
Progress had a dominant bottleneck. Progress was linear. Natural language was hard. So it became the focus.
Today there is no single bottleneck like that. Do we need compute for pre-training? For mid-training? For post-training? And where is there the most to uncover, discover, and explore?
Everywhere. We don't have one exact bottleneck, and nothing is linear anymore. Progress happens across many layers at once.
Models improve, tools change, deployment patterns shift, costs drop, institutions react, governments react. Second-order effects start to matter even more than first-order improvements, and that makes the future almost impossible to narrate. It becomes almost impossible to see beyond a few months.
So when someone says that 2026 is the year of the singularity, I think it's not about machines becoming sentient or us merging with them. It's closer to that original von Neumann intuition: the pace and structure of change no longer fit the way we're used to talking about progress.
It's mind-blowing for us humans. However hard it was to predict the future, we used to be able to do it. Now the future has stopped being easy to describe, even without dates.
So, I don't know. I don't have an answer to the question, is 2026 the year of the singularity? But I'm pretty sure we have entered a period where our ability to predict, to clearly say what comes next, has weakened, almost disappeared, even though our ability to build new systems keeps improving.
Though I don't know what exactly the singularity is, my gut tells me we are at the beginning of it. And there will be a moment when we will need to look back and ask ourselves: when did this start? When was the moment the singularity began? I bet the end of 2025 and 2026 will be that moment, the point where we will be able to attach a date to the event.
Is it true? I don't know. It's just absolutely fascinating for me to live in a time when so many things are happening at once, on so many layers. It's absolutely mind-blowing.
The most important question becomes how we as humanity will deal with this unpredictability. This is my thinking process; there are no conclusions in this particular episode.
And in general, I approach Attention Span as a way of thinking together with my audience. It's my train of thought.
I would love to hear your thoughts. I always appreciate it when you leave comments and we can talk about it.
Are we in the singularity? What does it mean? Is Ray Kurzweil right with his definition, or is John von Neumann more accurate, with the singularity as the point where we simply cannot see beyond the event horizon?
What do you think? I'm very curious. Please let me know and let's think together. Let's determine the present.
Even if we can no longer predict the future, it will help us later. Thank you. Please subscribe, share, and, as I said, leave your thoughts in the comment section.