AI's Future Looks Like the Human Brain 🧠 Ilya S...
Last weekend Vancouver had Taylor Swift. This weekend Vancouver has a co-founder of OpenAI talking about the future of AI agents and superintelligence. I guess Vancouver is the place to be. So Ilya was there, and I want to talk about what he said, because what everyone really wanted to know is how Ilya sees the future as the founder of Safe Superintelligence. His company does what it says on the tin: he's building superintelligence. So how do we get there? That's the question.

Ilya sits down, and the first thing he does is shock everyone in the room by saying he kind of agrees that we have an issue with training models. His point is that there are no more internet-scale data sources left, so you can't keep training bigger models that way. You can't give them more internets. There's only one internet. And he asks: if we're out of pre-training, where does intelligence go?

Ilya suggested a couple of ways forward, and they got increasingly speculative. I'm going to give you two of them.

The first one is less speculative. He said reasoning is a step forward, but reasoning might not look like what we think. If you remember when chess programs started to beat grandmasters, one of the ways they did it was by overturning traditional assumptions about the game and making correct moves that humans found very surprising. Ilya called on that metaphor and said logic is coming to AI. We will have step-by-step reasoning, but everyone assumes it will be logic that we understand intuitively. The history of chess suggests it won't be. It will be logic that we find absolutely shocking, even when it's correct. I thought that was a really interesting one to think about.

The second one he called out, and it's a bit more speculative, is that he thinks there's a path for AI to start to self-reflect and teach itself how to improve. And the analogy he gave there has stuck in my head.
He said that in a sense we're at the same place human brains were a few hundred thousand years ago. We're not the biggest brains in the business; other mammals have bigger brains. But we have figured out how to use our brains really effectively. He gave the example of eyesight: our individual neurons are too slow to process vision on their own, but the brain has figured out how to run them in massively parallel fashion so we can see properly. In the same way, he thinks AI is going to be helpful in teaching AI how to make the most of the resources at its disposal, perhaps including synthetic data, where we've seen some progress recently, in order to keep scaling intelligence.

So if you're asking whether Ilya is pessimistic or optimistic, the truth is that Ilya is just thoughtful. He sort of agrees with the people who were complaining about a training wall back in November. But he also disagrees with them, because those people assumed that meant the end of intelligence scaling, and Ilya doesn't believe that. Ilya thinks there's still a path to superintelligence. He's still a founder, he believes in that thesis, and he's still building that way. He just thinks it may not happen through pre-training, because we've got one internet and we've already used it. We need to find other ways to build on top of these trained models to continue to scale intelligence, like reasoning.

So that is your three-minute summary of Ilya's 30-minute conversation at NIPS. And I tried to keep a straight face saying that. I hope you enjoyed it. Let me know what you think in the comments. Cheers.