TIKTOK

AI’s Future Looks Like the Human Brain 🧠 Ilya Sutskever, co-founder of OpenAI, just reshaped how we think about AI at NeurIPS (NIPS) 2024. He declared “pre-training as we know it will end” because we’ve hit “peak data.” There’s only one internet, and AI can’t keep learning from the same old content forever. He called internet data the “fossil fuel” of AI, hinting that future models will need new training methods—maybe even generating their own data. But that’s just the beginning. Sutskever predicts AI will soon “reason” through problems rather than just matching patterns like today’s models. He compared it to how chess AIs surprise even grandmasters with moves no one expects. More reasoning means less predictability—and that’s both exciting and concerning. He also dropped a big concept: “agentic AI.” This means AI that doesn’t just respond—it acts on its own, makes decisions, and handles complex tasks. Think of personal AI assistants that can truly manage projects without needing micromanagement. And the wildest part? Sutskever hinted that future AIs could seek rights and coexist with humans—not because we program them that way, but because of how they evolve. He even compared AI’s development to human evolution, where our brains scaled beyond other species in unpredictable ways. We’re not just building bigger models—we’re stepping into a future where AI can think, decide, and maybe even advocate for itself. Ready or not, this is where AI is headed. 
Hashtags: #product #productmanager #productmanagement #startup #business #openai #llm #ai #microsoft #google #gemini #anthropic #claude #llama #meta #nvidia #career #careeradvice #mentor #mentorship #mentortiktok #mentortok #careertok #job #jobadvice #future #2024 #story #news #dev #coding #code #engineering #engineer #coder #sales #cs #marketing #agent #work #workflow #smart #thinking #strategy #cool #real #jobtips #hack #hacks #tip #tips #tech #techtok #techtiktok #openaidevday #aiupdates #techtrends #voiceAI #developerlife #cursor #replit #pythagora #bolt

3:40 Jun 08, 2025 122,900 6,545
@nate.b.jones
660 words
Last weekend Vancouver had Taylor Swift. This weekend Vancouver has the founder of OpenAI talking about the future of AI agents and superintelligence. I guess Vancouver is the place to be. So Ilya was there, and I want to talk about what he said, because what everyone really wanted to know is how Ilya sees the future as the founder of Safe Superintelligence. His company does what it says on the tin: he's building superintelligence. So the question on everyone's mind is, how do we get there?

Ilya sits down, and the first thing he does is shock everyone in the room by saying he kind of agrees that we have an issue with training models. His point is that there are no more internet-scale data sources left, so you can't keep training models that way. You can't give them more internets; there's only one internet. And he asks: if we're out of pre-training, where does intelligence go? Ilya suggested a couple of ways forward, and they got increasingly speculative. I'm going to give you two of them.

The first is less speculative. He said reasoning is a step forward, but reasoning might not look like what we think. If you remember back when chess programs started to beat grandmasters, one of the ways they did it was by overcoming traditional assumptions about logic and making correct moves that humans found very surprising. Ilya called on that metaphor and said logic is coming to AI. We will have step-by-step reasoning, but everyone assumes it will be logic we understand intuitively. The history of chess suggests it won't be: it will be logic we find absolutely shocking, even when it's correct. I thought that was a really interesting one to think about.

The second one he called out is a little more speculative: he thinks there's a path for AI to start to self-reflect and teach itself how to improve. And the analogy he gave there has just stuck in my head.
He said that in a sense we're at the same place human brains were a few hundred thousand years ago. We're not the biggest brains in the business; other mammals have bigger brains. But we have figured out how to use our brains really effectively. He gave the example of eyesight: our neurons are too slow to process vision on their own, but we have figured out how to hack them and run massively parallel processes so we can see properly. In the same way, he thinks AI is going to be helpful in teaching AI how to make the most of the resources it has at its disposal, perhaps including synthetic data, where we've seen some progress recently, in order to keep getting more intelligent and continue to scale.

So if you're asking whether Ilya is pessimistic or optimistic, the truth is that Ilya is just thoughtful. He sort of agrees with the people who were complaining about a training wall back in November. But he also disagrees with them, because those people assumed that meant the end of intelligence scaling, and Ilya doesn't believe that. Ilya thinks there's still a path to superintelligence. He's still the founder; he believes in that thesis, and he's still building that way. He just thinks it may not happen through pre-training, because we've got one internet and we've already used it. We need to find other ways to build on top of these trained models to continue to scale intelligence, like reasoning.

So that is your three-minute summary of Ilya's 30-minute conversation at NIPS. I tried to keep a straight face saying that, and I hope you enjoyed it. Let me know what you think in the comments. Cheers.
