Replying to @coastalgoshawk Explaining what ASI...
ASI, or artificial superintelligence, is what comes after AGI: an AI that far surpasses human intelligence. And the most common question is, what does that look like? The problem is, by definition, we can't answer that. Let me explain.

One thing you may already know intuitively, just from being a user of so many AI products, is that AIs get smarter over time, right? The more you watch TikTok, the better your FYP gets, et cetera. AI models, and the entire field of AI, get better over time for two reasons. Number one, creative people come up with new and novel ways to get computers to mimic human behavior and reasoning. Someone had to invent the transformer architecture that ChatGPT is built on. Second, and this is the part that I think not a lot of people truly grasp: unlike any other kind of technology, with AI and deep learning models, if you want one to get smarter, one method you can use is simply to throw more compute at it. When you hear people say "compute," literally just think metal chips and energy from power plants. It's the physical stuff required to make AI possible. That's compute.

If we plot the intelligence gains in AI on a graph, we can actually track the rate at which AI is getting smarter. And once we have that trend line, we can use it to extrapolate how smart AI will be well into the future. Where do you think Einstein-level intelligence is on this chart? Think it's here? No, it's here. So what happens when we've built a machine that is smarter than a thousand Einsteins combined? The intelligence that would arise up here is so far beyond human that it would be certifiably alien to us, okay? It would be like an ant trying to understand the Internet. To wrap your mind around the idea of ASI is to be humble enough to imagine a world in which humans are not at the top of the food chain. We are not the apex predator. We're the ants.
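The "plot the trend line, then extrapolate" idea above can be sketched numerically. This is a toy illustration only: the capability scores below are made-up numbers chosen to double roughly once a year, not real benchmark data, and the fit assumes the growth really is exponential.

```python
import math

# Hypothetical capability scores by year -- illustrative numbers only,
# not real benchmark results. They roughly double each year.
years  = [2018, 2019, 2020, 2021, 2022, 2023]
scores = [1.0, 2.1, 4.3, 8.8, 17.5, 36.0]

# Assume exponential growth, so fit a straight line in log space:
# log(score) = a + b * year, via ordinary least squares.
n = len(years)
ys = [math.log(s) for s in scores]
x_mean = sum(years) / n
y_mean = sum(ys) / n
b = sum((x - x_mean) * (y - y_mean) for x, y in zip(years, ys)) \
    / sum((x - x_mean) ** 2 for x in years)
a = y_mean - b * x_mean

def predict(year):
    """Extrapolate the fitted exponential trend to a given year."""
    return math.exp(a + b * year)

print(f"implied doubling time: {math.log(2) / b:.2f} years")
print(f"projected score in 2030: {predict(2030):.0f}")
```

The point of the sketch is the shape of the argument, not the numbers: on an exponential trend, a level far above today's (the "thousand Einsteins" point) sits only a fixed number of doubling times away, which is why extrapolations like this feel so dramatic, and also why they are so sensitive to the assumption that the trend continues.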
We simply don't know what we don't know. And that gap is the essence of why trying to understand an artificial superintelligence's behavior or nature is an exercise in grasping at the edges of the unknowable. For some scientists, harnessing this kind of ASI super-brain represents the largest scientific frontier of all time. But others say it is nothing short of a recipe for human extinction. Go to part two to learn about both sides and see which one you agree with.