NVIDIA Project Digits
NVIDIA's Project Digits is ushering in the democratization of supercomputer power, which should drive rapid and novel innovation in AI and, finally, may introduce some disruptive dynamics in the market, particularly in the cloud space.

At the CES keynote, Jensen Huang announced Project Digits, among many other things, but this one is really interesting. It's essentially a small personal desktop supercomputer that can run a 200 billion parameter model at one petaflop, which is one quadrillion floating point operations per second, and the whole thing costs $3,000. So it's not cheap, but it's in consumer, prosumer territory, if you will.

For reference, GPT-3 was 175 billion parameters, and this could run that. Llama 3.3, the latest release, is 70 billion parameters, so it can run that too. So what does it mean when people can run these kinds of models locally? And by the way, that's enough power to do fine-tuning locally; it's even enough power to do training locally. Because of that, I think you start to see people experimenting with hyper-personalized models, a level of personalization we probably haven't experienced yet. (There's a back-of-envelope sketch of the memory math below.)

I also think a kind of dual-system configuration starts to emerge. a16z's Marc Andreessen talks about thinking of LLMs as a new type of operating system, and Project Digits really reinforces that metaphor: you'll probably have people with their standard desktop over here and their Project Digits supercomputer over there, running whatever model they need. It'll be a dual-system workstation.

I think there's also going to be a line in the sand at that 200 billion parameter level. Anything under it will start to be considered a local model, maybe a more personalized model, maybe a more specific use-case model. The frontier models are already far larger than 200 billion parameters, but we're going to see an explosion in size, because that line in the sand has been drawn: if it's 200 billion parameters, it's a consumer, prosumer, locally run model. It's not a frontier model anymore; those need to be hosted in the cloud, with inference served from the cloud.

And speaking of cloud, that's where the market disruption starts to come into play. For some folks, it's going to make more economic sense to spend $3,000, run Llama 3.3 or whatever the latest version is locally, and do all their inference locally, than to keep hitting some cloud-hosted model and paying token fees, or however it's priced at that point in time. (A rough break-even sketch follows below.) If there's a high level of adoption for Project Digits, it could eat a little into cloud revenue, and a little into Anthropic's and OpenAI's API revenue as well.

I think I want to buy one of these things. It's definitely a bit expensive, but I'm more likely to buy one of these than I was the Apple Vision Pro, which I was also very excited about; I just couldn't get myself over the hump of spending $3,500 on a set of goggles. But what do you think? Let me know in the comments. Peace.
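To ground the claim that a 200 billion parameter model fits on this box, here's a minimal back-of-envelope sketch in Python. The 128 GB of unified memory and the FP4 (4-bit) precision behind the one-petaflop figure come from NVIDIA's announcement; the overhead factor for KV cache and activations is an illustrative assumption, not a published spec.

```python
# Back-of-envelope check: does a 200B-parameter model fit in 128 GB
# of unified memory? Assumes FP4 (4-bit) quantized weights, plus an
# assumed headroom factor for KV cache and activations.

PARAMS = 200e9          # 200 billion parameters
BITS_PER_PARAM = 4      # FP4, the precision behind NVIDIA's petaflop figure
MEMORY_GB = 128         # Project Digits unified memory, per the announcement
OVERHEAD = 1.2          # illustrative assumption for KV cache / activations

weights_gb = PARAMS * BITS_PER_PARAM / 8 / 1e9   # bits -> bytes -> GB
total_gb = weights_gb * OVERHEAD

print(f"Weights alone: {weights_gb:.0f} GB")      # ~100 GB
print(f"With overhead: {total_gb:.0f} GB")        # ~120 GB
print("Fits in 128 GB?", total_gb <= MEMORY_GB)   # True, just barely
```

The takeaway is that 200B is roughly the largest model that squeezes into 128 GB at 4-bit precision, which is presumably why that number shows up in the marketing.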
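On the local-versus-cloud economics, here's a rough break-even sketch. The $3,000 hardware price is from the keynote; the per-token cloud price and the monthly usage volume are placeholder assumptions, since API pricing varies by provider and model and changes over time, and the sketch ignores electricity and the residual value of the hardware.

```python
# Rough break-even: how many tokens of cloud inference equal the
# $3,000 hardware cost? Both prices below are placeholder assumptions;
# electricity and resale value are ignored for simplicity.

HARDWARE_COST = 3_000.0      # Project Digits price, per the keynote
CLOUD_USD_PER_MTOK = 5.0     # assumed blended $ per million tokens
MONTHLY_MTOK = 50            # assumed heavy-user volume (millions of tokens/month)

break_even_mtok = HARDWARE_COST / CLOUD_USD_PER_MTOK
months = break_even_mtok / MONTHLY_MTOK

print(f"Break-even volume: {break_even_mtok:,.0f}M tokens")               # 600M tokens
print(f"Payback period: ~{months:.0f} months at {MONTHLY_MTOK}M tokens/month")  # ~12 months
```

Under these assumed numbers the box pays for itself in about a year for a heavy user, which is the kind of math that could nibble at cloud and API revenue if adoption is high.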
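And for what "doing all their inference locally" might look like in practice, here's a minimal sketch using the Hugging Face transformers library. The model ID is Meta's published Llama 3.3 checkpoint (it's gated, so it requires accepting the license on the Hub); bfloat16 and device_map="auto" are generic settings, not anything Project Digits-specific, since the device wasn't shipping at the time.

```python
# Minimal local-inference sketch with Hugging Face transformers.
# Assumes the gated meta-llama checkpoint has been downloaded and
# that the machine has enough memory to hold the 70B weights.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-3.3-70B-Instruct"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # generic precision choice, not Digits-specific
    device_map="auto",           # let accelerate place weights on available devices
)

prompt = "Explain what a petaflop is in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```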