AI in Logistics
You know, during the 12 Days of OpenAI we've really missed the chance to come up with use cases, because we've been so busy following the news, and Google has been dropping things left and right. It's hard to keep track of what you can actually do with all this stuff. So I'm going to start a little series; let me know if you want me to do more of these. These are not real stories, but they are reasonable examples of how you might use these models. I emphasize that because I think we need what I'd call simple use cases to help us understand how these models might be useful in real-world work scenarios.

The story I'm starting with is about logistics, trucking companies, and the O1 model. Imagine you're running a thousand trucks with Internet of Things sensors on them. You get LiDAR, gas mileage, where the trucks are, what's on the trucks, and all of it is streaming in. You have a lot of data coming into your systems, but you installed those sensors at different times, so they feed into different data pools. None of it is optimized, and the traditional approach would be to spend a long time consolidating everything into a single data lake to make it simple and easy to query.

But you don't have a lot of time. Your executives want to move fast, they want to understand what's going on, and they've heard about this AI thing. So they want a large-language-model query interface: they type in a question about where the trucks in North Carolina are, and they get that answer right away. That sounds unreasonable to the engineering team, and they're frustrated by it. But O1 is out, and they have it. So what they decide to do is use O1 for some of the busy work they don't really have time for. They use O1 to refactor their APIs so they get real-time streaming off the sensors as the data arrives, regardless of granularity.
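The refactoring step above amounts to mapping payloads from sensors installed at different times onto one common event schema. Here is a minimal sketch of that idea; the field names, sensor payload shapes, and `TelemetryEvent` type are all illustrative assumptions, not anything from the story.

```python
# Hypothetical sketch: normalizing heterogeneous truck-sensor payloads
# into one common streaming event schema, so a single API can serve them
# regardless of which sensor generation produced the data.
from dataclasses import dataclass
from typing import Any


@dataclass
class TelemetryEvent:
    truck_id: str
    metric: str   # e.g. "gps", "fuel_mpg" (names are assumptions)
    value: Any
    ts: float     # unix timestamp


def normalize(raw: dict) -> TelemetryEvent:
    """Map vendor-specific payloads onto the common schema.

    The payload shapes handled here are invented for illustration.
    """
    if "lat" in raw and "lon" in raw:   # e.g. an older GPS unit
        return TelemetryEvent(raw["unit"], "gps", (raw["lat"], raw["lon"]), raw["t"])
    if "mpg" in raw:                    # e.g. a fuel-economy sensor
        return TelemetryEvent(raw["unit"], "fuel_mpg", raw["mpg"], raw["t"])
    raise ValueError(f"unrecognized payload: {raw}")


evt = normalize({"unit": "NC-0042", "lat": 35.78, "lon": -78.64, "t": 1718000000.0})
print(evt.metric)  # → gps
```

The point of the pattern is that once every payload becomes a `TelemetryEvent`, the downstream API and dashboards only ever see one shape, no matter when a sensor was installed.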
So wherever the trucks are, the data is streaming out in real time. They use something like DreamFactory to handle the middleware of securing and serving the API, then they stand up a dashboard, and off they go.

What O1 did was refactor the way data is pulled out of those data lakes and off the data streams, which let the company get dashboards up, with some kind of natural language query capability, much faster. You can also embed a large language model on top of the incoming data, because O1 is able to anticipate the eventual use case: this is not just a BI dashboard, it's also a queryable dashboard, and the data schema needs to be designed accordingly. That's where something like O1 is useful in a way that a conventional model, or even a classic bootstrap-an-API-from-scratch approach, just isn't.

So, wrapping all of this up: if you're lost in API land, the point is that you use O1 where it makes sense, on difficult problems. In this case there was a time constraint, a complexity constraint, a real-time data-streaming constraint, and an end constraint that the customer wanted to query the data in natural language. O1 is able to put those things together, come up with an API schema that works, and help accelerate getting everything piped through and served up so it's actually effective. That's an example of applying O1 to a real-time problem to deliver actual value. It's not the only example I can think of; I can think of a bunch more, but it's a good one.
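To make the "queryable dashboard" idea concrete, here is a tiny sketch of the query layer's shape. In production, a model like O1 would translate the natural-language question into a structured query against the telemetry store; here a hard-coded stub stands in for the model, and the truck rows, state table, and `answer` function are all invented for illustration.

```python
# Hypothetical sketch of a natural-language query layer over telemetry.
# A keyword stub stands in for the LLM that would generate the real query.

TRUCKS = [  # toy rows standing in for the normalized telemetry store
    {"truck_id": "NC-0042", "state": "NC", "lat": 35.78, "lon": -78.64},
    {"truck_id": "VA-0007", "state": "VA", "lat": 37.54, "lon": -77.44},
]

STATE_NAMES = {"north carolina": "NC", "virginia": "VA"}  # illustrative subset


def answer(question: str) -> list[dict]:
    """Resolve 'where are the trucks in <state>' against the store.

    In a real system an LLM would emit the filter; this stub only
    matches state names so the end-to-end shape is visible.
    """
    q = question.lower()
    for name, code in STATE_NAMES.items():
        if name in q:
            return [row for row in TRUCKS if row["state"] == code]
    return []


print(answer("Where are the trucks in North Carolina?"))
# → [{'truck_id': 'NC-0042', 'state': 'NC', 'lat': 35.78, 'lon': -78.64}]
```

The design choice the story emphasizes is that the schema was planned for this query path from the start, so swapping the stub for a model-generated query doesn't require reworking the data layer.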