Synopsis: In this Techstrong AI leadership interview, Ghost Autonomy CEO John Hayes explains how artificial intelligence (AI) will supply the missing critical capability needed to make autonomous vehicles an everyday reality.

Mike Vizard: Hello and welcome to the latest edition of the Techstrong AI video series. I’m your host, Mike Vizard. Today, we’re with John Hayes, CEO of Ghost Autonomy, and we’re talking about how AI will enable the next wave of self-driving cars. We’ll get into just how smart these cars are and how smart they might get. John, welcome to the show.

John Hayes: Hey, good to be here.

Mike Vizard: We’ve all heard about self-driving cars, and we see them from time to time, though most of us only notice them when there’s some sort of accident that makes the news. Bring us up to speed a little bit: what is the current state of the art for self-driving cars? There’s clearly some AI at work, but where are we now and where are we headed?

John Hayes: So, I live in Mountain View, California, so I get to see self-driving cars every day, because Waymo is here, Nuro is here. A lot of the companies doing suburban self-driving cars are here every day. At the beginning of AI, self-driving cars were actually considered the application of AI, and a lot of the research for any AI application usually centered around self-driving cars. You can see that in the data sets and in the research community; all of that was very, very AI-centric for that purpose. Now, what we’ve seen is that people are finding lots of other applications for AI. You had the launch of GPT, which got amazing headlines and brought AI into the consumer space, where people started interacting with it directly. And now we’re entering a new wave where it’s coming home, back to self-driving cars. So, what we’ve been spending the last six months on is: how do we take that new wave of consumer AI, which interprets images and English text directly, and apply it to self-driving cars?

Mike Vizard: All right, so for instance, the car might be able to, I don’t know, read a construction sign on the side of the road and make some adjustments accordingly. Or would it also be able to summarize the events that just occurred, so that if there’s an accident we can all know exactly what happened in a way we can all understand?

John Hayes: Yeah. The traditional development of AI for self-driving cars was very centered on road scenes. For anything you wanted to interpret in the scene, you had to train that into the models on the car. So, for things like reading signs, where you couldn’t pull something from map data and wanted to see something that was live and changing in the environment, you had to train the models that ran in the car specifically for anything you could think of that might occur. The difference now is that you have models that have been trained on absolutely everything on the internet: all the images on the internet, all the text you can get your hands on. There’s a giant text data set called The Pile, which is a huge body of text: legal text, books, Wikipedia, all sorts of sources. And the nice thing is that, because the model has been trained on absolutely everything, you don’t have to know ahead of time exactly what could be encountered.
And so, now you have this new sort of top-down model. Previously, I would call it a bottom-up model: you would sit in a room with a bunch of engineers, try to figure out the things that could be seen, develop a data set for that particular problem, and then train that into the car. Now you’re starting from a model that’s been trained on everything and then seeing how it applies to road scenes. You can ask it very general questions, whether that’s reading signs or moving all the way through to the consequences: what should the car do based on the scene presented in front of it?
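
To make that top-down approach concrete, here is a minimal sketch of asking a general-purpose multimodal model an open-ended question about a single camera frame. It assumes the OpenAI Python client and a vision-capable chat model; the file name and prompt are hypothetical, and this illustrates the idea rather than Ghost Autonomy’s actual stack.

```python
import base64
from openai import OpenAI  # assumes the openai Python package is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask_about_road_scene(image_path: str, question: str) -> str:
    """Ask a general multimodal model an open-ended question about one
    camera frame, with no road-specific training on our side."""
    with open(image_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode("utf-8")

    response = client.chat.completions.create(
        model="gpt-4o",  # any vision-capable chat model would do here
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": question},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
            ],
        }],
    )
    return response.choices[0].message.content

# Hypothetical usage:
# ask_about_road_scene("front_camera.jpg",
#     "Read any construction signs in this scene. Should the car slow down?")
```

The point of the sketch is the inversion Hayes describes: no one enumerated “construction sign” ahead of time; the question is posed in plain English against a model already trained on everything.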

Mike Vizard: How long does it take between when we make an advance like that and when it shows up in a car somewhere? A lot of the design cycles around cars are four to five years long. Can we add these kinds of capabilities to existing cars? What kind of cars do they need to be? A lot of cars are basically computers on wheels, but do they have to be powerful ones? What are the challenges in getting all this into everyday life?

John Hayes: So, we’ve always focused on cars people buy: ordinary consumer cars, sedans, SUVs and delivery vans, something that you can drive in an ordinary way. What we’re seeing is that the current generation of cars, especially the EVs, have pretty powerful computers, and we’ve now seen a convergence. It used to be that the computers in cars were different from the computers in, say, tablets, or mobile phones, or laptops; now we’re seeing convergence on the same computers, and there are two good things about that. One of them is that your software becomes easier, so more and more of the car’s functions are being defined in software. The next generation will really add connectivity. Previously, a car would report telematics: basic information about how the car is operating for maintenance, some information about speed, and sometimes there’s an electronic data recorder.
But now, with new wireless networks (this works with 4G, and with 5G it will be even better), you have the ability to send image and video to the data center for further processing. That really opens up the ability to add advanced functionality to cars that are already out in the world. Cars stay on the road for a decade, and now you can start adding functionality in the data center, where you have much, much more powerful computers available. Not only that, you can upgrade and keep up with Moore’s Law every single year as new chips are announced and deployed.
And so, I think what we’re going to see with cars is a real acceleration. It definitely takes five years to design a car from the bolts up, but we’re going to see another cycle, which is software updates to that car, and this is an area where Tesla is a leader. They ship a car, they keep it in the market for five to seven years, but they’re constantly providing software updates. I think we’re going to see that spread across all the car manufacturers, both software updates that add functionality in the car and software functionality that is updated in the data center, and that can be done at a much faster pace than designing a new car. So, I think that’s going to be really transformational for the products that we see.
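
As a concrete illustration of the frames-to-data-center path described above, here is a minimal sketch of shipping one camera frame to a remote ingest service. The endpoint, field names and vehicle ID are all hypothetical; a production telematics pipeline would add batching, compression and authentication.

```python
import requests  # plain HTTP for illustration; real fleets use telematics stacks

# Hypothetical ingest endpoint; not a real service.
INGEST_URL = "https://fleet.example.com/v1/frames"

def upload_frame(vehicle_id: str, image_path: str) -> bool:
    """Send one camera frame to the data center, where far larger models
    (on hardware refreshed yearly) can analyze it after the fact."""
    with open(image_path, "rb") as f:
        response = requests.post(
            INGEST_URL,
            files={"frame": ("frame.jpg", f, "image/jpeg")},
            data={"vehicle_id": vehicle_id},
            timeout=10,
        )
    return response.ok

# Hypothetical usage: upload_frame("vin-123", "front_camera.jpg")
```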

Mike Vizard: And the AI models that we’re updating the car with, sometimes even over the air, are smaller than people realize. We have this perception that AI models are these giant things, but they come in all sizes now. Some of them are massive on the training side, but the ones that make it to the network edge in the car are relatively small.

John Hayes: That’s definitely been true for the current generation of cars. This is where you have, call it, level two features: driver-assist features that aren’t expected to operate at high levels of reliability, because you assume there’s a person still driving and controlling the car. The next generation of processors lets you put in much larger models; that’s a 10X increase in the amount of computation you can do in the car, which will make those models dramatically more reliable. And then, additionally, you can have extremely large models in the data center. This is where you’re into tens to hundreds of billions of parameters. That’s much larger than you could put in a car today, and probably much larger than you could put in a car for the next decade. But you now have this kind of safety valve where you can say, “Hey, I can take a model, not spend a ton of time trying to tune it down to a limited compute platform, and then I have a lot more latent information, almost common sense, to supervise the car from the data center.”
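
One way to picture that safety valve is a two-tier arrangement: a small on-car model handles every frame, and only uncertain frames are escalated to the much larger model in the data center. The sketch below is hypothetical; both model functions are stand-ins and the confidence threshold is arbitrary.

```python
import random
from dataclasses import dataclass

@dataclass
class Detection:
    label: str
    confidence: float

def on_car_model(frame: bytes) -> Detection:
    # Stand-in for the small embedded model that runs on every frame.
    return Detection(label="construction_zone", confidence=random.random())

def data_center_model(frame: bytes) -> Detection:
    # Stand-in for the tens-to-hundreds-of-billions-parameter model
    # that could never fit on the car's compute platform.
    return Detection(label="construction_zone", confidence=0.99)

CONFIDENCE_FLOOR = 0.9  # arbitrary threshold, for illustration only

def classify(frame: bytes) -> Detection:
    result = on_car_model(frame)
    if result.confidence < CONFIDENCE_FLOOR:
        # The "safety valve": escalate ambiguous scenes to the big model
        # in the data center instead of tuning it down to car hardware.
        result = data_center_model(frame)
    return result
```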

Mike Vizard: What impact is all this going to have on the way we think about car ownership? Today, I buy a car or I lease a car, but going forward, if it’s a self-driving car, would I just subscribe to a service that says a car will come get me? Depending on the use case, it’ll be an SUV or whatever it is I need at the moment, and I’m just subscribing to, I don’t know, something provided by General Motors or Uber or somebody else for that matter?

John Hayes: If you go back a few years, there were certainly people who had that vision in mind: the idea that you wouldn’t necessarily have to own a car, that you would just call a car as a service, like an Uber, and it would be a self-driving car. I think what we’re seeing, though, is that that works really well in dense cities, where you can have a large fleet and a rapid response time. But if you look at suburbia, where most people live, you’re driving point to point, and waiting five to 10 minutes for a car to arrive starts to be inefficient, because the car is driving around empty to reach you, and it really does slow you down.
And so, I think the next generation is going to take these taxi services, or people-mover services, and just make that part of the ordinary car. Almost every auto company has a plan to deliver that type of product by the end of the decade, but what’s missing is new software that makes it work in a form factor people can actually buy: something on the scale of low thousands of dollars, not the tens of thousands of dollars of equipment put on the current generation of taxi or utility cars.

Mike Vizard: Well, wait a minute now, shouldn’t the AI know that I have just gotten my check from the restaurant and therefore the car will start moving in my general direction on the assumption that I’m going home or somewhere else? It might send me an alert saying, “Are you ready?” But it seems like we can interconnect the car with a lot of different systems out there.

John Hayes: We could, but that’s an enormously difficult integration challenge. We just spent three years where restaurants took away paper menus, and how did we solve that? We solved it by scanning QR codes, because digital integration at a broad scale is very, very hard. So it is possible that these applications could exist; there are, for example, AI pins and AR applications that could maybe sense that data and automate the process, but we’re at the very, very early stages of that type of integration.

Mike Vizard: Ultimately, can we completely trust the car to drive itself, or should somebody be sitting at the wheel just in case? What’s your sense of our comfort level with the safety of all of this?

John Hayes: I think our comfort level today is not very high, and you can see that in the companies deploying completely self-driving cars without drivers: they’ve been very judicious about the conditions under which the cars operate, the geographies in which they operate, and even the speeds. And look at the trouble Cruise has been having; they have fewer than 100 cars on the road, and there’s a lot of exposure. I think the breakthrough that will make this deployable is the ability to do billions of miles of testing, because the world is extremely variable. There’s a lot of great work going on in simulation: if I can predict an environment, I can test that the car works against that environment, or against that road configuration, or against the behaviors of the people around it. But ultimately, the real test is putting it out on the road.
So, I think we’ll see a graduated deployment, where you’ll have cars driving with a person monitoring them, and then we’ll slowly expand as we gain that experience, and there’ll be a tipping point where people will trust it to drive every day. Again, you can look at Tesla: they’ve been deploying full self-driving with, call it, low thousands of trusted testers, and they’re waiting for some important signals that it’s functional before they deploy to the millions of cars they’ve sold. I think we’ll see more of that type of development, where you’ll buy a sort of autonomy-ready vehicle that will still need years of testing across a fleet of tens of thousands to millions of vehicles before it’s fully turned on. And if you look at where government regulations are going, right now it’s a very, very unregulated space, because no one quite knows what performance regulation to write. Again, I think that will be derived from experience running these over many years.
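
The simulation point can be pictured as a regression suite of predicted environments, with the caveat Hayes gives: passing in simulation is necessary but not the real test. Everything in this sketch is hypothetical, including the scenario names and the run_scenario stub standing in for an actual simulator.

```python
from dataclasses import dataclass

@dataclass
class Scenario:
    name: str
    road: str           # a predicted road configuration, e.g. "lane_closure"
    actors: list[str]   # other road users placed in the scene

def run_scenario(scenario: Scenario) -> bool:
    """Stand-in for a simulator run: returns True if the planner completes
    the scenario without a safety violation. A real harness would replay
    the scene in a physics simulator."""
    return True  # placeholder result

# Hypothetical regression suite of predicted environments.
SCENARIOS = [
    Scenario("unprotected_left", "four_way_intersection", ["oncoming_car"]),
    Scenario("jaywalker", "two_lane_road", ["pedestrian"]),
    Scenario("construction_zone", "lane_closure", ["cone", "flagger"]),
]

failures = [s.name for s in SCENARIOS if not run_scenario(s)]
print(f"{len(SCENARIOS) - len(failures)}/{len(SCENARIOS)} scenarios passed")
```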

Mike Vizard: Now, I’ve talked to people who say, “That’s never going to happen. I don’t trust the car to do that.” But to be fair to the machines, some of these people I talk to, I know them well, and given a choice between relying on their driving skills and the machine, I might go with the machine. So, how do we make some sort of reasonable assessment about whether the machine is actually the better driver than the person?

John Hayes: It’s interesting. Everyone believes they’re an above-average driver, but almost everyone I talk to also hopes for a world where their children don’t have to learn to drive. So you have to be, say, better than a 17-year-old. There are a lot of parties to bring together. There are the people manufacturing the technology; that’s the party we’re in. There are governments putting the rubber stamp on it, and insurance companies, who are great at math, great at really crunching the numbers on what the real performance is. And ultimately, people have to experience it. If you look at people who’ve been in one of the taxi services like Waymo or Cruise, they often come back with a very, very positive impression, probably more positive than the technology deserves today. So, I think part of it is that people have to integrate it into their lives, and it’s going to be personal experiences that get people actually comfortable and give them a realistic view of where the technology is.

Mike Vizard: All right, folks, you heard it here: advances are coming. Just remember, there are a lot of people out on the road who shouldn’t be there, either because they don’t have the reflexes they used to, or because they’re just flat-out inebriated. So, maybe self-driving cars will make us all safer one way or the other. John, thanks for being on the show.

John Hayes: Thank you, good to talk.

Mike Vizard: And thank you all for watching the latest episode of the Techstrong AI Leadership Series. You can find this episode and others on our website. We invite you to check them all out. Until then, we’ll see you next time.
