Robot as Vehicle

Command Line Heroes Team
Tech history



About the episode

Self-driving cars are seemingly just around the corner. But these robots aren't quite ready for the streets. For every hyped-up self-driving showcase, there's a news story about its failure. The good news is that we get closer every year.

Alex Davies steers us through the history of autonomous vehicles. Alex Kendall maps the current self-driving landscape. And Jason Millar takes us under the hood of these robots’ computers to better understand how they make decisions.

Command Line Heroes Team · Red Hat original show

Subscribe

Subscribe here:

Listen on Apple Podcasts · Listen on Spotify · Subscribe via RSS Feed

Transcript

New horizons, new ways of living, wondering, searching, exploring. Far distant view of... [voice fades]. Welcome. Welcome. Futurama exhibit, right this way. See the automobile of the future. By 1960, your vehicle will drive itself. Oh!

The 1939 World's Fair in New York City was crazy optimistic about the future of tech. Five million people visited the Futurama exhibit, where General Motors was laying out a utopian vision of what they called magical motorways: a web of enormous highways that would weave American cities together. And by 1960, cars would be automated. That was the promise.

We're still waiting for those self-driving cars, but that's because getting to Futurama isn't just about improving our vehicles. It's about realizing a robot revolution. I'm Saron Yitbarek. And this is Command Line Heroes, an original podcast from Red Hat.

Let's get one thing straight: yes, self-driving cars are robots. They make independent decisions. They navigate the real world. They might not look like robots in the movies, like R2-D2, but that's something we've seen over and over again this season. The robots we imagined rarely line up with our robots in real life. For our season finale, we've saved a topic with more hype than any other in the world of robotics right now: the self-driving vehicle. For nearly a century, we've been promised cars that drive themselves. And as we work through the last miles on this journey, we're learning just how transformative a robot revolution will be.

The first cars in the 19th century were called horseless carriages. That was the big breakthrough, a vehicle with no horses required. The upside was obvious: no manure on the street and a lot more quote, unquote, "horsepower" under the hood. But there was a downside too. You got rid of a sentient being, the horse, which was helping you stay out of danger. People think, "Oh shoot. We don't have a horse anymore, so I now have to be the driver."

Alex Davies is the author of Driven: The Race to Create the Autonomous Car. And as he tells it, we lost something kind of amazing when we started driving horseless carriages. Of course, you would have someone at the reins telling the horse, go left, go right. But if they fell asleep or fell out of the seat, the horse isn't going to walk into a wall or off a cliff, which is very much what you get when you have a human-driven car.

So from the very dawn of the automobile industry, there is something missing: that extra brain with an awareness for danger. And that's why, early on, we started trying to bring back an external intelligence.

Some of the first examples of self-driving cars you see showed up in the '20s and the '30s, although self-driving is probably a generous description. Some of these were actually radio-controlled vehicles. So you would have someone in another car nearby sending electronic signals to the vehicle to tell it to accelerate, or brake, or turn left or right. Those were sometimes called phantom cars. Bit of a cheat. You've still got a human driver; they're just not in the car.

In the 1950s and '60s, people started experimenting with putting magnets underneath the pavement in highways. And the idea was that the car could follow the magnetic signals, and it would be similar to a train running on tracks. Okay, slightly more autonomous. But still, if it's basically a train running on a track, then why not just take the train? And besides, they weren't about to rip up every mile of highway in the United States and bury magnets underneath.
Norman Bel Geddes, the industrial designer hired by General Motors for the Futurama exhibit at the World's Fair, imagined that by 1960, we'd be zipping across whole countries at the push of a button. Maybe we'd be chauffeured by robots. One way or another, the driving experience would be robotized. But Geddes was wrong. By 1960, we weren't anywhere near his vision.

He was right about the need for robotic aid, though. Driving has always been a dangerous part of our lives. Today, worldwide, there are around 1.3 million fatalities from car crashes every year. Almost all of them are caused by human error. And that's something robots could help solve.

Our self-driving solution got a bit closer in the 1980s, when powerful computers weren't taking up whole rooms anymore. You could fit them into something, say, the size of a car.

The first thing I think you would call a self-driving car came out of Carnegie Mellon University. In the 1980s, they started a program called the Navigational Laboratory, which they abbreviated to Navlab. And they created a whole series of vehicles. Navlab 1 was different from most robots that had been built before it, because it was a moving vehicle that made its own decisions about how to drive. And it was the first robot that was big enough that people could actually sit inside it.

So a faster computer, a smaller computer: these are key if you want to build a self-driving car. But there's another essential ingredient too. And that's what we'll explore next.

Carnegie Mellon's vehicles were groundbreaking, but they were still pretty rudimentary. The Navlab 1 looked like an ambulance that got attacked by a satellite dish. And while it could navigate simple roads in good weather, it wasn't ready for everyday driving. What was needed was better sensors and more sophisticated software. The cars needed to see and understand the world around them in much more detail.

Throughout the '90s and 2000s, the technology continued to improve. Sensors got better, computers got faster, and the software got more sophisticated. But it was still largely a research project. The cars could drive in controlled environments, but they weren't ready for the chaos of real-world traffic.

Then in 2004, something changed. DARPA, the Defense Advanced Research Projects Agency, announced a challenge. They would offer a million-dollar prize to anyone who could build a car that could drive itself across the Mojave Desert. It was called the DARPA Grand Challenge.

The first DARPA Grand Challenge in 2004 was a bit of a disaster. None of the cars made it more than a few miles. The furthest any vehicle got was 7.32 miles out of the 142-mile course. But DARPA tried again the next year, and this time, five cars finished the course.

That second challenge proved that autonomous vehicles were possible. And it sparked a new wave of interest and investment in the technology. Companies like Google started their own self-driving car projects, and suddenly, what had been a niche research area became a major focus of the tech industry.

Today, we're seeing the results of all that investment. Companies like Waymo, Uber, and Tesla are testing self-driving cars on public roads. Some cities have autonomous taxi services. And every major automaker has some kind of self-driving car program. We're really at an inflection point now where the technology is getting good enough that we can start to deploy these systems in the real world.
Alex Kendall is the co-founder and CEO of Wayve, a company that's working on self-driving car technology. He believes we're closer than ever to having truly autonomous vehicles on our roads.

The key breakthrough has been in machine learning and artificial intelligence. We can now train computers to drive by showing them millions of examples of how humans drive. This is much more powerful than trying to program every possible driving scenario by hand.

Machine learning has indeed been a game-changer for self-driving cars. Instead of trying to anticipate every possible situation a car might encounter, engineers can train the car's computer to learn from experience, just like a human driver does.

With machine learning, we can give a car the ability to generalize from the data it's seen to handle new situations that it's never encountered before. This is crucial for driving, because you can never anticipate every possible scenario on the road.

But machine learning also brings new challenges. When a computer makes decisions based on patterns it's learned from data, rather than explicit rules, it can be difficult to understand why it makes certain choices. This is what experts call the "black box" problem.

The challenge with machine learning is that we often don't know why the system is making the decisions it's making. Jason Millar is an assistant professor in the School of Electrical Engineering and Computer Science at the University of Ottawa. He studies the ethical and social implications of autonomous vehicles.

When you have a system that's been trained on millions of examples, and it makes a decision in a particular situation, it can be very difficult to trace back and understand exactly why it made that choice. This can be problematic from a safety and accountability perspective.

This lack of transparency is a real concern when it comes to self-driving cars. If a car makes a mistake that causes an accident, we need to be able to understand why it happened so we can prevent it from happening again. There's a tension between the efficiency and effectiveness of machine learning approaches and the need for transparency and accountability in safety-critical systems like autonomous vehicles.

But it's not just about technical challenges. There are also social and ethical questions that need to be addressed. For example, how should a self-driving car be programmed to behave in a situation where an accident is unavoidable? Should it prioritize the safety of its passengers over pedestrians? These are the kinds of moral dilemmas that engineers are having to grapple with.

These are not just technical problems. They're social and ethical problems. And they require input from ethicists, policymakers, and the public, not just engineers.

The good news is that the self-driving car industry is starting to take these concerns seriously. Companies are investing in research on AI safety and ethics, and there are ongoing discussions about how to regulate and oversee the development of autonomous vehicles.

We need to make sure that as we develop this technology, we're doing it in a way that's safe, ethical, and beneficial for society as a whole. It's not enough to just build cars that can drive themselves. We need to build cars that drive themselves in a way that we can trust and that aligns with our values.

Despite these challenges, progress continues. Self-driving cars are already being tested on public roads in many cities around the world. Some companies are offering limited autonomous taxi services.
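[Editor's note: to make the "millions of examples" idea concrete, here is a minimal behavioral-cloning sketch in Python with PyTorch. Everything in it is an illustrative stand-in, not Wayve's actual system: random tensors in place of camera data, a tiny network in place of a production driving model. It only shows the shape of the technique: learn to map observations to the steering a human driver produced.]

```python
# A minimal behavioral-cloning sketch: learn steering from recorded human
# driving. All data and the network are hypothetical stand-ins.
import torch
import torch.nn as nn

# Pretend each row is a feature vector extracted from a camera frame,
# paired with the steering angle the human driver chose at that moment.
observations = torch.randn(10_000, 64)   # stand-in sensor features
steering = torch.randn(10_000, 1)        # stand-in human steering angles

# A tiny stand-in driving policy: observation in, steering command out.
policy = nn.Sequential(
    nn.Linear(64, 128),
    nn.ReLU(),
    nn.Linear(128, 1),
)
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(10):
    optimizer.zero_grad()
    loss = loss_fn(policy(observations), steering)  # imitate the human
    loss.backward()
    optimizer.step()
```

The appeal of this approach is exactly what Kendall describes: nobody writes a rule for any particular scenario. The trade-off, as the transcript goes on to explain, is that the resulting policy is a black box.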
And the technology continues to improve. We're still not at the point where you can buy a fully self-driving car and just take a nap in the driver's seat. But we're getting closer. The technology is improving rapidly, and I think we'll see significant progress in the next decade.

The question isn't really if self-driving cars will become widespread, but when. And perhaps more importantly, how will they change our world when they do?

I think autonomous vehicles have the potential to transform not just transportation, but our entire relationship with mobility. They could make transportation more efficient, more accessible, and safer. They could reduce the need for car ownership and change how we design our cities.

Indeed, the implications go far beyond just having cars that drive themselves. Self-driving vehicles could reshape urban planning, reduce traffic congestion, make transportation more accessible for people with disabilities, and even help address climate change by making shared, electric transportation more viable.

The really exciting thing about autonomous vehicles is not just that they'll make driving safer or more convenient. It's that they'll enable new ways of thinking about transportation that we haven't even imagined yet.

But there are also potential downsides to consider. Self-driving cars could lead to job losses for professional drivers. They could increase urban sprawl, since people may be more willing to live far from city centers if they can work during their commute. And there are concerns about privacy and surveillance, given that these vehicles will be collecting vast amounts of data about where people go and what they do.

We need to be thoughtful about how we deploy this technology. Just because we can build self-driving cars doesn't mean we should deploy them without careful consideration of the social implications.

These are complex challenges that don't have easy answers. But they're the kinds of challenges that our society will need to work through as we move toward a more automated future. The key is to make sure that as we develop this technology, we're involving all stakeholders in the conversation. We need input from engineers, policymakers, ethicists, and the public to make sure we're building a future that works for everyone.

So, machine learning has become this incredible tool for making sense of the world. It's able to process complexity in a way that would be impossible using traditional rule-based programming. Alex Kendall explains why machine learning is so much more powerful for this kind of task.

If you think about all the edge cases and scenarios that you could encounter when driving, it's just not practical to try to write rules for all of them. You'd need millions of lines of code, and you'd still miss something. With machine learning, you can train a system to handle all of these scenarios automatically.

But it's worth noting that even with all this sophisticated technology, self-driving cars still have limitations. They struggle in certain weather conditions, like heavy snow or rain. They can have difficulty with construction zones or other unexpected situations. And they still require human oversight and intervention in many cases.

We're still in what experts call Level 2 or Level 3 autonomy for most commercial systems. That means the car can handle many driving tasks, but a human driver still needs to be ready to take over at any moment.
Full Level 5 autonomy, where a car can drive itself in all conditions without any human intervention, is still a goal rather than a reality. But significant progress is being made. Every year, these systems get more capable. They can handle more complex scenarios, they work in more weather conditions, and they make fewer mistakes. It's a gradual process, but the trajectory is very clear.

Part of what makes this progress possible is the incredible amount of data that self-driving car companies are collecting. Every mile driven by a test vehicle generates vast amounts of information about how to navigate the real world. The more data we collect, the better our systems become. We're essentially teaching cars to drive by showing them millions of examples of good driving behavior.

But this reliance on machine learning also creates new challenges. Jason Millar points out that when we don't understand exactly how these systems make decisions, it can be harder to trust them or hold them accountable when things go wrong. There's a trade-off between the power and flexibility of machine learning and the transparency and explainability that we might want in a safety-critical system. This is a key challenge facing the industry. How do you build systems that are both powerful enough to handle the complexity of driving and transparent enough that people can understand and trust them?

It would be far easier to get a car to, say, navigate a roundabout using that approach than it would be if you tried to define and code, line by line, all the rules that have to be satisfied in order to navigate a roundabout.

Millar points out that we often don't know why machine learning works. We just know that it does. And that concerns people, who want to be sure that robots, like self-driving cars, always have our best interests at heart. Thankfully, machine learning is not the only tool we can use. It's possible to pair it with more traditional rule-oriented programming too.

You can use more traditional types of programming and very clearly define the rules that a vehicle is going to abide by, or the types of driving characteristics that it's going to have. In which case, you have quite a bit of transparency, at least in terms of how the vehicle will behave and what rules it's following. With machine learning and these kinds of black-box approaches, like the neural net approaches to coding, we don't have that.

A slick piece of machine learning doesn't necessarily tell us why it arrives at a certain behavior. And that matters because... Efficiency and robustness from an engineering perspective doesn't translate directly into trust and trustworthiness from a public regulatory perspective.

So a few transparent pieces of rules-based programming on top of that machine learning can go a long way to engender trust when these cars go out into the wider world. For example, you might use machine learning to let a vehicle teach itself lane changing, but then have an explicit rule that limits how close it can get to other cars.

And so if you're starting with these kinds of abstract principles, the reason you would do that is to signal to people and to regulators, or whoever you're trying to get to trust the system, that look, this system has certain principles designed into it that align with your expectations in terms of an ethical system.

That hybrid approach, dancing between machine learning and more transparent programming, could be a sweet spot, where the amazing robotic future gets welcomed into our everyday lives.
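[Editor's note: here is what that lane-change example might look like as a sketch in Python. The function names, action strings, and the 10-meter threshold are all hypothetical, invented for illustration; the point is only the shape of the hybrid: a black-box policy proposes, and a transparent, hand-written rule gets the final say.]

```python
# A sketch of the hybrid approach Millar describes: a learned policy
# proposes a maneuver, and an explicit, human-readable rule can veto it.
# Every name and number here is hypothetical.

MIN_GAP_METERS = 10.0  # the transparent rule: never merge into a gap under 10 m


def safe_action(learned_policy, sensors):
    """Let the black-box policy propose; let the explicit rule dispose."""
    proposal = learned_policy(sensors)  # e.g. "change_lane_left"
    if proposal == "change_lane_left" and sensors["left_gap_m"] < MIN_GAP_METERS:
        return "hold_lane"  # the rule overrides the learned suggestion
    return proposal


# Example: a stub policy that always wants to change lanes,
# facing a gap that is too small to merge into.
action = safe_action(lambda sensors: "change_lane_left", {"left_gap_m": 4.2})
print(action)  # -> hold_lane
```

The design choice is the one the transcript describes: the learned policy stays free to be as clever as its training data allows, while the rule on top is something a regulator, or a rider, can read and audit in a few lines.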
So whether we get there via machine learning or rule-oriented programming, or some mix of the two, we wanted to finally get an answer to a question you might be asking. When am I going to get my own self-driving car? When do regular people get to snooze in the driver's seat? Our experts all sort of told us the same thing. That's the wrong question.

Right. Think bigger. The question isn't when do I get my self-driving car? The question is how are robotic vehicles going to transform, well, everything? That's what's so exciting about robots: their agency. Their ability to interact with a larger world invites us to think at the biggest scale we can imagine.

I mentioned earlier, when automobiles first showed up, everybody just called them horseless carriages. They were still comparing everything to a 19th-century technology. The same thing happens when we talk about self-driving cars today. We're imagining a driver-free experience, but we're not imagining how the whole paradigm of transportation can change. Here is Alex Davies, one last time.

Just as a horseless carriage was very limited in that it didn't think about all of the things that the car could ultimately do, that it could become this inspiration for art and a version of art itself, and it could drive the creation of the American suburbs, and it could create entirely new sports and ways of moving around the world, I think we don't know very much yet about what the autonomous vehicle can do. Untying the carriage from the horse allowed for a century's worth of innovations. And where we are right now in the progression of the self-driving car is untying the car from the human driver.

Experts told us that someday we may move about in fleets of vehicles owned by the city or companies. And our groceries might travel in autonomous vehicles way more often than people do. Now, take that kind of change and try applying it to everything. The point is, cities and daily life are going to be remade, not just by autonomous vehicles, but by robots that are currently spinning in a piece of simulation software, waiting to be born.

Life is about to change in ways we're only beginning to comprehend. Whether we're talking transportation or healthcare or economics, the change is just as radical as when cars remade the 20th century. And that's kind of awesome. It's one of our generation's greatest engineering challenges. And if we manage to overcome all those computational and theoretical and psychological barriers, there is no telling how far our robots could take us.

All season, we've been trying to separate robot facts from robot fiction. The old hype about what robots can be was often wildly wrong. Hey, give me a beer, would you? My pleasure. But it did propel innovation. And our innovations have delivered a robot reality that's just as fantastic. This season, we've discovered robots that have already become essential coworkers, making jobs safer and more productive. Others offer companionship or replace parts of the human body. And as you've just heard, they're on the verge of remaking the whole field of transportation too.

It's all happening because robotics has massively opened up over the past few decades. Thanks to simulation software like Gazebo, or open source projects like ROS, or competitions like the DARPA Challenge, whole new crowds of command line heroes are joining the field. And personally, I can't wait to see what they dream up next.

I'm Saron Yitbarek. And that's it for Season 8 of Command Line Heroes, but Season 9 is already in the works.
Subscribe wherever you get your podcasts and you won't miss an episode. Until then, keep on coding.

About the show

Command Line Heroes

During its run from 2018 to 2022, Command Line Heroes shared the epic true stories of developers, programmers, hackers, geeks, and open source rebels, and how they revolutionized the technology landscape. Relive our journey through tech history, and use #CommandLinePod to share your favorite episodes.