Google has brought together the technology behind the best competitors at the DARPA Grand Challenge and crafted something amazing.
The idea of a self-driving car is about as revolutionary as the horseless carriage, and we've been expecting it in science fiction for a long time now. Now that the technology is highly reliable and close to commercialization, it's worth discussing the possible effects.
Transport of Goods
Trucks have very poor visibility. Truckers have to stop to eat, to use the washroom, and to sleep, and they can make mistakes. On top of all this, they have to be highly trained and, accordingly, highly paid. Goods need to be sent from one place to another, and they always will. Google's automatic cars would remedy nearly every one of these issues: trucks could drive faster, without breaks, without mistakes, and without a per-truck hourly fee for a driver.
It seems likely that this is where automation will strike first. The trucking industry is entirely about moving goods, and any company willing to put vehicles on the road that carry no driver costs, lose no packages to theft or accident, and can send and receive faster than everyone else by virtue of never needing a rest stop will outperform its competitors. Any failing competitor will have to switch to the new model, and the unions won't be able to do anything about it. They will become, sadly for them, irrelevant.
The chief obstacle is the law, which is hazy on whether a driver actually needs to be in a vehicle while it is being driven.
Transport of People, Public
Buses and taxis are a huge amount of very serious business, and steps toward automation have already been taken. Bus tickets are largely purchased online, and there are services like UberCab in San Francisco which let you automatically request the nearest black-car taxi to your location, provide the destination to the driver, watch their approach on Google Maps, then handle payment through your credit card and email you a receipt. No money changes hands, and no words pass between you and the driver if you don't wish it. Everything is automatic. All that's left is to drop the requirement of having a driver.
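The dispatch half of that flow is already easy to picture in code. Here's a minimal sketch of nearest-car matching, purely illustrative: the fleet data below is invented, and a real service like UberCab would match on road-network travel time rather than the straight-line distance used here:

```python
import math

# Toy fleet: car id -> (latitude, longitude). Invented data for illustration.
fleet = {
    "car_1": (37.7749, -122.4194),
    "car_2": (37.7849, -122.4094),
    "car_3": (37.7649, -122.4294),
}

def distance(a, b):
    """Rough straight-line distance between two (lat, lon) points.

    A real dispatcher would use road-network travel time; this is only a sketch.
    """
    return math.hypot(a[0] - b[0], a[1] - b[1])

def nearest_car(rider_location, fleet):
    """Return the id of the closest available car to the rider."""
    return min(fleet, key=lambda car: distance(fleet[car], rider_location))

print(nearest_car((37.7700, -122.4200), fleet))  # -> "car_1"
```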
Buses have a similar set of arguments to trucks: less need to stop, no one to pay, but most importantly, much higher visibility and no mistakes. Imagine a world where bus crashes didn't happen. Wouldn't it be great? A computer system can deliver the best possible response to any situation; human eyes, reaction times, hands, and depth of knowledge simply cannot compete with a properly designed automatic system.
Taxis are the same deal on a smaller scale, but consequently offer a higher proportion of savings. More drivers serving fewer people means the economic pressure to cut drivers out of the mix is far higher than it is for buses. The risks associated with mistakes are similarly high, though perhaps a taxi driver's willingness to ignore particular rules of the road when they seem unimportant is a real part of a company's ability to get people to places fast. Regardless, I think taxi automation will come somewhere between second and third.
Transport of People, Private
Your city. Your neighbourhood. Your road. Your car. No drivers, just vehicles going places. This comes down to an extremely personal level and raises some troubling (and fun) questions. For example, let's say you're drunk. Should your car stop you from driving at all and force you to let it handle everything? What if you're not really drunk, but you've just had a /single/ drink? Further questions about the limits of driver control have to be asked: if studies end up showing that drivers are reliably worse than the cars themselves, should we ever be allowed to drive? Maybe only in inclement weather, or maybe that's precisely when driving would be restricted! We can't know a lot of these things yet, and it's a scary thought that so much could hang in the balance.
Then there's the question of bugs. What if something is wrong somewhere in the software? Certain types of software can be mathematically proven to be free of entire classes of error, and it would be marvellous if those sorts of techniques could be used here. If they cannot, though, who will take responsibility for a failure? Will the engineer who wrote the code shoulder the blame for a bug found in a self-driving car's software? What about the company they work for? Will the driver share blame for not overriding the car when it clearly does something wrong? These are all valid questions. Perhaps an unintended consequence will be that the standardization of software creation, and the guarantees of safety and testing that have long been called for in our industry, will finally materialize, and software development will reach a renaissance.
😛
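To make the verification idea a little more concrete, here's a toy sketch (entirely my own example, nothing to do with Google's actual software) that brute-forces every possible command sequence for a tiny speed controller and asserts the safety property that the limit is never exceeded. Real tools like model checkers and proof assistants establish this kind of guarantee symbolically over astronomically larger state spaces, but the spirit is the same:

```python
from itertools import product

SPEED_LIMIT = 30  # maximum safe speed in this toy model

def step(speed, command):
    """Toy speed controller: apply a command, clamping speed to safe bounds."""
    if command == "accelerate":
        return min(speed + 5, SPEED_LIMIT)
    if command == "brake":
        return max(speed - 5, 0)
    return speed  # "coast"

def verify(max_steps=6):
    """Exhaustively explore every command sequence up to max_steps and
    check the safety property: speed never exceeds SPEED_LIMIT.

    Brute-force enumeration stands in here for what real model checkers
    do symbolically over far larger state spaces.
    """
    commands = ("accelerate", "brake", "coast")
    for sequence in product(commands, repeat=max_steps):
        speed = 0
        for command in sequence:
            speed = step(speed, command)
            assert speed <= SPEED_LIMIT, f"unsafe state reached via {sequence}"
    print(f"all {len(commands) ** max_steps} command sequences stay within the limit")

verify()
```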
Synergies
There are some cool thoughts that arise from all of these things together. Other sorts of vehicle may end up being created which we've never conceived of and which would be impossible without some kind of automated driver. Vehicles will be able to travel at far higher speeds nearly everywhere. Traffic jams can, and will, be erased from memory. Instead we'll have streams of high-speed, unmanned vehicles navigating our graph of roads, communicating to find the optimal routes. Two cars driving side by side down a highway late at night could cooperate if a deer emerged onto the road, one swerving in a mechanically perfect fashion while the other moved out of the way of the primarily affected vehicle. None of this is possible today, but it certainly will be soon.
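To make the "graph of roads" idea concrete, here's a minimal routing sketch using Dijkstra's algorithm over a hypothetical road graph. The intersections and travel times below are invented for illustration; a real planner would work on live, city-scale data:

```python
import heapq

# Hypothetical road graph: intersection -> {neighbour: travel time in seconds}
roads = {
    "A": {"B": 120, "C": 300},
    "B": {"C": 60, "D": 420},
    "C": {"D": 180},
    "D": {},
}

def fastest_route(graph, start, goal):
    """Dijkstra's algorithm: return (total_time, path) for the quickest route."""
    queue = [(0, start, [start])]
    seen = set()
    while queue:
        time, node, path = heapq.heappop(queue)
        if node == goal:
            return time, path
        if node in seen:
            continue
        seen.add(node)
        for neighbour, cost in graph[node].items():
            if neighbour not in seen:
                heapq.heappush(queue, (time + cost, neighbour, path + [neighbour]))
    return None  # no route exists

print(fastest_route(roads, "A", "D"))  # -> (360, ['A', 'B', 'C', 'D'])
```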
Conclusion
Could this change destroy our sense of freedom? Will it just add another failing component to the complex system that is the modern automobile, one that some (but not many) people actually use? Maybe it'll be good enough to use on sunny days in certain locales, but not everywhere. Time will tell. For now, though, Google's got its autos driving around San Francisco, and I think it's very cool stuff. It'll affect jobs, the economy, and our personal lives profoundly.