Why Development of Self-Driving Cars is Stuck in the Slow Lane

Around 1.35 million people die every year on the world’s roads. Bit peak. Since the dawn of the automobile, attempts to reduce road deaths have included mandatory driving tests and mandatory safety features in cars. However, a fundamentally different approach is required. In a 2011 survey, in a stunning example of illusory superiority, fewer than 1% of drivers considered themselves to have below-average driving ability. Clearly, the problem is rooted in allowing people to drive at all.

And there are more issues than just the death rate. Traffic is annoying and costly, and a major cause is cars failing to pull away in unison after slowing or stopping. There is also the financial problem: in the UK, the average used car price is £19,250, while the average salary is only £25,750 (and I’m generously ignoring new cars here). Clearly, a better solution would be to move away from private ownership of cars and towards a world of cheap, autonomous taxis.
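To see why pulling away in unison matters, here is a toy sketch (not a real traffic model, and the numbers are made up): when human drivers restart one after another, each driver's reaction time compounds down the queue, whereas coordinated autonomous cars could, in principle, all move together.

```python
# Toy illustration: delay for the last car in a stopped queue to start moving.
# Human drivers react only once the car in front has moved; connected cars
# could (hypothetically) all start on a single broadcast.

def human_queue_start_delay(n_cars, reaction_time_s=1.5):
    """Each driver's reaction time stacks up down the queue."""
    return n_cars * reaction_time_s

def coordinated_start_delay(n_cars, comms_delay_s=0.1):
    """Hypothetical connected cars: one signal, everyone pulls away together."""
    return comms_delay_s

if __name__ == "__main__":
    queue = 20  # cars stopped at a traffic light
    print(f"Human drivers: last car moves after ~{human_queue_start_delay(queue):.0f}s")
    print(f"Coordinated cars: last car moves after ~{coordinated_start_delay(queue):.1f}s")
```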

There are a few problems holding up the development of self-driving cars. These include creating the technology that enables it, passing legislation that allows the tech to be used on the road, and answering moral questions about the decisions these cars can make. Several approaches have been taken to the first problem. The most popular involves LiDAR (Light Detection And Ranging), which uses lasers reflecting off the surroundings to generate a 3D map of the area and to work out whether the car is about to collide with something. The main problem with LiDAR is that the most popular sensor, produced by Velodyne, costs around $8,000, making it next to impossible to use in an affordable car. Cheaper sensors are available, but their reduced accuracy results in unsafe driving at highway speeds. Another issue is that LiDAR is inherently unable to read road signs or road markings.
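To make the LiDAR idea concrete, here is a minimal sketch of the underlying geometry, with illustrative function names of my own: each laser return gives a range and a beam angle, which converts into a 3D point, and a (very crude) collision check then asks whether any point sits in the corridor directly ahead of the car.

```python
import math

def lidar_return_to_point(range_m, azimuth_rad, elevation_rad):
    """Convert one LiDAR return (range + beam angles) into an (x, y, z) point
    in the sensor frame -- the basic geometry behind building a 3D map."""
    x = range_m * math.cos(elevation_rad) * math.cos(azimuth_rad)
    y = range_m * math.cos(elevation_rad) * math.sin(azimuth_rad)
    z = range_m * math.sin(elevation_rad)
    return x, y, z

def obstacle_ahead(points, corridor_width_m=2.0, lookahead_m=30.0):
    """Crude collision check: does any point lie inside a straight corridor
    directly in front of the car? Real systems plan over curved paths."""
    return any(0.0 < x < lookahead_m and abs(y) < corridor_width_m / 2
               for x, y, _ in points)
```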

As cars and roads were designed with human vision in mind, a vision-based approach makes more sense. Tesla and other companies are working on self-driving using cameras, coupled with some affordable sensors. This is much cheaper than LiDAR, and allows cars to learn object recognition and to read traffic lights, road signs and so on. The challenge here is rooted in the software for interpreting the information from the sensors, essentially an AI which learns how to drive. This requires deep learning, which in turn requires huge datasets. This is where Tesla has a large advantage over the competition, as it has 3 billion miles of real-world Autopilot data (Autopilot is simpler than full self-driving, but the data is still highly valuable). Other companies are relying solely on simulation data, which is cheaper and easier to make progress with, but is less likely to ‘surprise’ the AI (a requirement to avoid overfitting and ensure thorough safety).
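The shape of camera-based perception looks roughly like the sketch below. This is emphatically not Tesla's (proprietary) stack; it just uses an off-the-shelf pretrained detector from torchvision as a stand-in, and the input filename is hypothetical. A frame goes in, and a set of labelled boxes with confidence scores comes out.

```python
# Minimal sketch of camera-based perception, assuming PyTorch/torchvision.
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.io import read_image
from torchvision.transforms.functional import convert_image_dtype

# Generic pretrained detector (COCO classes, not a driving-specific model).
model = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()

frame = convert_image_dtype(read_image("dashcam_frame.jpg"), torch.float)  # hypothetical file
with torch.no_grad():
    detections = model([frame])[0]

# Keep confident detections: bounding boxes, class labels and scores for
# cars, pedestrians, traffic lights, etc.
for box, label, score in zip(detections["boxes"], detections["labels"], detections["scores"]):
    if score > 0.8:
        print(label.item(), round(score.item(), 2), box.tolist())
```

The hard part, of course, is not running a detector on one frame but training and validating a system that drives on the output, which is where the billions of miles of real-world data come in.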

Another approach is to use C-ITS (Co-operative Intelligent Transport Systems). This means focussing not on the car, but on self-driving through communication with the driving environment: for example, construction cones that tell the car where the construction area and temporary lanes are. It would also involve smart traffic lights and car-to-car communication. C-ITS is a good idea, as it greatly reduces how much each individual car has to learn about driving, but its drawback is that every road would need smart infrastructure installed, which would be an expensive and lengthy process.
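Just to illustrate the idea (the message format here is invented, not a real C-ITS standard): the environment tells the car what is ahead, rather than the car having to infer it from sensors.

```python
# Illustrative only: a made-up message from a "smart" construction cone
# broadcasting its position and the temporary lane shift it marks.
import json
import time

def cone_broadcast(cone_id, lat, lon, lane_shift_m):
    return json.dumps({
        "type": "roadworks",
        "sender": cone_id,
        "timestamp": time.time(),
        "position": {"lat": lat, "lon": lon},
        "temporary_lane_shift_m": lane_shift_m,  # how far traffic is diverted
    })

def handle_message(raw):
    msg = json.loads(raw)
    if msg["type"] == "roadworks":
        print(f"Roadworks at {msg['position']}, shift lane by {msg['temporary_lane_shift_m']}m")

handle_message(cone_broadcast("cone-17", 51.5072, -0.1276, 3.5))
```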

Tesla has the lead on self-driving technology, but its progress has been held up legally. Teslas on Autopilot have one crash every 4.3 million miles, compared to one every 0.5 million miles for other cars. Yet every time a Tesla on Autopilot crashes, it makes the news. This suggests that many people have strong feelings about allowing self-driving cars on the road, and these feelings are reinforced by legislation. The general argument is that self-driving cars need to prove themselves safe before they can be tested on the road and potentially risk human lives. But this is seriously holding up progress (and the same argument is not made for human learners). There are also legal issues around liability, insurance, privacy, cybersecurity and ransomware, all of which will need tackling.

Another problem is moral decisions. The classic example is a scenario where a self-driving car must choose between killing a pedestrian or a passenger. A 2016 paper on the social dilemma of autonomous vehicles found, rather perplexingly, that people wanted self-driving cars to value pedestrians over passengers, but that they would not buy self-driving cars programmed this way. A further complication is that global surveys of millions of people (answering this and similar questions) suggest that ethics vary from country to country, and so self-driving ethics must adapt accordingly. However, it is worth noting that scenarios forcing a choice between deaths are incredibly rare, with critics suggesting that it is akin to worrying about how a self-driving car will deal with an asteroid strike.

So, there are a multitude of problems, some of which will be overcome with time, and some of which will require a tremendous engineering effort. But self-driving cars are coming. They’re just stuck in traffic hehehe.
