A self-driving car, also known as an autonomous vehicle (AV), driverless car, or robotic car (robo-car), is a car incorporating vehicular automation, that is, a ground vehicle capable of sensing its environment and moving safely with little or no human input. The technology may eventually affect a wide range of industries and aspects of everyday life.
Self-driving cars combine a variety of sensors to perceive their surroundings, such as thermographic cameras, radar, lidar, sonar, GPS, odometry and inertial measurement units. Advanced control systems interpret sensory information to identify appropriate navigation paths, as well as obstacles and relevant signage.
Possible implementations of the technology include personal self-driving vehicles, shared robotaxis, and connected vehicle platoons. Several projects to develop a fully self-driving commercial car are in various stages of development, but there are no self-driving cars available for everyday consumers.
Autonomy in vehicles is often categorized in six levels, according to a system developed by SAE International (SAE J3016, revised periodically). The SAE levels can be roughly understood as Level 0 - no automation; Level 1 - hands on/shared control; Level 2 - hands off; Level 3 - eyes off; Level 4 - mind off, and Level 5 - steering wheel optional.
As of December 2021, vehicles operating at Level 3 and above remained a marginal portion of the market. Waymo became the first service provider to offer driverless taxi rides to the general public, in a part of Phoenix, Arizona, in 2020; although there is no driver in the car, the vehicles still have remote human overseers. In March 2021, Honda became the first manufacturer to provide a legally approved Level 3 vehicle, and Toyota operated a potentially Level 4 service around the Tokyo 2020 Olympic Village. Nuro was allowed to begin autonomous commercial delivery operations in California in 2021. In December 2021, Mercedes-Benz became the second manufacturer to receive legal approval for a Level 3 vehicle.
In China, two publicly accessible trials of robotaxis have been launched, in 2020 in Shenzhen's Pingshan District by Chinese firm AutoX and in 2021 at Shougang Park in Beijing by Baidu, a venue for the 2022 Winter Olympics.
Main article: History of self-driving cars
Experiments have been conducted on automated driving systems (ADS) since at least the 1920s; trials began in the 1950s. The first semi-automated car was developed in 1977 by Japan's Tsukuba Mechanical Engineering Laboratory; it required specially marked streets that were interpreted by two cameras on the vehicle and an analog computer. The vehicle reached speeds up to 30 kilometres per hour (19 mph) with the support of an elevated rail.
A landmark autonomous car appeared in the 1980s, with Carnegie Mellon University's Navlab and ALV projects funded by the United States' Defense Advanced Research Projects Agency (DARPA) starting in 1984 and Mercedes-Benz and Bundeswehr University Munich's EUREKA Prometheus Project in 1987. By 1985, the ALV had demonstrated self-driving speeds on two-lane roads of 31 kilometres per hour (19 mph), with obstacle avoidance added in 1986, and off-road driving in day and night time conditions by 1987. A major milestone was achieved in 1995, with CMU's NavLab 5 completing the first autonomous coast-to-coast drive of the United States. Of the 2,849 mi (4,585 km) between Pittsburgh, Pennsylvania and San Diego, California, 2,797 mi (4,501 km) were autonomous (98.2%), completed with an average speed of 63.8 mph (102.7 km/h). From the 1960s through the second DARPA Grand Challenge in 2005, automated vehicle research in the United States was primarily funded by DARPA, the US Army, and the US Navy, yielding incremental advances in speeds, driving competence in more complex conditions, controls, and sensor systems. Since then, numerous companies and research organizations have developed prototypes.
The US allocated US$650 million in 1991 for research on the National Automated Highway System, which demonstrated automated driving through a combination of automation embedded in the highway with automated technology in vehicles, and cooperative networking between the vehicles and with the highway infrastructure. The program concluded with a successful demonstration in 1997 but without clear direction or funding to implement the system on a larger scale. Partly funded by the National Automated Highway System and DARPA, the Carnegie Mellon University Navlab drove 4,584 kilometres (2,848 mi) across America in 1995, 4,501 kilometres (2,797 mi) or 98% of it autonomously. Navlab's record achievement stood unmatched for two decades until 2015, when Delphi improved it by piloting an Audi, augmented with Delphi technology, over 5,472 kilometres (3,400 mi) through 15 states while remaining in self-driving mode 99% of the time. In 2015, the US states of Nevada, Florida, California, Virginia, and Michigan, together with Washington, DC, allowed the testing of automated cars on public roads.
From 2016 to 2018, the European Commission funded an innovation strategy development for connected and automated driving through the Coordination Actions CARTRE and SCOUT. Moreover, the Strategic Transport Research and Innovation Agenda (STRIA) Roadmap for Connected and Automated Transport was published in 2019.
In November 2017, Waymo announced that it had begun testing driverless cars without a safety driver in the driver position; however, there was still an employee in the car. An October 2017 report by the Brookings Institution found that $80 billion had been reported as invested in all facets of self-driving technology up to that point, but concluded that it was "reasonable to presume that total global investment in autonomous vehicle technology is significantly more than this."
In October 2018, Waymo announced that its test vehicles had travelled in automated mode for over 10,000,000 miles (16,000,000 km), increasing by about 1,000,000 miles (1,600,000 kilometres) per month. In December 2018, Waymo was the first to commercialize a fully autonomous taxi service in the US, in Phoenix, Arizona. In October 2020, Waymo launched a geo-fenced driverless ride-hailing service in Phoenix. The cars are monitored in real time by a team of remote engineers, who in some cases need to intervene.
In March 2019, ahead of the autonomous racing series Roborace, Robocar set the Guinness World Record for being the fastest autonomous car in the world. In pushing the limits of self-driving vehicles, Robocar reached 282.42 km/h (175.49 mph) – an average confirmed by the UK Timing Association at Elvington in Yorkshire, UK.
In 2020, a National Transportation Safety Board chairman stated that no self-driving cars (SAE level 3+) were available for consumers to purchase in the US in 2020:
There is not a vehicle currently available to US consumers that is self-driving. Period. Every vehicle sold to US consumers still requires the driver to be actively engaged in the driving task, even when advanced driver assistance systems are activated. If you are selling a car with an advanced driver assistance system, you’re not selling a self-driving car. If you are driving a car with an advanced driver assistance system, you don’t own a self-driving car.
On 5 March 2021, Honda began leasing in Japan a limited edition of 100 Legend Hybrid EX sedans equipped with newly approved Level 3 automated driving equipment. The Japanese government had granted safety certification to Honda's autonomous "Traffic Jam Pilot" driving technology, which legally allows drivers to take their eyes off the road.
There is some inconsistency in the terminology used in the self-driving car industry, and various organizations have proposed definitions for an accurate and consistent vocabulary.
Such confusion was documented in SAE J3016 in 2014, which states that "some vernacular usages associate autonomous specifically with full driving automation (Level 5), while other usages apply it to all levels of driving automation, and some state legislation has defined it to correspond approximately to any ADS [automated driving system] at or above Level 3 (or to any vehicle equipped with such an ADS)."
Modern vehicles provide features such as keeping the car within its lane, speed control, or emergency braking. On their own, those features are considered driver assistance technologies because they still require human driver control, whereas fully automated vehicles drive themselves without human driver input.
According to Fortune, some newer vehicles' technology names—such as AutonoDrive, PilotAssist, Full Self-Driving or DrivePilot—might confuse the driver, who may believe no driver input is expected when in fact the driver needs to remain involved in the driving task. According to the BBC, confusion between those concepts leads to deaths.
For this reason, some organizations such as the AAA try to provide standardized naming conventions for features such as ALKS (Automated Lane Keeping Systems), which aims to manage the driving task but has not yet been approved as an automated-vehicle system in any country. The Association of British Insurers considers the use of the word autonomous in marketing for modern cars to be dangerous, because car ads make motorists think 'autonomous' and 'autopilot' mean a vehicle can drive itself, when such cars still rely on the driver to ensure safety. Technology capable of driving a car is still in its beta stage.
Some car makers suggest or claim that their vehicles are self-driving when they are not able to manage some driving situations. Despite being called Full Self-Driving, Tesla has stated that its offering should not be considered a fully autonomous driving system. This risks drivers becoming excessively confident and driving while distracted, leading to crashes. In Great Britain, meanwhile, a fully self-driving car is only a car registered on a specific list. There have also been proposals to apply aviation automation safety knowledge to discussions of the safe implementation of autonomous vehicles, given the experience the aviation sector has gained on safety topics over the decades.
According to the SMMT, "There are two clear states – a vehicle is either assisted with a driver being supported by technology or automated where the technology is effectively and safely replacing the driver."
Autonomous means self-governing. Many historical projects related to vehicle automation have been automated (made automatic) only through heavy reliance on artificial aids in their environment, such as magnetic strips. Autonomous control implies satisfactory performance under significant uncertainties in the environment, and the ability to compensate for system failures without external intervention.
One approach is to implement communication networks both in the immediate vicinity (for collision avoidance) and farther away (for congestion management). Such outside influences in the decision process reduce an individual vehicle's autonomy, while still not requiring human intervention.
As of 2017, most commercial projects focused on automated vehicles that did not communicate with other vehicles or with an enveloping management regime. Euro NCAP defines autonomous in "Autonomous Emergency Braking" as: "the system acts independently of the driver to avoid or mitigate the accident", which implies the autonomous system is not the driver.
In Europe, the words automated and autonomous might be used together. For instance, Regulation (EU) 2019/2144 of the European Parliament and of the Council of 27 November 2019 on type-approval requirements for motor vehicles (...) defines "automated vehicle" and "fully automated vehicle" based on their autonomous capacity:
In British English, the word automated alone can have several meanings, as in the sentence: "Thatcham also found that the automated lane keeping systems could only meet two out of the twelve principles required to guarantee safety, going on to say they cannot, therefore, be classed as ‘automated driving’, instead it claims the tech should be classed as ‘assisted driving’." The first occurrence of the word "automated" refers to a UNECE automated system, while the second refers to the British legal definition of an automated vehicle. British law interprets the meaning of "automated vehicle" based on the interpretation section relating to a vehicle "driving itself" and an insured vehicle.
To enable a car to travel without any driver embedded within the vehicle, some companies use a remote driver.
According to SAE J3016,
Some driving automation systems may indeed be autonomous if they perform all of their functions independently and self-sufficiently, but if they depend on communication and/or cooperation with outside entities, they should be considered cooperative rather than autonomous.
PC Magazine defines a self-driving car as "a computer-controlled car that drives itself." The Union of Concerned Scientists states that self-driving cars are "cars or trucks in which human drivers are never required to take control to safely operate the vehicle. Also known as autonomous or 'driverless' cars, they combine sensors and software to control, navigate, and drive the vehicle."
The British Automated and Electric Vehicles Act 2018 defines a vehicle as "driving itself" if the vehicle "is operating in a mode in which it is not being controlled, and does not need to be monitored, by an individual".
A classification system with six levels – ranging from fully manual to fully automated systems – was published in 2014 by the standardization body SAE International as J3016, Taxonomy and Definitions for Terms Related to On-Road Motor Vehicle Automated Driving Systems; the details are revised periodically. This classification is based on the amount of driver intervention and attentiveness required, rather than the vehicle's capabilities, although these are loosely related. In the United States, the National Highway Traffic Safety Administration (NHTSA) had released its own formal classification system in 2013. After SAE updated its classification in 2016, as J3016_201609, NHTSA adopted the SAE standard, and the SAE classification became widely accepted.
In SAE's automation level definitions, "driving mode" means "a type of driving scenario with characteristic dynamic driving task requirements (e.g., expressway merging, high speed cruising, low speed traffic jam, closed-campus operations, etc.)"
In the formal SAE definition below, an important transition is from SAE Level 2 to SAE Level 3 in which the human driver is no longer expected to monitor the environment continuously. At SAE 3, the human driver still has responsibility to intervene when asked to do so by the automated system. At SAE 4 the human driver is always relieved of that responsibility and at SAE 5 the automated system will never need to ask for an intervention.
|SAE Level|Name|Narrative definition|Execution of steering and acceleration/deceleration|Monitoring of driving environment|Fallback performance of dynamic driving task|System capability (driving modes)|
|---|---|---|---|---|---|---|
|*Human driver monitors the driving environment*|||||||
|0|No Automation|The full-time performance by the human driver of all aspects of the dynamic driving task, even when "enhanced by warning or intervention systems"|Human driver|Human driver|Human driver|n/a|
|1|Driver Assistance|The driving mode-specific execution by a driver assistance system of "either steering or acceleration/deceleration", using information about the driving environment and with the expectation that the human driver performs all remaining aspects of the dynamic driving task|Human driver and system|Human driver|Human driver|Some driving modes|
|2|Partial Automation|The driving mode-specific execution by one or more driver assistance systems of both steering and acceleration/deceleration, using information about the driving environment and with the expectation that the human driver performs all remaining aspects of the dynamic driving task|System|Human driver|Human driver|Some driving modes|
|*Automated driving system monitors the driving environment*|||||||
|3|Conditional Automation|The driving mode-specific performance by an automated driving system of all aspects of the dynamic driving task, with the expectation that the human driver will respond appropriately to a request to intervene|System|System|Human driver|Some driving modes|
|4|High Automation|The driving mode-specific performance by an automated driving system of all aspects of the dynamic driving task, even if a human driver does not respond appropriately to a request to intervene (the car can pull over safely via the guidance system)|System|System|System|Many driving modes|
|5|Full Automation|The full-time performance by an automated driving system of all aspects of the dynamic driving task under all roadway and environmental conditions that can be managed by a human driver|System|System|System|All driving modes|
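The responsibility split in the SAE levels above can be captured as a small lookup. The sketch below (Python; the role labels are chosen for illustration rather than taken from the standard's exact wording) shows how a system might check whether the human driver is still expected to monitor the environment at a given level:

```python
# Illustrative encoding of the SAE J3016 responsibility split per level:
# who executes steering/acceleration, who monitors the environment,
# and who performs the fallback when the system reaches its limits.
SAE_LEVELS = {
    0: {"name": "No Automation",          "execution": "human",  "monitoring": "human",  "fallback": "human"},
    1: {"name": "Driver Assistance",      "execution": "shared", "monitoring": "human",  "fallback": "human"},
    2: {"name": "Partial Automation",     "execution": "system", "monitoring": "human",  "fallback": "human"},
    3: {"name": "Conditional Automation", "execution": "system", "monitoring": "system", "fallback": "human"},
    4: {"name": "High Automation",        "execution": "system", "monitoring": "system", "fallback": "system"},
    5: {"name": "Full Automation",        "execution": "system", "monitoring": "system", "fallback": "system"},
}

def driver_must_monitor(level: int) -> bool:
    """True when the human driver is still responsible for watching the road."""
    return SAE_LEVELS[level]["monitoring"] == "human"

print([lvl for lvl in SAE_LEVELS if driver_must_monitor(lvl)])  # prints [0, 1, 2]
```

Note how the key transition discussed below, from Level 2 to Level 3, shows up as the point where `monitoring` flips from human to system while `fallback` stays with the human.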
The SAE Automation Levels have been criticized for their technological focus. It has been argued that the structure of the levels suggests that automation increases linearly and that more automation is better, which may not always be the case. The SAE Levels also do not account for changes that may be required to infrastructure and road user behavior.
The characteristics of autonomous vehicles, as a digital technology, distinguish them from other types of technologies and vehicles. These characteristics mean autonomous vehicles can be more transformative and more adaptable to change. They include hybrid navigation, homogenization and decoupling, vehicle communication systems, reprogrammability and smartness, digital traces, and modularity.
Main article: Hybrid navigation
Several systems help the self-driving car control the vehicle: car navigation, localization, the electronic map and map matching, global path planning, environment perception (laser, radar, and visual), perception of vehicle speed and direction, and the vehicle control method.
Driverless car designers are challenged with producing control systems capable of analyzing sensory data in order to provide accurate detection of other vehicles and the road ahead. Modern self-driving cars generally use Bayesian simultaneous localization and mapping (SLAM) algorithms, which fuse data from multiple sensors and an off-line map into current location estimates and map updates. Waymo has developed a variant of SLAM with detection and tracking of other moving objects (DATMO), which also handles obstacles such as cars and pedestrians. Simpler systems may use roadside real-time locating system (RTLS) technologies to aid localization. Typical sensors include lidar (light detection and ranging), stereo vision, GPS, and IMUs. Control systems on automated cars may use sensor fusion, an approach that integrates information from a variety of sensors on the car to produce a more consistent, accurate, and useful view of the environment. Heavy rainfall, hail, or snow can impede the car's sensors.
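A minimal sketch of how such sensor fusion works in principle is a one-dimensional Kalman filter that blends an odometry-style motion estimate with a noisy GPS-style position fix. The noise values below are hypothetical, and a production system would track the full vehicle state rather than a single coordinate:

```python
def kalman_fuse(est, est_var, motion, motion_var, meas, meas_var):
    """One predict/update cycle of a 1-D Kalman filter.

    est, est_var       -- prior position estimate and its variance
    motion, motion_var -- odometry: how far the car believes it moved
    meas, meas_var     -- GPS-style absolute position measurement
    """
    # Predict: apply the motion; uncertainty grows.
    est += motion
    est_var += motion_var
    # Update: blend in the measurement, weighted by relative certainty.
    gain = est_var / (est_var + meas_var)
    est += gain * (meas - est)
    est_var *= (1.0 - gain)
    return est, est_var

# Hypothetical numbers: start at 0 m, drive ~1 m per step, GPS is noisy.
est, var = 0.0, 1.0
for gps in [1.1, 2.0, 2.9]:
    est, var = kalman_fuse(est, var, motion=1.0, motion_var=0.1,
                           meas=gps, meas_var=0.5)
# The fused estimate tracks the true position more tightly than either
# the drifting odometry or the noisy GPS would alone.
```

The same weighted-blend idea generalizes to the multidimensional filters used inside SLAM, where the state also includes heading, velocity, and map landmarks.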
Driverless vehicles require some form of machine vision for the purpose of visual object recognition. Automated cars are being developed with deep neural networks, a type of deep learning architecture with many computational stages, or levels, in which simulated neurons respond to inputs derived from the environment and activate the network. The neural network depends on an extensive amount of data extracted from real-life driving scenarios, enabling it to "learn" how to execute the best course of action.
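As a schematic of the "many computational stages" idea, and nothing like a driving-grade network, a forward pass through a tiny fully connected network with arbitrary toy weights looks like this:

```python
def relu(v):
    """Nonlinearity applied between stages: negative activations become 0."""
    return [max(0.0, x) for x in v]

def dense(inputs, weights, biases):
    """One fully connected layer: output_j = sum_i inputs[i]*weights[i][j] + biases[j]."""
    return [sum(i * w[j] for i, w in zip(inputs, weights)) + b
            for j, b in enumerate(biases)]

# Arbitrary toy weights: 3 input features -> 2 hidden units -> 1 output score.
w1 = [[0.5, -0.2], [0.1, 0.4], [-0.3, 0.8]]
b1 = [0.0, 0.1]
w2 = [[1.0], [-0.5]]
b2 = [0.2]

features = [0.9, 0.1, 0.4]        # e.g. normalized sensor-derived features
hidden = relu(dense(features, w1, b1))
score = dense(hidden, w2, b2)[0]  # higher score might mean e.g. "brake"
```

Training consists of adjusting `w1`, `b1`, `w2`, `b2` so that the score matches the correct action across the recorded driving scenarios; real perception networks stack many such stages and millions of weights.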
In May 2018, researchers from the Massachusetts Institute of Technology announced that they had built an automated car that can navigate unmapped roads. Researchers at their Computer Science and Artificial Intelligence Laboratory (CSAIL) have developed a new system, called MapLite, which allows self-driving cars to drive on roads that they have never been on before, without using 3D maps. The system combines the GPS position of the vehicle, a "sparse topological map" such as OpenStreetMap, (i.e. having 2D features of the roads only), and a series of sensors that observe the road conditions.
During the ongoing evolution of the digital era, industry standards have been developed for how to store digital information and in what format. This concept of homogenization also applies to autonomous vehicles. In order to perceive their surroundings, autonomous vehicles have to use different techniques, each with its own accompanying type of digital information (e.g. radar, GPS, motion sensors and computer vision). Homogenization requires that the digital information from these different sources be transmitted and stored in the same form. Their differences are thus decoupled, and digital information can be transmitted, stored, and computed in a way that the vehicles and their operating system can better understand and act upon.
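The homogenization step can be sketched as reducing each sensor's native output to one common record type; the field names and conversion factors below are invented for illustration:

```python
import math
from dataclasses import dataclass

@dataclass
class Detection:
    """Source-agnostic record: every sensor's output is reduced to this form."""
    source: str
    x_m: float        # position ahead of the vehicle, metres
    y_m: float        # lateral offset, metres
    confidence: float

def from_radar(range_m, bearing_rad, snr):
    """Radar natively reports range/bearing; convert to the common x/y form."""
    return Detection("radar",
                     range_m * math.cos(bearing_rad),
                     range_m * math.sin(bearing_rad),
                     min(1.0, snr / 20.0))

def from_camera(px_x, px_y, score, metres_per_px=0.05):
    """Camera natively reports pixels; a made-up flat-ground scale maps them to metres."""
    return Detection("camera", px_y * metres_per_px, px_x * metres_per_px, score)

# Both sensors now feed one homogeneous stream the planner can consume
# without knowing which physical device produced each detection.
stream = [from_radar(20.0, 0.0, 18.0), from_camera(10, 400, 0.9)]
```

Once everything downstream consumes `Detection` records, a sensor can be swapped or upgraded without touching the planner, which is the decoupling the paragraph above describes.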
In the international standardization field, ISO/TC 204 is in charge of information, communication, and control systems for urban and rural surface transportation within the intelligent transport systems (ITS) field. International standards have been actively developed in the domains of AD/ADAS functions, connectivity, human interaction, in-vehicle systems, management/engineering, dynamic maps and positioning, and privacy and security.
Main article: Vehicular communication systems
Individual vehicles can benefit from information obtained from other vehicles in the vicinity, especially information relating to traffic congestion and safety hazards. Vehicular communication systems use vehicles and roadside units as the communicating nodes in a peer-to-peer network, providing each other with information. As a cooperative approach, vehicular communication systems can allow all cooperating vehicles to be more effective. According to a 2010 study by the US National Highway Traffic Safety Administration, vehicular communication systems could help avoid up to 79% of all traffic accidents.
There has so far been no complete implementation of peer-to-peer networking on the scale required for traffic.
In 2012, computer scientists at the University of Texas at Austin began developing smart intersections designed for automated cars. The intersections would have no traffic lights and no stop signs, instead using computer programs that communicate directly with each car on the road. For autonomous vehicles, it is essential to connect with other 'devices' in order to function most effectively. Autonomous vehicles are equipped with communication systems that allow them to communicate with other autonomous vehicles and with roadside units to receive, among other things, information about road work or traffic congestion. In addition, scientists believe that in the future computer programs will connect and manage each individual autonomous vehicle as it navigates through an intersection. These characteristics drive and further develop the ability of autonomous vehicles to understand and cooperate with other products and services (such as intersection computer systems) in the autonomous vehicle market. This could lead to a network of autonomous vehicles all using the same network and the information available on it. Eventually, more autonomous vehicles may use the network because its information has been validated through use by other autonomous vehicles. Such effects, which strengthen the value of the network, are called network externalities.
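A reservation-style intersection manager of the kind described, in which vehicles request time slots instead of obeying lights, can be sketched as follows (the one-slot-at-a-time protocol is a deliberate simplification, not the actual design of the Texas or Crossroads systems):

```python
class IntersectionManager:
    """Grants each arriving vehicle an exclusive time slot through the junction."""

    def __init__(self, slot_s=2.0):
        self.slot_s = slot_s       # time one vehicle occupies the intersection
        self.next_free_t = 0.0     # earliest time the intersection is clear

    def request(self, vehicle_id, arrival_t):
        """V2I request: returns the time the vehicle may enter the intersection."""
        grant_t = max(arrival_t, self.next_free_t)
        self.next_free_t = grant_t + self.slot_s
        return grant_t

mgr = IntersectionManager()
print(mgr.request("car-A", arrival_t=0.0))  # 0.0 -> enters immediately
print(mgr.request("car-B", arrival_t=1.0))  # 2.0 -> waits for car-A's slot
print(mgr.request("car-C", arrival_t=9.0))  # 9.0 -> intersection already free
```

Real proposals grant space-time tiles inside the intersection rather than exclusive whole-junction slots, which is what lets crossing streams of traffic interleave without stopping.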
Among connected cars, an unconnected one is the weakest and will be increasingly banned from busy high-speed roads, as predicted by the Helsinki think tank, Nordic Communications Corporation, in January 2016.
In 2017, researchers from Arizona State University developed a 1/10-scale intersection and proposed an intersection management technique called Crossroads. Crossroads was shown to be very resilient to network delay in V2I communication and to the worst-case execution time of the intersection manager. In 2018, a robust approach was introduced that is resilient to both model mismatch and external disturbances such as wind and bumps.
Vehicle networking may be desirable because of the difficulty computer vision has in recognizing brake lights, turn signals, buses, and similar things. However, the usefulness of such systems would be diminished by the fact that current cars are not equipped with them; they may also pose privacy concerns.
Another characteristic of autonomous vehicles is that the core product places greater emphasis on the software and its possibilities, rather than on the chassis and engine. This is because autonomous vehicles have software systems that drive the vehicle, meaning that updates through reprogramming or editing the software can increase the benefits to the owner (e.g. an update that better distinguishes a blind pedestrian from a sighted one, so that the vehicle takes extra caution when approaching a blind person). A characteristic of this reprogrammable part of autonomous vehicles is that updates need not come only from the supplier: through machine learning, smart autonomous vehicles can generate certain updates and install them accordingly (e.g. new navigation maps or new intersection computer systems). These reprogrammable characteristics of the digital technology and the possibility of smart machine learning give manufacturers of autonomous vehicles the opportunity to differentiate themselves on software. This also implies that autonomous vehicles are never finished, because the product can be continuously improved.
Autonomous vehicles are equipped with different sorts of sensors and radars. As noted, this allows them to connect and interoperate with computers in other autonomous vehicles and/or roadside units, which implies that autonomous vehicles leave digital traces when they connect or interoperate. The data from these digital traces can be used to develop new (yet to be determined) products or updates that enhance autonomous vehicles' driving ability or safety.
Traditional vehicles and their accompanying technologies are manufactured as products that are complete at sale; unlike autonomous vehicles, they can only be improved if they are redesigned or reproduced. Autonomous vehicles, by contrast, are produced but, owing to their digital characteristics, never finished, because they are more modular: they are made up of several modules, as described by the layered modular architecture. This architecture extends the architecture of purely physical vehicles by incorporating four loosely coupled layers (devices, networks, services, and contents) into autonomous vehicles. These loosely coupled layers can interact through certain standardized interfaces.
The consequence of the layered modular architecture of autonomous vehicles (and other digital technologies) is that it enables the emergence and development of platforms and ecosystems around a product and/or certain modules of that product. Traditionally, automotive vehicles were developed, manufactured, and maintained by traditional manufacturers. Nowadays, app developers and content creators can help develop a more comprehensive product experience for consumers, which creates a platform around the product of autonomous vehicles.
The potential benefits from increased vehicle automation described may be limited by foreseeable challenges such as disputes over liability, the time needed to turn over the existing stock of vehicles from non-automated to automated, and thus a long period of humans and autonomous vehicles sharing the roads, resistance by individuals to forfeiting control of their cars, concerns about safety, and the implementation of a legal framework and consistent global government regulations for self-driving cars.
Other obstacles could include de-skilling and lower levels of driver experience for dealing with potentially dangerous situations and anomalies, ethical problems where an automated vehicle's software is forced during an unavoidable crash to choose between multiple harmful courses of action ('the trolley problem'), concerns about making large numbers of people currently employed as drivers unemployed, the potential for more intrusive mass surveillance of location, association and travel as a result of police and intelligence agency access to large data sets generated by sensors and pattern-recognition AI, and possibly insufficient understanding of verbal sounds, gestures and non-verbal cues by police, other drivers or pedestrians.
Possible technological obstacles for automated cars are:
Social challenges include:
Self-driving cars are already confronting the difficulty of determining the intentions of pedestrians, bicyclists, and animals, and models of their behavior must be programmed into driving algorithms. Human road users face the corresponding challenge of determining the intentions of autonomous vehicles, since there is no driver with whom to make eye contact or exchange hand signals. Drive.ai is testing a solution to this problem that involves LED signs mounted on the outside of the vehicle, announcing status such as "going now, don't cross" vs. "waiting for you to cross".
Two human-factor challenges are important for safety. One is the handoff from automated driving to manual driving, which may become necessary due to unfavorable or unusual road conditions, or if the vehicle has limited capabilities. A sudden handoff could leave a human driver dangerously unprepared in the moment. In the long term, humans who have less practice at driving might have a lower skill level and thus be more dangerous in manual mode. The second challenge is known as risk compensation: as a system is perceived to be safer, instead of benefiting entirely from the increased safety, people engage in riskier behavior and enjoy other benefits. Semi-automated cars have been shown to suffer from this problem, for example with users of Tesla Autopilot ignoring the road and using electronic devices or engaging in other activities, against the advice of the company that the car is not capable of being completely autonomous. In the near future, pedestrians and bicyclists may travel in the street in a riskier fashion if they believe self-driving cars are capable of avoiding them.
In order for people to buy self-driving cars and vote for governments to allow them on roads, the technology must be trusted as safe. Self-driving elevators were invented in 1900, but widespread refusal to ride them slowed adoption for several decades, until operator strikes increased demand and trust was built through advertising and features like the emergency stop button. There are three types of trust between human and automation: dispositional trust, the trust between the driver and the company's product; situational trust, which depends on the scenario; and learned trust, which is built up across similar events.
See also: Machine ethics
With the emergence of automated automobiles, various ethical issues arise. While the introduction of automated vehicles to the mass market is said to be inevitable due to a presumed but untestable potential for reduction of crashes by "up to" 90% and their potential greater accessibility to disabled, elderly, and young passengers, a range of ethical issues have been posed.
There are different opinions on who should be held liable in case of a crash, especially when people are hurt. Beyond the fact that the car manufacturer would be the source of the problem when a car crashes due to a technical issue, there is another important reason why car manufacturers could be held responsible: it would encourage them to innovate and invest heavily in fixing those issues, not only to protect their brand image, but also because of financial and criminal consequences. However, others argue that those using or owning the vehicle should be held responsible, since they know the risks involved in using such a vehicle. One study suggests requiring the owners of self-driving cars to sign end-user license agreements (EULAs) assigning them accountability for any accidents. Other studies suggest introducing a tax or insurance that would protect owners and users of automated vehicles from claims made by victims of an accident. Other parties that could be held responsible in case of a technical failure include the software engineers who programmed the code for the automated operation of the vehicles, and the suppliers of components of the AV.
Setting aside questions of legal liability and moral responsibility, the question arises of how automated vehicles should be programmed to behave in an emergency in which either passengers or other road users such as pedestrians, cyclists, and other drivers are endangered. A moral dilemma that a software engineer or car manufacturer might face in programming the operating software is described in a classic ethical thought experiment, the trolley problem: the driver of a trolley can stay on the planned track and run over five people, or turn onto another track where the trolley would kill only one person, assuming there is no traffic on it. The self-driving analogue is a car, carrying passengers, in whose path a person suddenly appears: the car must either run the person over or swerve into a wall, killing the passengers. Two main considerations need to be addressed. First, what moral basis would an automated vehicle use to make decisions? Second, how could that basis be translated into software code? Researchers have suggested two ethical theories in particular as applicable to the behavior of automated vehicles in emergencies: deontology and utilitarianism. Asimov's Three Laws of Robotics are a typical example of deontological ethics: the theory holds that an automated car must follow strict written-out rules in every situation. Utilitarianism instead holds that any decision must be made with the goal of maximizing utility, which requires a definition of utility, for example maximizing the number of people who survive a crash. Critics suggest that automated vehicles should adopt a mix of multiple theories to be able to respond in a morally right way in the event of a crash.
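The gap between moral theory and executable code can be made concrete with a toy sketch. The Python fragment below is entirely hypothetical and vastly simpler than any real planning stack; it merely contrasts a deontological chooser (hard rules that may never be violated) with a utilitarian one (maximize expected survivors), using invented option and rule names:

```python
# Hypothetical illustration only: contrasting two ethical theories as code.
# No real AV planner reasons in these terms; all names and numbers are made up.

def deontological_choice(options, rules):
    """Pick the first option that violates no hard rule, in list order."""
    for option in options:
        if not any(rule(option) for rule in rules):
            return option
    return None  # every option violates some rule; the theory gives no answer

def utilitarian_choice(options):
    """Pick the option expected to preserve the most lives."""
    return max(options, key=lambda o: o["expected_survivors"])

# Contrived scenario: one passenger aboard, three pedestrians ahead.
# Swerving into a wall kills the passenger (3 survive); staying on
# course kills the pedestrians (1 survives).
options = [
    {"name": "swerve_into_wall", "expected_survivors": 3, "harms_passengers": True},
    {"name": "stay_course", "expected_survivors": 1, "harms_passengers": False},
]

# A hard rule in the spirit of deontology: never actively harm the passengers.
rules = [lambda o: o["harms_passengers"]]

print(deontological_choice(options, rules)["name"])  # stay_course
print(utilitarian_choice(options)["name"])           # swerve_into_wall
```

In this contrived case the two theories disagree: the utilitarian chooser sacrifices the passenger to save three pedestrians, while the hard "never harm the passengers" rule keeps the car on course, which is exactly the kind of divergence the debate above turns on.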
Recently, specific ethical frameworks, such as utilitarianism, deontology, relativism, absolutism (monism), and pluralism, have been investigated empirically with respect to the acceptance of self-driving cars in unavoidable accidents.
Many 'trolley' discussions skip over the practical difficulties involved. A probabilistic machine-learning vehicle AI would have to be sophisticated enough to recognize, from instant to instant, that a deep problem of moral philosophy is presenting itself while dynamically projecting into the near future; to determine what sort of moral problem it actually is, if any; to assign relevant weightings in human-value terms to all the other humans involved, who will probably be unreliably identified; and to assess the probable outcomes reliably. These practical difficulties, and those around testing and assessing solutions to them, may present as much of a challenge as the theoretical abstractions.
While most trolley conundrums involve hyperbolic and unlikely fact patterns, it is inevitable that mundane ethical decisions and risk calculations, such as the precise moment a car should yield to a yellow light or how closely it should drive to a bike lane, will need to be programmed into the software of autonomous vehicles. Mundane ethical situations may even be more relevant than rare fatal circumstances because of their specificity and large scope. Mundane situations involving drivers and pedestrians are so prevalent that, in the aggregate, they produce large numbers of injuries and deaths. Hence, even incremental changes to moral algorithms can have a notable effect when considered in their entirety.
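The yellow-light example can be phrased as ordinary kinematics: stop if the car can brake comfortably before the line, proceed if it can clear the intersection before the light turns red. The sketch below is illustrative only; the deceleration limit, the parameter values, and the three-way outcome are assumptions for exposition, not any manufacturer's actual policy:

```python
def yellow_light_decision(speed_mps, dist_to_line_m, yellow_s,
                          intersection_m, max_brake_mps2=3.0):
    """Toy stop/go rule at a yellow light using constant-deceleration
    kinematics. All numbers are illustrative, not a real AV policy."""
    # Distance needed to brake to a halt from the current speed: v^2 / (2a).
    stopping_distance = speed_mps ** 2 / (2 * max_brake_mps2)
    can_stop = stopping_distance <= dist_to_line_m
    # Can the car cross the far side of the intersection before red?
    can_clear = speed_mps * yellow_s >= dist_to_line_m + intersection_m
    if can_stop:
        return "stop"
    if can_clear:
        return "proceed"
    # Neither stopping nor clearing is comfortable: the classic
    # "dilemma zone" that the software must still resolve somehow.
    return "dilemma_zone"

print(yellow_light_decision(15.0, 60.0, 3.5, 20.0))  # stop
print(yellow_light_decision(15.0, 10.0, 3.5, 20.0))  # proceed
```

Even this toy rule shows where the ethics hide: the chosen deceleration limit trades passenger comfort against the risk of entering the intersection late, and some speed/distance combinations admit no comfortable answer at all.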
Privacy-related issues arise mainly from the interconnectivity of automated cars, which makes the car effectively another mobile device that can gather information about an individual (see data mining). The information gathered ranges from tracking of routes taken and voice and video recording to preferences in media consumed in the car, behavioural patterns, and many more streams of information. The data and communications infrastructure needed to support these vehicles may also be capable of surveillance, especially if coupled to other data sets and advanced analytics.
The implementation of automated vehicles in the mass market might cost up to 5 million jobs in the US alone, almost 3% of the workforce. Those jobs include drivers of taxis, buses, vans, trucks, and e-hailing vehicles. Many industries, such as the auto insurance industry, are indirectly affected. That industry alone generates annual revenue of about US$220 billion and supports 277,000 jobs, which, to put it in perspective, is roughly the number of mechanical engineering jobs. The potential loss of a majority of those jobs would have a tremendous impact on the individuals involved.
The Massachusetts Institute of Technology (MIT) has animated the trolley problem in the context of autonomous cars on a website called the Moral Machine. The Moral Machine generates random scenarios in which autonomous cars malfunction and forces the user to choose between two harmful courses of action. MIT's Moral Machine experiment has collected over 40 million decisions from people in 233 countries and territories to ascertain people's moral preferences. The study shows that ethical preferences vary among cultures and demographics and likely correlate with modern institutions and geographic traits.
Global trends from the MIT study show that, overall, people prefer to save the lives of humans over other animals, prioritize the lives of many over few, and spare the young over the old. Men are slightly more likely to spare the lives of women, and the religiously affiliated are slightly more likely to prioritize human life. The lives of criminals were prioritized over those of cats, but dogs were prioritized over criminals. The homeless were spared more often than the elderly, but less often than the obese.
People overwhelmingly express a preference for autonomous vehicles to be programmed with utilitarian ideas, that is, to generate the least harm and minimize driving casualties. This presents a paradox: people prefer that others drive utilitarian vehicles designed to maximize the lives preserved in a fatal situation, but themselves want to ride in cars that prioritize the safety of passengers at all costs. Accordingly, people disapprove of regulations that mandate utilitarian behavior and would be less willing to purchase a self-driving car that may promote the greatest good at the expense of its passengers.
Bonnefon et al. conclude that regulating the ethical prescriptions of autonomous vehicles may be counterproductive for societal safety: if the government mandates utilitarian ethics and people prefer to ride in self-protective cars, the mandate could prevent the large-scale implementation of self-driving cars. Delaying the adoption of autonomous cars undermines the safety of society as a whole, because the technology is projected to save so many lives. This is a paradigmatic example of the tragedy of the commons, in which rational actors cater to their self-interested preferences at the expense of societal utility.
The testing of vehicles with varying degrees of automation can be carried out either physically, in a closed environment or, where permitted, on public roads (typically requiring a license or permit, or adherence to a specific set of operating principles), or virtually, using computer simulations. When driven on public roads, automated vehicles require a person to monitor their proper operation and "take over" when needed. For example, New York State imposes strict requirements on the test driver, so that the vehicle can be corrected at all times by a licensed operator, as highlighted by Cardian Cube Company's application and discussions with New York State officials and the NYS DMV.
Apple is testing self-driving cars, and has increased its fleet of test vehicles from three in April 2017, to 27 in January 2018, and 45 by March 2018.
Russian internet company Yandex started to develop self-driving cars in early 2017 and launched its first driverless prototype in May 2017. In November 2017, Yandex released a video of its AV winter tests, in which the car drove successfully along snowy Moscow roads. In June 2018, a Yandex self-driving vehicle completed a 485-mile (780 km) trip on a federal highway from Moscow to Kazan in autonomous mode. In August 2018, Yandex launched Europe's first robotaxi service with no human driver behind the wheel in the Russian town of Innopolis; by the beginning of 2020, over 5,000 autonomous passenger rides had reportedly been made in the city. At the end of 2018, Yandex obtained a license to operate autonomous vehicles on public roads in the US state of Nevada. In 2019 and 2020, Yandex cars carried out demo rides for Consumer Electronics Show visitors in Las Vegas, circulating the city's streets without any human control. In 2019, Yandex started testing its self-driving cars on public roads in Israel. In October 2019, Yandex became one of the companies selected by the Michigan Department of Transportation (MDOT) to provide autonomous passenger rides to visitors of the 2020 Detroit Auto Show. At the end of 2019, Yandex announced that its self-driving cars had passed 1 million miles in fully autonomous mode in Russia, Israel, and the United States; by February 2020, that figure had doubled to 2 million miles. In 2020, Yandex started to test its self-driving cars in Michigan.
The progress of automated vehicles can be assessed by computing the average distance driven between "disengagements", when the automated system is switched off, typically by the intervention of a human driver. In 2017, Waymo reported 63 disengagements over 352,545 mi (567,366 km) of testing, an average distance of 5,596 mi (9,006 km) between disengagements, the highest among companies reporting such figures. Waymo also travelled a greater total distance than any of the other companies. Their 2017 rate of 0.18 disengagements per 1,000 mi (1,600 km) was an improvement over the 0.2 disengagements per 1,000 mi (1,600 km) in 2016, and 0.8 in 2015. In March 2017, Uber reported an average of just 0.67 mi (1.08 km) per disengagement. In the final three months of 2017, Cruise (now owned by GM) averaged 5,224 mi (8,407 km) per disengagement over a total distance of 62,689 mi (100,888 km). In July 2018, the first electric driverless racing car, "Robocar", completed a 1.8-kilometer track, using its navigation system and artificial intelligence.
California DMV disengagement reports, by reporting year:

| Car maker | 2016: distance between disengagements | 2016: total distance traveled | 2018: distance between disengagements | 2018: total distance traveled | 2019: distance between disengagements | 2019: total distance traveled |
|---|---|---|---|---|---|---|
| Waymo | 5,128 mi (8,253 km) | 635,868 mi (1,023,330 km) | 11,154 mi (17,951 km) | 1,271,587 mi (2,046,421 km) | 11,017 mi (17,730 km) | 1,450,000 mi (2,330,000 km) |
| BMW | 638 mi (1,027 km) | 638 mi (1,027 km) | | | | |
| Nissan | 263 mi (423 km) | 6,056 mi (9,746 km) | 210 mi (340 km) | 5,473 mi (8,808 km) | | |
| Ford | 197 mi (317 km) | 590 mi (950 km) | | | | |
| General Motors | 55 mi (89 km) | 8,156 mi (13,126 km) | 5,205 mi (8,377 km) | 447,621 mi (720,376 km) | 12,221 mi (19,668 km) | 831,040 mi (1,337,430 km) |
| Aptiv | 15 mi (24 km) | 2,658 mi (4,278 km) | | | | |
| Tesla | 3 mi (4.8 km) | 550 mi (890 km) | | | | |
| Mercedes-Benz | 2 mi (3.2 km) | 673 mi (1,083 km) | 1.5 mi (2.4 km) | 1,749 mi (2,815 km) | | |
| Bosch | 7 mi (11 km) | 983 mi (1,582 km) | | | | |
| Zoox | | | 1,923 mi (3,095 km) | 30,764 mi (49,510 km) | 1,595 mi (2,567 km) | 67,015 mi (107,850 km) |
| Nuro | | | 1,028 mi (1,654 km) | 24,680 mi (39,720 km) | 2,022 mi (3,254 km) | 68,762 mi (110,662 km) |
| Pony.ai | | | 1,022 mi (1,645 km) | 16,356 mi (26,322 km) | 6,476 mi (10,422 km) | 174,845 mi (281,386 km) |
| Baidu (Apolong) | | | 206 mi (332 km) | 18,093 mi (29,118 km) | 18,050 mi (29,050 km) | 108,300 mi (174,300 km) |
| Aurora | | | 100 mi (160 km) | 32,858 mi (52,880 km) | 280 mi (450 km) | 39,729 mi (63,938 km) |
| Apple | | | 1.1 mi (1.8 km) | 79,745 mi (128,337 km) | 118 mi (190 km) | 7,544 mi (12,141 km) |
| Uber | | | 0.4 mi (0.64 km) | 26,899 mi (43,290 km) | 0 mi (0 km) | |
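The disengagement metrics above reduce to simple arithmetic. This small helper, written for illustration, reproduces Waymo's 2017 figures cited earlier: roughly 5,596 miles between disengagements, or 0.18 disengagements per 1,000 miles, from 63 disengagements over 352,545 miles:

```python
def miles_per_disengagement(total_miles, disengagements):
    """Average miles driven between disengagements (higher is better)."""
    if disengagements == 0:
        return float("inf")  # a clean record over the reporting period
    return total_miles / disengagements

def disengagements_per_1000_miles(total_miles, disengagements):
    """The inverse rate, as quoted in the California DMV reports."""
    return 1000 * disengagements / total_miles

# Waymo's 2017 testing figures cited in the text.
print(round(miles_per_disengagement(352_545, 63)))           # 5596
print(round(disengagements_per_1000_miles(352_545, 63), 2))  # 0.18
```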
In October 2021, L3Pilot, Europe's first comprehensive pilot test of automated driving on public roads, demonstrated automated systems for cars in Hamburg, Germany, in conjunction with the ITS World Congress 2021. SAE Level 3 and 4 functions were tested on ordinary roads.
Main article: Autonomous truck
Further information: Online food ordering
Companies such as Otto and Starsky Robotics have focused on autonomous trucks. Automation of trucks is important not only for the improved safety of these very heavy vehicles, but also for the fuel savings made possible by platooning. Autonomous vans are being used by online grocers such as Ocado.
Research has also indicated that goods distribution at the macro level (urban distribution) and micro level (last-mile delivery) could be made more efficient by autonomous vehicles, thanks to the possibility of smaller vehicle sizes.
China trialled its first automated public bus in Henan province in 2015, on a highway linking Zhengzhou and Kaifeng. Baidu and King Long produce an automated minibus, a vehicle with 14 seats but no driver's seat. With 100 vehicles produced, 2018 was expected to be the first year with commercial automated service in China.
In Europe, cities in Belgium, France, Italy, and the UK are planning to operate transport systems for automated cars, while Germany, the Netherlands, and Spain have allowed public testing in traffic. In 2015, the UK launched public trials of the LUTZ Pathfinder automated pod in Milton Keynes. Beginning in summer 2015, the French government allowed PSA Peugeot-Citroën to conduct trials in real conditions in the Paris area, with experiments planned to be extended to other cities such as Bordeaux and Strasbourg by 2016. The alliance between the French companies Thales and Valeo (provider of the first self-parking system, which equips premium Audi and Mercedes models) is testing its own system. New Zealand is planning to use automated vehicles for public transport in Tauranga and Christchurch.
See also: Tesla Autopilot § Fatal crashes
In mid-October 2015, Tesla Motors rolled out version 7 of its software in the US, which included the Tesla Autopilot capability. On 9 January 2016, Tesla rolled out version 7.1 as an over-the-air update, adding a new "summon" feature that allows cars to self-park or retrieve themselves at parking locations without the driver in the car. As of November 2020[update], Tesla's automated driving features are classified as a Level 2 driver assistance system under the Society of Automotive Engineers' (SAE) six levels of vehicle automation. At this level the car can be automated but requires the full attention of the driver, who must be prepared to take control at a moment's notice; Autopilot will sometimes fail to detect lane markings and disengage itself while alerting the driver.
On 20 January 2016, the first of five known fatal crashes of a Tesla with Autopilot occurred, in China's Hubei province. According to China's 163.com news channel, this marked "China's first accidental death due to Tesla's automatic driving (system)". Initially, Tesla pointed out that the vehicle was so badly damaged from the impact that its recorder could not conclusively prove the car had been on Autopilot at the time; however, 163.com noted that other factors, such as the car's complete failure to take any evasive action before the high-speed crash and the driver's otherwise good driving record, suggested a strong likelihood that the car was on Autopilot at the time. A similar fatal crash occurred four months later in Florida. In 2018, in a subsequent civil suit between the father of the driver killed and Tesla, Tesla did not deny that the car had been on Autopilot at the time of the accident, and sent the victim's father evidence documenting that fact.
The second known fatal accident involving a vehicle driving itself took place in Williston, Florida, on 7 May 2016, while a Tesla Model S electric car was engaged in Autopilot mode. The occupant was killed in a crash with an 18-wheel tractor-trailer. On 28 June 2016, the US National Highway Traffic Safety Administration (NHTSA) opened a formal investigation into the accident, working with the Florida Highway Patrol. According to NHTSA, preliminary reports indicated the crash occurred when the tractor-trailer made a left turn in front of the Tesla at an intersection on a non-controlled-access highway, and the car failed to apply the brakes. The car continued to travel after passing under the truck's trailer. NHTSA's preliminary evaluation was opened to examine the design and performance of any automated driving systems in use at the time of the crash, covering a population of an estimated 25,000 Model S cars. On 8 July 2016, NHTSA requested that Tesla Motors provide detailed information about the design, operation, and testing of its Autopilot technology. The agency also requested details of all design changes and updates to Autopilot since its introduction, as well as Tesla's planned update schedule for the next four months.
According to Tesla, "neither autopilot nor the driver noticed the white side of the tractor-trailer against a brightly lit sky, so the brake was not applied." The car attempted to drive at full speed under the trailer, "with the bottom of the trailer impacting the windshield of the Model S". Tesla also stated that this was its first known Autopilot death in over 130 million miles (210 million km) driven by its customers with Autopilot engaged; by this statement, Tesla was apparently declining to acknowledge claims that the January 2016 fatality in Hubei, China, had also resulted from an Autopilot system error. According to Tesla, there is a fatality every 94 million miles (151 million km) among all types of vehicles in the US. However, that figure covers all crash fatalities, including, for instance, those of motorcyclists and pedestrians.
In July 2016, the US National Transportation Safety Board (NTSB) opened a formal investigation into the fatal accident that occurred while Autopilot was engaged. The NTSB is an investigative body that has the power to make only policy recommendations. An agency spokesman said, "It's worth taking a look and seeing what we can learn from that event, so that as that automation is more widely introduced we can do it in the safest way possible." In January 2017, NHTSA released a report concluding that Tesla was not at fault; the investigation found that for Tesla cars, the crash rate had dropped by 40 percent after Autopilot was installed.
In 2021, the NTSB chair called on Tesla to change the design of its Autopilot to ensure that it cannot be misused by drivers, in a letter sent to the company's CEO.
Waymo originated as a self-driving car project within Google. In August 2012, Google announced that its vehicles had completed over 300,000 automated-driving miles (500,000 km) accident-free, typically with about a dozen cars on the road at any given time, and that it was starting to test with single drivers instead of pairs. In late May 2014, Google revealed a new prototype with no steering wheel, gas pedal, or brake pedal that was fully automated. As of March 2016[update], Google had test-driven its fleet in automated mode a total of 1,500,000 mi (2,400,000 km). In December 2016, Google announced that its self-driving technology would be spun off into a new company called Waymo, a fellow subsidiary of Google's parent company, Alphabet.
According to Google's accident reports, as of early 2016 its test cars had been involved in 14 collisions, other drivers being at fault in 13 of them, although in 2016 the car's software caused a crash.
In June 2015, Sergey Brin confirmed that 12 vehicles had suffered collisions as of that date: eight were rear-ended at a stop sign or traffic light, two were side-swiped by another driver, one involved another driver rolling through a stop sign, and one occurred while a Google employee was controlling the car manually. In July 2015, three Google employees suffered minor injuries when their vehicle was rear-ended by a car whose driver failed to brake at a traffic light; this was the first collision to result in injuries. On 14 February 2016, a Google vehicle struck a bus while attempting to avoid sandbags blocking its path. Google stated, "In this case, we clearly bear some responsibility, because if our car hadn't moved, there wouldn't have been a collision." Google characterized the crash as a misunderstanding and a learning experience. No injuries were reported in the crash.
See also: Uber § Former operations
In March 2017, an Uber Advanced Technologies Group test vehicle was involved in a crash in Tempe, Arizona, when another car failed to yield, flipping the Uber vehicle. There were no injuries in the accident. By 22 December 2017, Uber had completed 2 million miles (3.2 million km) in automated mode.
In March 2018, Elaine Herzberg became the first pedestrian killed by a self-driving car in the United States, after being hit by an Uber vehicle, also in Tempe. Herzberg was crossing outside of a crosswalk, approximately 400 feet from an intersection. As the first known fatality caused by an autonomous vehicle, the crash raised questions about regulation of the burgeoning self-driving car industry. Some experts say a human driver could have avoided the fatal crash. Arizona Governor Doug Ducey later suspended the company's ability to test and operate its automated cars on public roadways, citing an "unquestionable failure" of the expectation that Uber make public safety its top priority. Uber pulled out of all self-driving-car testing in California as a result of the accident. On 24 May 2018, the US National Transportation Safety Board issued a preliminary report.
In September 2020, according to the BBC, the backup driver was charged with negligent homicide because she had not watched the road for several seconds while streaming the television show The Voice via Hulu. Uber did not face criminal charges because in the US there is no basis for criminal liability of the corporation. The driver was deemed responsible for the accident because she was in the driver's seat and in a position to avoid it (as at Level 3). The trial was planned for February 2021.[needs update]
On 9 November 2017, a Navya Arma automated self-driving bus carrying passengers was involved in a crash with a truck. The truck was found to be at fault for the crash, having reversed into the stationary automated bus. The automated bus did not take evasive action or apply defensive driving techniques such as flashing its headlights or sounding the horn. As one passenger commented, "The shuttle didn't have the ability to move back. The shuttle just stayed still."
On 26 August 2021, a Toyota e-Palette, a mobility vehicle used to support transportation within the Athletes' Village at the Tokyo 2020 Olympic and Paralympic Games, collided with a visually impaired pedestrian who was about to cross at a pedestrian crossing. Operation was suspended after the accident and resumed on 31 August with improved safety measures.
In a 2011 online survey of 2,006 US and UK consumers by Accenture, 49% said they would be comfortable using a "driverless car".
A 2012 survey of 17,400 vehicle owners by J.D. Power and Associates found 37% initially said they would be interested in purchasing a "fully autonomous car". However, that figure dropped to 20% if told the technology would cost US$3,000 more.
In a 2012 survey of about 1,000 German drivers by automotive researcher Puls, 22% of the respondents had a positive attitude towards these cars, 10% were undecided, 44% were sceptical and 24% were hostile.
A 2013 survey of 1,500 consumers across 10 countries by Cisco Systems found 57% "stated they would be likely to ride in a car controlled entirely by technology that does not require a human driver", with Brazil, India and China the most willing to trust automated technology.
In a 2014 US telephone survey by Insurance.com, over three-quarters of licensed drivers said they would at least consider buying a self-driving car, rising to 86% if car insurance were cheaper. 31.7% said they would not continue to drive once an automated car was available instead.
In a February 2015 survey of top auto journalists, 46% predicted that either Tesla or Daimler would be the first to market with a fully autonomous vehicle, and Daimler (at 38%) was predicted to produce the most functional, safe, and in-demand autonomous vehicle.
In 2015, a questionnaire survey by Delft University of Technology explored the opinions of 5,000 people from 109 countries on automated driving. Results showed that respondents, on average, found manual driving the most enjoyable mode of driving, and that 22% did not want to spend any money on a fully automated driving system. Respondents were most concerned about software hacking/misuse, and were also concerned about legal issues and safety. Finally, respondents from more developed countries (in terms of lower accident statistics, higher education, and higher income) were less comfortable with their vehicle transmitting data. The survey also gauged interest in purchasing an automated car: 37% of surveyed current owners were either "definitely" or "probably" interested.
In 2016, a survey in Germany examined the opinion of 1,603 people, who were representative in terms of age, gender, and education for the German population, towards partially, highly, and fully automated cars. Results showed that men and women differ in their willingness to use them. Men felt less anxiety and more joy towards automated cars, whereas women showed the exact opposite. The gender difference towards anxiety was especially pronounced between young men and women but decreased with participants' age.
A 2016 PwC survey of 1,584 people in the United States found that "66 percent of respondents said they think autonomous cars are probably smarter than the average human driver". People remained worried about safety, particularly the possibility of the car being hacked. Nevertheless, only 13% of interviewees saw no advantages in this new kind of car.
In 2017, Pew Research Center surveyed 4,135 US adults from 1–15 May and found that many Americans anticipate significant impacts from various automation technologies in the course of their lifetimes—from the widespread adoption of automated vehicles to the replacement of entire job categories with robot workers.
In 2019, results from two opinion surveys of 54 and 187 US adults respectively were published. A new standardised questionnaire, the autonomous vehicle acceptance model (AVAM) was developed, including additional description to help respondents better understand the implications of different automation levels. Results showed that users were less accepting of high autonomy levels and displayed significantly lower intention to use highly autonomous vehicles. Additionally, partial autonomy (regardless of level) was perceived as requiring uniformly higher driver engagement (usage of hands, feet and eyes) than full autonomy.
See also: Regulation of algorithms
The Geneva Convention on Road Traffic, subscribed to by over 100 countries worldwide, requires the driver to be at least 18 years old.
The 1968 Vienna Convention on Road Traffic, subscribed to by 83 countries worldwide, establishes principles to govern traffic laws. One of the convention's fundamental principles had been that a driver is always fully in control of, and responsible for, the behavior of a vehicle in traffic. In 2016, a reform of the convention opened possibilities for automated features in the countries that ratified it.
In February 2018, UNECE's Inland Transport Committee (ITC) acknowledged the importance of WP.29 activities related to automated, autonomous, and connected vehicles and requested that WP.29 consider establishing a dedicated Working Party. Following the request, WP.29, at its June 2018 session, decided to convert the Working Party on Brakes and Running Gear (GRRF) into a new Working Party on Automated/Autonomous and Connected Vehicles (GRVA).
In June 2020, a WP.29 virtual meeting approved reports from GRVA on its fifth session, on "automated/autonomous and connected vehicles", and its sixth session, on "cyber security and software updates", meaning that a UN regulation covering Level 3 was established.
In the first half of 2022, UNECE Regulation 157 was to enter into force for cars in some countries on 22 January 2022. In the second half of 2022, amendments to Article 1 and a new Article 34 bis of the 1968 Convention on Road Traffic were to enter into force on 14 July 2022, unless rejected before 13 January 2022.
Japan is not a signatory to the Vienna Convention. In 2019, Japan amended two laws, the "Road Traffic Act" and the "Road Transport Vehicle Act", which came into effect in April 2020. Under the former, Level 3 self-driving cars became allowed on public roads. The latter legally defined the process for designating types for safety certification of the Level 3 self-driving function of an Automated Driving System (ADS), along with the certification process for the designated type. Throughout the amendment process, the achievements of the national project "SIP-adus", led by the Cabinet Office since 2014, were fully considered and accepted.
In 2020, the next-stage national roadmap was officially issued, taking into account the social deployment and acceptability of Level 4. At the end of 2020, the Ministry of Land, Infrastructure, Transport and Tourism (MLIT) amended its "Safety Regulation for Road Transport Vehicle" to reflect the work of UNECE WP.29 GRVA on cyber security and software updates; the regulation came into effect in January 2021.
In April 2021, the National Police Agency (NPA) published its expert committee's FY 2020 report summarizing the issues in realizing Level 4 mobility services, including required legal amendments. During the summer of 2021, the Ministry of Economy, Trade and Industry (METI) and MLIT prepared to launch a project, "RoAD to the L4", covering both R&D and social deployment to realize acceptable Level 4 mobility services, and updated its public information in September. As part of this project, civil-law liability questions reflecting the changed roles are to be clarified.
Regarding misleading representations in marketing, Article 5 of the "Act against Unjustifiable Premiums and Misleading Representations" applies.
In 2022, the NPA plans to submit an amendment bill on the "Road Traffic Act" to the Diet in the next ordinary session, to include an approval scheme for Level 4 services.
In the United States, a non-signatory country to the Vienna Convention, state vehicle codes generally do not envisage—but do not necessarily prohibit—highly automated vehicles as of 2012[update]. To clarify the legal status of and otherwise regulate such vehicles, several states have enacted or are considering specific laws. By 2016, seven states (Nevada, California, Florida, Michigan, Hawaii, Washington, and Tennessee), along with the District of Columbia, have enacted laws for automated vehicles. Incidents such as the first fatal accident by Tesla's Autopilot system have led to discussion about revising laws and standards for automated cars.
In September 2016, the US National Economic Council and US Department of Transportation (USDOT) released the Federal Automated Vehicles Policy, which are standards that describe how automated vehicles should react if their technology fails, how to protect passenger privacy, and how riders should be protected in the event of an accident. The new federal guidelines are meant to avoid a patchwork of state laws, while avoiding being so overbearing as to stifle innovation. Since then, USDOT has released multiple updates:
The National Highway Traffic Safety Administration released for public comment the Occupant Protection for Automated Driving System on 30 March 2020, followed by the Framework for Automated Driving System Safety on 3 December 2020. Occupant Protection is intended to modernize the Federal Motor Vehicle Safety Standards in light of the removal of manual controls in vehicles with automated driving systems, while the Framework document is intended to provide an objective way to define and assess automated driving system competence, ensuring motor vehicle safety while remaining flexible enough to accommodate the development of new safety features.
In June 2011, the Nevada Legislature passed a law to authorize the use of automated cars. Nevada thus became the first jurisdiction in the world where automated vehicles might be legally operated on public roads. According to the law, the Nevada Department of Motor Vehicles is responsible for setting safety and performance standards and the agency is responsible for designating areas where automated cars may be tested. This legislation was supported by Google in an effort to legally conduct further testing of its Google driverless car. The Nevada law defines an automated vehicle to be "a motor vehicle that uses artificial intelligence, sensors and global positioning system coordinates to drive itself without the active intervention of a human operator". The law also acknowledges that the operator will not need to pay attention while the car is operating itself. Google had further lobbied for an exemption from a ban on distracted driving to permit occupants to send text messages while sitting behind the wheel, but this did not become law. Furthermore, Nevada's regulations require a person behind the wheel and one in the passenger's seat during tests.
In April 2012, Florida became the second state to allow the testing of automated cars on public roads.
California became the third state to allow automated car testing when Governor Jerry Brown signed SB 1298 into law in September 2012 at Google Headquarters in Mountain View.
On 19 February 2016, Assembly Bill 2866 was introduced in California; it would allow automated vehicles to operate on public roads, including vehicles without a driver, steering wheel, accelerator pedal, or brake pedal. The bill states that the California Department of Motor Vehicles would need to comply with these regulations by 1 July 2018 for the rules to take effect. As of November 2016[update], the bill had yet to pass its house of origin. California published discussions on the proposed federal automated vehicles policy in October 2016.
In December 2016, the California Department of Motor Vehicles ordered Uber to remove its self-driving vehicles from the road in response to two red-light violations. Uber immediately blamed the violations on human error and suspended the drivers involved.
In Washington, DC's district code:
"Autonomous vehicle" means a vehicle capable of navigating District roadways and interpreting traffic-control devices without a driver actively operating any of the vehicle's control systems. The term "autonomous vehicle" excludes a motor vehicle enabled with active safety systems or driver-assistance systems, including systems to provide electronic blind-spot assistance, crash avoidance, emergency braking, parking assistance, adaptive cruise control, lane-keep assistance, lane-departure warning, or traffic-jam and queuing assistance, unless the system alone or in combination with other systems enables the vehicle on which the technology is installed to drive without active control or monitoring by a human operator.
The same district code provides that:
An autonomous vehicle may operate on a public roadway; provided, that the vehicle:
- (1) Has a manual override feature that allows a driver to assume control of the autonomous vehicle at any time;
- (2) Has a driver seated in the control seat of the vehicle while in operation who is prepared to take control of the autonomous vehicle at any moment; and
- (3) Is capable of operating in compliance with the District's applicable traffic laws and motor vehicle laws and traffic control devices.
In December 2013, Michigan became the fourth state to allow testing of driverless cars on public roads. In July 2014, the city of Coeur d'Alene, Idaho adopted a robotics ordinance that includes provisions to allow for self-driving cars.
In 2013, the government of the United Kingdom permitted the testing of automated cars on public roads. Before this, all testing of robotic vehicles in the UK had been conducted on private property. In March 2019, the UK became a signatory country to the Vienna Convention.
As of 2021[update], the UK is working on a new law proposal to allow self-driving automated lane keeping systems (ALKS) at up to 37 mph (60 km/h), following a mixed reaction from experts during the consultation launched in summer 2020. The system would be allowed to hand control back to the driver when "unplanned events" such as road construction or inclement weather occur. The Centre for Connected and Autonomous Vehicles (CCAV) has asked the Law Commission of England and Wales and the Scottish Law Commission to undertake a far-reaching review of the legal framework for "automated" vehicles and their use as part of public transport networks and on-demand passenger services. The teams are developing policy, and the final report is due in the final quarter of 2021.
Regarding misleading representations in marketing, the Society of Motor Manufacturers and Traders (SMMT) has published a set of guiding principles.
In 2014, the Government of France announced that testing of automated cars on public roads would be allowed in 2015. 2,000 km of roads would be opened across the national territory, notably in Bordeaux, Isère, Île-de-France and Strasbourg. At the 2015 ITS World Congress, a conference dedicated to intelligent transport systems, the very first demonstration of automated vehicles on open roads in France was carried out in Bordeaux in early October 2015.
In 2015, a preemptive lawsuit against various automobile companies such as GM, Ford, and Toyota accused them of "hawking vehicles that are vulnerable to hackers who could hypothetically wrest control of essential functions such as brakes and steering."
In the spring of 2015, the Federal Department of the Environment, Transport, Energy and Communications (UVEK) in Switzerland allowed Swisscom to test a driverless Volkswagen Passat on the streets of Zurich.
As of April 2017, public-road tests of development vehicles are permitted in Hungary; in addition, construction of a closed test track suitable for testing highly automated functions, the ZalaZone test track, is under way near the city of Zalaegerszeg.
Regulation (EU) 2019/2144 of the European Parliament and of the Council of 27 November 2019 on type-approval requirements for motor vehicles defines specific requirements relating to automated vehicles and fully automated vehicles. This law is applicable from 2022 and is based on uniform procedures and technical specifications for the systems and other items.
In July 2021 in Germany, the Federal Act Amending the Road Traffic Act and the Compulsory Insurance Act (Autonomous Driving Act) came into effect. The Act allows motor vehicles with autonomous driving capabilities, that is, vehicles that can perform driving tasks independently without a person driving, to operate in specified operating areas on public roads. Its provisions on autonomous driving in defined operating areas correspond to Level 4.
In 2016, the Singapore Land Transport Authority, in partnership with UK automotive supplier Delphi Automotive, began preparations for a trial fleet of automated taxis providing an on-demand automated cab service, planned to begin in 2017.
In 2017, the South Korean government stated that the lack of universal standards was preventing it from pushing new domestic rules; once international standards are settled, South Korea's legislation is expected to follow them.
In 2018, China introduced regulations governing autonomous cars at the conditional-, high- and full-automation levels (SAE Levels 3, 4 and 5).
The rules lay out requirements that vehicles must first be tested in non-public zones, that road tests can only be on designated streets and that a qualified person must always sit in the driver’s position, ready to take over control.— Reuters
Chinese regulation gives the Ministry of Industry and Information Technology (MIIT), the Ministry of Public Security (MPS) and the Ministry of Transport (MOT) regulatory competence.
Chinese regulation mandates remote-monitoring capability and the capacity to record, analyze and reconstruct incidents involving the test vehicles.
the National Rules further require that the testing applicant should have the financial capability for personal injury and property damage during the testing.— Chinalawinsight.
Test drivers are required to have at least three years of unblemished driving experience.
Automated vehicles are required to have the capacity to automatically record and store data covering the 90 seconds before an accident or malfunction, and those data must be retained for at least three years.
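In engineering terms, this pre-crash recording requirement amounts to keeping a rolling window of the most recent telemetry. The regulation does not prescribe an implementation; as an illustrative sketch only (the one-snapshot-per-second rate and the snapshot structure are assumptions), a fixed-size ring buffer achieves it:

```python
from collections import deque

# Illustrative only: keep the most recent 90 seconds of snapshots,
# assuming one snapshot per second (rate is a hypothetical choice).
SNAPSHOT_RATE_HZ = 1
WINDOW_SECONDS = 90

class PreCrashRecorder:
    def __init__(self):
        # A deque with maxlen silently discards the oldest entry
        # each time a new one is appended past capacity.
        self.buffer = deque(maxlen=SNAPSHOT_RATE_HZ * WINDOW_SECONDS)

    def record(self, snapshot):
        self.buffer.append(snapshot)

    def dump_on_event(self):
        # On accident or malfunction, the window would be persisted
        # to long-term storage (for the mandated three years).
        return list(self.buffer)

recorder = PreCrashRecorder()
for t in range(200):          # simulate 200 seconds of driving
    recorder.record({"t": t})
saved = recorder.dump_on_event()
print(len(saved), saved[0]["t"], saved[-1]["t"])  # 90 110 199
```

Only the last 90 snapshots survive; everything older has already been evicted, which is the point of the rolling-window design.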
In 2021, China plans to add highways to the list of roads where provincial and city-level authorities can authorize automated cars.
In 2021, NIO manufactures cars with autonomous-driving systems at a level comparable to Tesla's: NIO is working on both a Level 2 and a Level 4 vehicle.
NIO has built up the NAD full-stack autonomous driving capability, including perception algorithms, localization, control strategy and platform software. NIO Aquila Super Sensing features 33 high-performance sensing units: 11 8MP high-resolution cameras, one ultralong-range high-resolution lidar, five millimeter-wave radars, 12 ultrasonic sensors, two high-precision positioning units, V2X and ADMS. Aquila can generate 8 GB of data per second. NIO Adam features four Nvidia Orin SoCs with a total computing power of 1,016 TOPS.
NIO is taking a Tesla-like approach to autonomous driving capabilities: it will deliver the safety features enabled by its autonomous driving technology as standard, but will charge for autonomous driving features, which are to be offered as a subscription. Deliveries are expected to start in Q1 2022.
Australia also has some ongoing trials.
Vehicles with higher levels of automation are not yet commercially available in Australia, although trials of these vehicles are currently underway both here and overseas.— infrastructure.gov.au
Noting this uncertainty, Australia's transport ministers have agreed to a phased reform program to enable Level 3 'conditionally automated' vehicles to operate safely and legally on our roads by 2020.— infrastructure.gov.au
Main article: Self-driving car liability
Self-driving car liability is a developing area of law and policy that will determine who is liable when an automated car causes physical damage to persons or breaks road rules. As automated cars shift the control of driving from humans to automated car technology, drivers will need to consent to share operational responsibility, which will require a legal framework. Existing liability laws may need to evolve in order to fairly identify the parties responsible for damage and injury, and to address the potential for conflicts of interest between human occupants, system operators, insurers, and the public purse. Increases in the use of automated car technologies (e.g. advanced driver-assistance systems) may prompt incremental shifts in this responsibility for driving. Proponents claim the technology has the potential to affect the frequency of road accidents, although it is difficult to assess this claim in the absence of data from substantial actual use. If there were a dramatic improvement in safety, operators might seek to project their liability for the remaining accidents onto others as part of their reward for the improvement. However, there is no obvious reason why they should escape liability if any such effects were found to be modest or nonexistent, since part of the purpose of such liability is to give the party controlling something an incentive to do whatever is necessary to avoid it causing harm. Potential users may be reluctant to trust an operator that seeks to pass its normal liability on to others.
In any case, a well-advised person who is not controlling a car at all (Level 5) would understandably be reluctant to accept liability for something outside their control. And where some degree of shared control is possible (Level 3 or 4), a well-advised person would be concerned that the vehicle might try to hand back control in the last seconds before an accident, passing responsibility and liability back too, in circumstances where the potential driver has no better prospect of avoiding the crash than the vehicle: they have not necessarily been paying close attention, and if the situation is too hard for a very smart car, it may be too hard for a human. Since operators, especially those familiar with trying to sidestep existing legal obligations (under a motto like 'seek forgiveness, not permission'), such as Waymo or Uber, could normally be expected to try to avoid responsibility to the maximum degree possible, there is potential for operators to attempt to evade being held liable for accidents that occur while they are in control.
As higher levels of automation are commercially introduced (Level 3 and 4), the insurance industry may see a greater proportion of commercial and product liability lines while personal automobile insurance shrinks.
When it comes to fully autonomous car liability, torts cannot be ignored. In any car accident, the issue of negligence usually arises. For autonomous cars, negligence would most likely fall on the manufacturer, because it would be hard to pin a breach of duty of care on a user who is not in control of the vehicle. The only time negligence was raised in an autonomous car lawsuit, there was a settlement between the person struck by the autonomous vehicle and the manufacturer (General Motors). Second, product liability would most likely cause liability to fall on the manufacturer: for an accident to fall under product liability, there needs to be a defect, a failure to provide adequate warnings, or foreseeability by the manufacturer. Third is strict liability, which in this case is similar to product liability based on a design defect. Based on a Nevada Supreme Court ruling (Ford v. Trejo), the plaintiff needs to prove the manufacturer's failure to pass the consumer-expectation test. That is potentially how the three major torts could function when it comes to autonomous car liability.
Between manually driven vehicles (SAE Level 0) and fully autonomous vehicles (SAE Level 5), there are a variety of vehicle types that can be described to have some degree of automation. These are collectively known as semi-automated vehicles. As it could be a while before the technology and infrastructure are developed for full automation, it is likely that vehicles will have increasing levels of automation. These semi-automated vehicles could potentially harness many of the advantages of fully automated vehicles, while still keeping the driver in charge of the vehicle.
Tesla vehicles are equipped with hardware that Tesla claims will allow full self-driving in the future. In October 2020, Tesla released a "beta" version of its "Full Self-Driving" software to a small group of testers in the United States; however, this "Full Self-Driving" corresponds to Level 2 autonomy.
In December 2021, Mercedes-Benz became the world's second manufacturer to receive legal approval for a Level 3 system. Its type approval was granted under UN-R157 for automated lane keeping, the first approval of that type, as Honda's approval for its Traffic Jam Pilot was granted under a different scheme. Mercedes-Benz says that customers will be able to buy an S-Class with the Drive Pilot technology in the first half of 2022, enabling them to drive in conditionally automated mode at speeds of up to 60 km/h (37 mph) in heavy traffic or congested situations on suitable stretches of motorway in Germany.
In 2017, BMW was expected to trial the 7 Series as an automated car on public urban motorways in the United States, Germany and Israel before commercializing it in 2021. Although this was not realized, BMW is still preparing the 7 Series to become the next vehicle to reach Level 3, in the second half of 2022.
Although Audi unveiled an A8 sedan in 2017 claiming Level 3 technology, regulatory hurdles prevented it from realizing Level 3 by 2020.
In September 2021, Stellantis presented its findings from a pilot program testing Level 3 autonomous vehicles on public Italian highways. Stellantis's Highway Chauffeur claims Level 3 capabilities and was tested on Maserati Ghibli and Fiat 500X prototypes.
In August 2021, Toyota operated a potentially Level 4 service around the Tokyo 2020 Olympic Village.
In October 2021, at the World Congress on Intelligent Transport Systems, Honda announced that it was already testing Level 4 technology on a modified Legend Hybrid EX. At the end of the month, Honda explained that it was conducting a verification project on the Level 4 technology on a test course in Tochigi Prefecture, with plans to test on public roads in early 2022.
This is the first known fatality in just over 130 million miles where Autopilot was activated. Among all vehicles in the US, there is a fatality every 94 million miles. Worldwide, there is a fatality approximately every 60 million miles.