The advent of self-driving car technology has the potential to dramatically reshape transportation and mobility in the coming years. As this emerging field continues to advance, what practical innovations should we expect to see? Over the last decade, automakers like Tesla, GM and Volvo have introduced increasingly sophisticated driver assistance features such as automated emergency braking, lane keeping assist, self-parking and advanced cruise control… Continue reading…
By: dparente
Source: Medium
Critics:
The Union of Concerned Scientists defined self-driving as “cars or trucks in which human drivers are never required to take control to safely operate the vehicle. Also known as autonomous or ‘driverless’ cars, they combine sensors and software to control, navigate, and drive the vehicle.” The British Automated and Electric Vehicles Act 2018 law defines a vehicle as “driving itself” if the vehicle is “not being controlled, and does not need to be monitored, by an individual”.
Another British government definition stated, “Self-driving vehicles are vehicles that can safely and lawfully drive themselves”. Operational design domain (ODD) is a term for a particular operating context for an automated system, often used in the field of autonomous vehicles. The context is defined by a set of conditions, including environmental, geographical, time of day, and other conditions. For vehicles, traffic and roadway characteristics are included.
Manufacturers use ODD to indicate where/how their product operates safely. A given system may operate differently according to the immediate ODD. The concept presumes that automated systems have limitations. Relating system function to the ODD it supports is important for developers and regulators to establish and communicate safe operating conditions. Systems should operate within those limitations. Some systems recognize the ODD and modify their behavior accordingly.
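As a rough sketch of what that kind of ODD recognition might look like in code (Python, with entirely hypothetical names and thresholds, not any vendor's actual API), a gate can compare the current operating context against each feature's declared ODD and switch off features whose conditions are not met; the heavy-traffic lane-change example in the next paragraph maps onto it directly.

```python
from dataclasses import dataclass

@dataclass
class OperatingContext:
    """Snapshot of the conditions the vehicle currently observes."""
    road_type: str          # e.g. "divided_highway", "city_street"
    traffic_density: float  # vehicles per km of lane, from perception
    is_daytime: bool
    speed_kph: float

@dataclass
class OddLimits:
    """Declared operating design domain for one automated feature."""
    road_types: set
    max_traffic_density: float
    daytime_only: bool
    max_speed_kph: float

    def contains(self, ctx: OperatingContext) -> bool:
        return (ctx.road_type in self.road_types
                and ctx.traffic_density <= self.max_traffic_density
                and (ctx.is_daytime or not self.daytime_only)
                and ctx.speed_kph <= self.max_speed_kph)

# Hypothetical feature table: automated lane change has a narrower ODD
# than basic lane keeping.
FEATURES = {
    "lane_keeping": OddLimits({"divided_highway", "city_street"}, 80.0, False, 130.0),
    "auto_lane_change": OddLimits({"divided_highway"}, 25.0, False, 110.0),
}

def enabled_features(ctx: OperatingContext) -> list:
    """Return the features whose declared ODD covers the current context."""
    return [name for name, odd in FEATURES.items() if odd.contains(ctx)]

if __name__ == "__main__":
    heavy_traffic = OperatingContext("divided_highway", traffic_density=60.0,
                                     is_daytime=True, speed_kph=45.0)
    # Heavy traffic exceeds the lane-change ODD, so only lane keeping stays on.
    print(enabled_features(heavy_traffic))   # -> ['lane_keeping']
```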
For example, an autonomous car might recognize that traffic is heavy and disable its automated lane change feature. Vendors have taken a variety of approaches to the self-driving problem. Tesla’s approach is to allow their “full self-driving” (FSD) system to be used in all ODDs as a Level 2 (hands-on, eyes-on) ADAS. Waymo picked specific ODDs (city streets in Phoenix and San Francisco) for their Level 4 robotaxi service.
Mercedes-Benz offers a Level 3 service in Las Vegas in highway traffic jams at speeds up to 40 miles per hour (64 km/h). Mobileye’s SuperVision system offers hands-off/eyes-on driving on all road types at speeds up to 130 kilometres per hour (81 mph). GM’s hands-free Super Cruise operates on specific roads in specific conditions, stopping or returning control to the driver when the ODD changes. In 2024 the company announced plans to expand road coverage from 400,000 miles to 750,000 miles.
Ford’s BlueCruise hands-off system operates on 130,000 miles of US divided highways.

The perception system processes visual and audio data from outside and inside the car to create a local model of the vehicle, the road, traffic, traffic controls and other observable objects, and their relative motion. The control system then takes actions to move the vehicle, considering the local model, road map, and driving regulations. Several classifications have been proposed to describe ADAS technology. One proposal is to adopt these categories: navigation, path planning, perception, and car control.
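One way to picture the perception/control split is the loose Python sketch below (every class and function name here is an invented placeholder, not a real stack): perception turns raw sensor frames into a local world model, and control turns that model plus the rules of the road into actuation commands.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class TrackedObject:
    kind: str            # "car", "pedestrian", "traffic_light", ...
    position_m: tuple    # (x, y) in the vehicle frame
    velocity_mps: tuple  # relative motion estimate

@dataclass
class LocalModel:
    """Perception output: the vehicle, the road and everything observed around it."""
    ego_speed_mps: float
    lane_offset_m: float
    objects: List[TrackedObject] = field(default_factory=list)

def perceive(camera_frame, radar_frame) -> LocalModel:
    """Placeholder perception stage; a real system would run detectors/trackers here."""
    # Pretend the detectors found one slower car 30 m ahead in our lane.
    lead = TrackedObject("car", (30.0, 0.0), (-8.0, 0.0))
    return LocalModel(ego_speed_mps=25.0, lane_offset_m=0.1, objects=[lead])

@dataclass
class Command:
    acceleration_mps2: float
    steering_rad: float

def control(model: LocalModel, speed_limit_mps: float = 27.0) -> Command:
    """Control stage: act on the local model while respecting the rules (speed limit)."""
    accel = 0.5 if model.ego_speed_mps < speed_limit_mps else 0.0
    for obj in model.objects:
        ahead, lateral = obj.position_m
        closing = -obj.velocity_mps[0]
        if obj.kind == "car" and abs(lateral) < 1.5 and ahead / max(closing, 0.1) < 4.0:
            accel = -1.5   # brake: time-to-contact with the lead car is too short
    steer = -0.05 * model.lane_offset_m   # simple proportional lane centering
    return Command(accel, steer)

if __name__ == "__main__":
    model = perceive(camera_frame=None, radar_frame=None)
    print(control(model))   # brakes for the slower lead car, steers back to the lane centre
```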
Navigation involves the use of maps to define a path between origin and destination. Hybrid navigation is the use of multiple navigation systems. Some systems use basic maps, relying on perception to deal with anomalies. Such a map captures which roads lead to which others, whether a road is a freeway, a highway, or one-way, etc. Other systems require highly detailed maps, including lane maps, obstacles, traffic controls, etc.
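As a toy illustration of the basic-map end of that spectrum, the road network can be held as a directed graph whose edges carry a length and a road class, with one-way streets appearing in only one direction; a route is then just a shortest path over that graph. The sketch below is an assumed example using only Python's standard library.

```python
import heapq

# Toy road graph: node -> list of (neighbor, distance_km, road_class).
# Directed edges encode one-way streets; two-way roads appear in both directions.
ROADS = {
    "A": [("B", 2.0, "city_street"), ("C", 5.0, "highway")],
    "B": [("A", 2.0, "city_street"), ("D", 4.0, "city_street")],
    "C": [("D", 3.0, "highway")],            # C -> D is one-way
    "D": [("B", 4.0, "city_street")],
}

def route(origin: str, destination: str):
    """Dijkstra shortest path over the road graph; returns (cost, node sequence)."""
    queue = [(0.0, origin, [origin])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == destination:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, dist, _road_class in ROADS.get(node, []):
            if nxt not in seen:
                heapq.heappush(queue, (cost + dist, nxt, path + [nxt]))
    return None

if __name__ == "__main__":
    print(route("A", "D"))   # -> (6.0, ['A', 'B', 'D'])
```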
Autonomous cars need to be able to perceive the world around them. Supporting technologies include combinations of cameras, LiDAR, radar, audio, ultrasound, GPS, and inertial measurement units. Deep neural networks are used to analyse inputs from these sensors to detect and identify objects and their trajectories. Some systems use Bayesian simultaneous localization and mapping (SLAM) algorithms. Another technique is detection and tracking of other moving objects (DATMO), used to handle potential obstacles. Other systems use roadside real-time locating system (RTLS) technologies to aid localization. Tesla’s “vision only” system uses eight cameras, without LiDAR or radar, to create its bird’s-eye view of the environment.
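As a loose illustration of the DATMO idea (a toy sketch, not any production tracker), the following keeps a constant-velocity track per moving object, associates each new detection with the nearest existing track, and nudges the velocity estimate toward the measured change in position.

```python
import math

class Track:
    """Constant-velocity track for one moving object (toy DATMO-style tracker)."""
    def __init__(self, track_id, x, y):
        self.id, self.x, self.y, self.vx, self.vy = track_id, x, y, 0.0, 0.0

    def predict(self, dt):
        self.x += self.vx * dt
        self.y += self.vy * dt

    def update(self, x, y, dt, alpha=0.5):
        # Nudge the velocity toward the innovation (measured minus predicted
        # position, per unit time), then move the track to the measurement.
        self.vx += alpha * (x - self.x) / dt
        self.vy += alpha * (y - self.y) / dt
        self.x, self.y = x, y

def associate_and_update(tracks, detections, dt, gate_m=5.0):
    """Nearest-neighbour data association; unmatched detections start new tracks."""
    for t in tracks:
        t.predict(dt)
    next_id = max((t.id for t in tracks), default=-1) + 1
    for (dx, dy) in detections:
        best = min(tracks, key=lambda t: math.hypot(t.x - dx, t.y - dy), default=None)
        if best is not None and math.hypot(best.x - dx, best.y - dy) < gate_m:
            best.update(dx, dy, dt)
        else:
            tracks.append(Track(next_id, dx, dy))
            next_id += 1
    return tracks

if __name__ == "__main__":
    tracks = []
    # Two sensor frames 0.1 s apart: one object moving forward along x at 5 m/s.
    associate_and_update(tracks, [(10.0, 0.0)], dt=0.1)
    associate_and_update(tracks, [(10.5, 0.0)], dt=0.1)
    t = tracks[0]
    print(round(t.x, 2), round(t.vx, 2))   # -> 10.5 2.5 (estimate heading toward 5 m/s)
```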
Path planning finds a sequence of segments that a vehicle can use to move from origin to destination. Techniques used for path planning include graph-based search and variational-based optimization. Graph-based techniques can make harder decisions, such as how to pass another vehicle or obstacle.
Variational-based optimization techniques require more stringent restrictions on the vehicle’s path to prevent collisions. The large-scale path of the vehicle can be determined using a Voronoi diagram, an occupancy grid mapping, or a driving corridor algorithm. The latter allows the vehicle to locate and drive within open space that is bounded by lanes or barriers.
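A small sketch of the graph-search flavour of path planning, run over a toy occupancy grid (1 = occupied cell), is shown below; it is a generic A* example under assumed conventions, not any particular vehicle's planner.

```python
import heapq

def astar(grid, start, goal):
    """A* over a 2D occupancy grid; 4-connected moves, Manhattan-distance heuristic."""
    rows, cols = len(grid), len(grid[0])

    def heuristic(p):
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    frontier = [(heuristic(start), 0, start, [start])]
    seen = set()
    while frontier:
        _, cost, cell, path = heapq.heappop(frontier)
        if cell == goal:
            return path
        if cell in seen:
            continue
        seen.add(cell)
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 and (nr, nc) not in seen:
                heapq.heappush(frontier,
                               (cost + 1 + heuristic((nr, nc)), cost + 1, (nr, nc), path + [(nr, nc)]))
    return None   # no collision-free path

if __name__ == "__main__":
    # 0 = free space, 1 = occupied (e.g. a parked vehicle blocking part of the lane).
    grid = [
        [0, 0, 0, 0],
        [0, 1, 1, 0],
        [0, 0, 0, 0],
    ]
    print(astar(grid, (0, 0), (2, 3)))   # one of the shortest collision-free cell paths
```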
Maps are necessary for navigation. Map sophistication varies from simple graphs that show which roads connect to each other, with details such as one-way vs. two-way, to highly detailed maps with information about lanes, traffic controls, roadworks, and more. Researchers at the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) developed a system called MapLite, which allows self-driving cars to drive with simple maps.
The system combines the GPS position of the vehicle and a “sparse topological map” such as OpenStreetMap (which has only 2D road features) with sensors that observe road conditions. One issue with highly detailed maps is updating them as the world changes. Vehicles that can operate with less-detailed maps do not require frequent updates or geo-fencing.
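Very roughly, and only as an assumed sketch in the spirit of that description (the segment data, coordinates, and helper names below are made up for illustration), combining a noisy GPS fix with a sparse road-segment map and a perception-derived lateral offset might look like this:

```python
import math

# Sparse topological map: each road segment is just two endpoints (metres, local frame),
# roughly what projected 2D road centrelines from OpenStreetMap would give.
SEGMENTS = {
    "elm_st":  ((0.0, 0.0), (200.0, 0.0)),
    "oak_ave": ((200.0, 0.0), (200.0, 150.0)),
}

def project_to_segment(p, seg):
    """Closest point to p on segment seg, and the distance to it."""
    (ax, ay), (bx, by) = seg
    abx, aby = bx - ax, by - ay
    t = ((p[0] - ax) * abx + (p[1] - ay) * aby) / (abx * abx + aby * aby)
    t = max(0.0, min(1.0, t))
    qx, qy = ax + t * abx, ay + t * aby
    return (qx, qy), math.hypot(p[0] - qx, p[1] - qy)

def localize(gps_xy, perceived_lateral_offset_m):
    """Snap a noisy GPS fix to the nearest mapped road, then apply the lateral
    offset that onboard perception measured relative to the road centreline."""
    name, (on_road, _) = min(
        ((n, project_to_segment(gps_xy, s)) for n, s in SEGMENTS.items()),
        key=lambda item: item[1][1])
    # Toy assumption: the perceived offset is along +y for this east-west road.
    return name, (on_road[0], on_road[1] + perceived_lateral_offset_m)

if __name__ == "__main__":
    # GPS says we are 4 m off the road; perception says we sit 1.5 m from the centreline.
    print(localize((50.0, 4.0), perceived_lateral_offset_m=1.5))
    # -> ('elm_st', (50.0, 1.5))
```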
Sensors are necessary for the vehicle to respond properly to the driving environment. Sensor types include cameras, LiDAR, ultrasound, and radar. Control systems typically combine data from multiple sensors. Multiple sensors can provide a more complete view of the surroundings and can be used to cross-check each other to correct errors. For example, radar can image a scene in a nighttime snowstorm that defeats cameras and LiDAR, albeit at reduced precision.
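One simple way to picture that cross-checking is the assumed Python sketch below (not a real fusion stack): fuse per-sensor range estimates by inverse-variance weighting, and flag any sensor whose reading disagrees with the fused estimate by more than a few standard deviations.

```python
def fuse(readings):
    """Inverse-variance weighted fusion of range estimates from several sensors.

    readings: dict of sensor name -> (value_m, std_dev_m). A sensor degraded by
    the conditions (e.g. a camera in a snowstorm) simply reports a larger std dev.
    """
    weights = {name: 1.0 / (sd * sd) for name, (_, sd) in readings.items()}
    total = sum(weights.values())
    fused = sum(weights[n] * v for n, (v, _) in readings.items()) / total
    fused_sd = (1.0 / total) ** 0.5
    # Cross-check: flag sensors whose reading is far from the fused estimate.
    outliers = [n for n, (v, sd) in readings.items() if abs(v - fused) > 3.0 * sd]
    return fused, fused_sd, outliers

if __name__ == "__main__":
    # Radar keeps working in the snowstorm; the camera's range estimate drifts badly.
    readings = {
        "radar":  (42.0, 1.0),
        "lidar":  (41.5, 0.5),
        "camera": (55.0, 2.0),
    }
    fused, sd, outliers = fuse(readings)
    print(round(fused, 1), round(sd, 2), outliers)   # -> 42.2 0.44 ['camera']
```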
After experimenting with radar and ultrasound, Tesla adopted a vision-only approach, asserting that humans drive using only vision, and that cars should be able to do the same, while citing the lower cost of cameras versus other sensor types. By contrast, Waymo makes use of the higher resolution of LiDAR sensors and cites the declining cost of that technology.