Autonomous vehicles in the context of "Artificial intelligence"

⭐ Core Definition: Autonomous vehicles

A self-driving car, also known as an autonomous car (AC), driverless car, robotic car or robo-car, is a car that is capable of operating with reduced or no human input. They are sometimes called robotaxis, though this term refers specifically to self-driving cars operated for a ridesharing company. Self-driving cars are responsible for all driving activities, such as perceiving the environment, monitoring important systems, and controlling the vehicle, which includes navigating from origin to destination.

As of late 2024, no system has achieved full autonomy (SAE Level 5). In December 2020, Waymo was the first to offer rides in self-driving taxis to the public in limited geographic areas (SAE Level 4), and as of April 2024 offers services in Arizona (Phoenix) and California (San Francisco and Los Angeles). In June 2024, after one of its self-driving taxis crashed into a utility pole in Phoenix, Arizona, Waymo recalled all 672 of its Jaguar I-Pace vehicles and updated their software after finding them susceptible to crashing into pole-like objects. In July 2021, DeepRoute.ai started offering self-driving taxi rides in Shenzhen, China. Starting in February 2022, Cruise offered self-driving taxi service in San Francisco, but suspended service in 2023. In 2021, Honda was the first manufacturer to sell an SAE Level 3 car, followed by Mercedes-Benz in 2023.

πŸ‘‰ Autonomous vehicles in the context of Artificial intelligence

Artificial intelligence (AI) is the capability of computational systems to perform tasks typically associated with human intelligence, such as learning, reasoning, problem-solving, perception, and decision-making. It is a field of research in computer science that develops and studies methods and software that enable machines to perceive their environment and use learning and intelligence to take actions that maximize their chances of achieving defined goals.

High-profile applications of AI include advanced web search engines (e.g., Google Search); recommendation systems (used by YouTube, Amazon, and Netflix); virtual assistants (e.g., Google Assistant, Siri, and Alexa); autonomous vehicles (e.g., Waymo); generative and creative tools (e.g., language models and AI art); and superhuman play and analysis in strategy games (e.g., chess and Go). However, many AI applications are not perceived as AI: "A lot of cutting edge AI has filtered into general applications, often without being called AI because once something becomes useful enough and common enough it's not labeled AI anymore."


Autonomous vehicles in the context of Remote operation

Teleoperation (or remote operation) indicates operation of a system or machine at a distance. It is similar in meaning to the phrase "remote control" but is usually encountered in research, academia and technology. It is most commonly associated with robotics and mobile robots but can be applied to a whole range of circumstances in which a device or machine is operated by a person from a distance.

Teleoperation can be considered a human-machine system. For example, ArduPilot provides a spectrum of autonomy ranging from manual control to full autopilot for autonomous vehicles.

Autonomous vehicles in the context of Inference engine

In the field of artificial intelligence, an inference engine is a software component of an intelligent system that applies logical rules to a knowledge base to deduce new information. The first inference engines were components of expert systems. The typical expert system consisted of a knowledge base and an inference engine. The knowledge base stored facts about the world. The inference engine applied logical rules to the knowledge base and deduced new knowledge. This process would iterate, as each new fact in the knowledge base could trigger additional rules in the inference engine. Inference engines work primarily in one of two modes: forward chaining and backward chaining. Forward chaining starts with the known facts and asserts new facts. Backward chaining starts with goals, and works backward to determine what facts must be asserted so that the goals can be achieved.
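The forward-chaining mode described above can be sketched in a few lines of Python. This is an illustrative toy, not a production rule engine: facts are plain strings, each rule pairs a set of premise facts with a single conclusion, and the fact names (e.g. `lidar_ok`, `may_drive`) are invented for the example.

```python
def forward_chain(facts, rules):
    """Repeatedly apply rules until no new facts can be derived.

    facts: iterable of known fact strings
    rules: list of (premises, conclusion) pairs, where premises is a
           set of fact strings and conclusion is a single fact string
    """
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            # Fire a rule when all its premises are known and its
            # conclusion is not yet in the knowledge base.
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True  # a new fact may trigger further rules
    return facts

# Hypothetical rules for a driving scenario (names are made up):
rules = [
    ({"lidar_ok", "camera_ok"}, "perception_ok"),
    ({"perception_ok", "route_planned"}, "may_drive"),
]
derived = forward_chain({"lidar_ok", "camera_ok", "route_planned"}, rules)
# derived now contains "perception_ok" and "may_drive":
# the first rule's conclusion enables the second rule to fire.
```

Backward chaining would instead start from the goal `may_drive` and recursively check which premises must hold; the same rule table supports either strategy.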

Additionally, the concept of 'inference' has expanded to include the process through which trained neural networks generate predictions or decisions. In this context, an 'inference engine' could refer to the specific part of the system, or even the hardware, that executes these operations. This type of inference plays a crucial role in various applications, including (but not limited to) image recognition, natural language processing, and autonomous vehicles. The inference phase in these applications is typically characterized by a high volume of data inputs and real-time processing requirements.
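In the neural-network sense, "inference" is just a forward pass through fixed, already-trained weights, typically over batches of inputs. The sketch below illustrates this with a two-layer network in NumPy; the weights are random placeholders standing in for a trained model, and the shapes are arbitrary.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def infer(x, w1, b1, w2, b2):
    """Forward pass of a two-layer network; no gradients are computed."""
    h = relu(x @ w1 + b1)          # hidden layer
    logits = h @ w2 + b2           # output layer
    return logits.argmax(axis=-1)  # predicted class for each input

# Placeholder "trained" parameters (random, for illustration only).
rng = np.random.default_rng(0)
w1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
w2, b2 = rng.normal(size=(8, 3)), np.zeros(3)

# Inference workloads process many inputs at once: a batch of 32 here.
batch = rng.normal(size=(32, 4))
preds = infer(batch, w1, b1, w2, b2)  # one class label per input, shape (32,)
```

Dedicated inference engines and accelerators optimize exactly this pattern: large batched matrix multiplications with fixed weights, often under real-time latency constraints.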
