Existential risk from artificial general intelligence in the context of "Mountain gorilla"

⭐ Core Definition: Existential risk from artificial general intelligence

Existential risk from artificial intelligence, or AI x-risk, refers to the idea that substantial progress in artificial general intelligence (AGI) could lead to human extinction or an irreversible global catastrophe.

One argument for taking this risk seriously is that humans dominate other species because the human brain has distinctive capabilities other animals lack. If AI were to surpass human intelligence and become superintelligent, it might become uncontrollable. Just as the fate of the mountain gorilla depends on human goodwill, the fate of humanity could depend on the actions of a future machine superintelligence.


In this Dossier

Existential risk from artificial general intelligence in the context of Human extinction

Human extinction, or omnicide, is the end of the human species, whether through population decline from external natural causes, such as an asteroid impact or large-scale volcanism, or through anthropogenic destruction (self-extinction).

Possible anthropogenic hazards include climate change, global nuclear annihilation, biological warfare, weapons of mass destruction, and ecological collapse. Other scenarios center on emerging technologies, such as advanced artificial intelligence, biotechnology, or self-replicating nanobots.
