Superintelligence in the context of AI




⭐ Core Definition: Superintelligence

A superintelligence is a hypothetical agent that possesses intelligence surpassing that of the most gifted human minds. Philosopher Nick Bostrom defines superintelligence as "any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest".

Technology researchers disagree about how likely present-day human intelligence is to be surpassed. Some argue that advances in artificial intelligence (AI) will probably result in general reasoning systems that lack human cognitive limitations. Others believe that humans will evolve or directly modify their biology to achieve radically greater intelligence. Several scenarios from futures studies combine elements of both possibilities, suggesting that humans are likely to interface with computers, or upload their minds to computers, in a way that enables substantial intelligence amplification. The creation of the first superintelligence may or may not be accompanied by an intelligence explosion or a technological singularity.


In this Dossier

Superintelligence in the context of Artificial intelligence

Artificial intelligence (AI) is the capability of computational systems to perform tasks typically associated with human intelligence, such as learning, reasoning, problem-solving, perception, and decision-making. It is a field of research in computer science that develops and studies methods and software that enable machines to perceive their environment and use learning and intelligence to take actions that maximize their chances of achieving defined goals.
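
The goal-directed framing above can be made concrete with a minimal sketch: an agent observes a state and picks whichever action it estimates gives the best chance of reaching a defined goal. The toy states, actions, and probability table below are invented for illustration; a real system would learn these estimates rather than hard-code them.

```python
# Minimal rational-agent sketch: perceive a state, then take the action
# that maximizes the estimated probability of achieving the goal.
# The environment and numbers here are hypothetical, for illustration only.

GOAL_PROBABILITY = {
    # (state, action) -> estimated probability of reaching the goal
    ("low_battery", "recharge"): 0.9,
    ("low_battery", "explore"): 0.2,
    ("charged", "recharge"): 0.5,
    ("charged", "explore"): 0.8,
}

ACTIONS = ("recharge", "explore")

def choose_action(state: str) -> str:
    """Return the action with the highest estimated goal probability."""
    return max(ACTIONS, key=lambda action: GOAL_PROBABILITY[(state, action)])

for state in ("low_battery", "charged"):
    print(f"{state} -> {choose_action(state)}")
# low_battery -> recharge
# charged -> explore
```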

High-profile applications of AI include advanced web search engines (e.g., Google Search); recommendation systems (used by YouTube, Amazon, and Netflix); virtual assistants (e.g., Google Assistant, Siri, and Alexa); autonomous vehicles (e.g., Waymo); generative and creative tools (e.g., language models and AI art); and superhuman play and analysis in strategy games (e.g., chess and Go). However, many AI applications are not perceived as AI: "A lot of cutting edge AI has filtered into general applications, often without being called AI because once something becomes useful enough and common enough it's not labeled AI anymore."

View the full Wikipedia page for Artificial intelligence

Superintelligence in the context of Brainiac (character)

Brainiac (Vril Dox) is a supervillain appearing in American comic books published by DC Comics. Created by writer Otto Binder and artist Al Plastino, Brainiac first appeared in Action Comics #242 (1958), and has since endured as one of Superman's greatest enemies.

Brainiac is commonly depicted as a superintelligent android or cyborg from the planet Colu who is obsessed with collecting all knowledge in the known universe. He travels the galaxy and shrinks cities to bottle size for preservation on his skull-shaped spaceship before destroying their source planets, believing the knowledge he acquires to be most valuable if he alone possesses it. Among these shrunken cities is Kandor, the capital of Superman's home planet Krypton, and Brainiac is even responsible for Krypton's destruction in some continuities. Regarded as one of the most dangerous threats in the DC Universe, Brainiac has come into repeated conflict with Superman and the Justice League. Although stories often end in Brainiac's apparent destruction, the character's artificial consciousness is resurrected in new physical forms, some robotic and others more organic in appearance.

View the full Wikipedia page for Brainiac (character)

Superintelligence in the context of Existential risk from artificial general intelligence

Existential risk from artificial intelligence, or AI x-risk, refers to the idea that substantial progress in artificial general intelligence (AGI) could lead to human extinction or an irreversible global catastrophe.

One argument for taking this risk seriously runs as follows: human beings dominate other species because the human brain possesses distinctive capabilities that other animals lack. If AI were to surpass human intelligence and become superintelligent, it might become uncontrollable. Just as the fate of the mountain gorilla depends on human goodwill, the fate of humanity could depend on the actions of a future machine superintelligence.

View the full Wikipedia page for Existential risk from artificial general intelligence

Superintelligence in the context of Technological singularity

The technological singularity, often simply called the singularity, is a hypothetical event in which technological growth accelerates beyond human control, producing unpredictable changes in human civilization. According to the most popular version of the singularity hypothesis, I. J. Good's intelligence explosion model of 1965, an upgradable intelligent agent could eventually enter a positive feedback loop of successive self-improvement cycles; more intelligent generations would appear more and more rapidly, causing an explosive increase in intelligence that culminates in a powerful superintelligence, far surpassing human intelligence.
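
Good's model is qualitative, but the feedback loop it describes can be sketched numerically. The growth rule and constants below are illustrative assumptions, not anything taken from Good's 1965 paper; the point is only that when each improvement step scales with current capability, growth becomes explosive rather than merely exponential.

```python
# Toy model of an intelligence explosion (illustrative assumptions only):
# each self-improvement cycle multiplies capability by a factor that
# itself grows with capability, so later generations improve ever faster.

def intelligence_explosion(start=1.0, gain=0.1, generations=16):
    """Yield (generation, capability) under the assumed growth rule
    capability <- capability * (1 + gain * capability)."""
    capability = start
    for generation in range(1, generations + 1):
        capability *= 1.0 + gain * capability
        yield generation, capability

for generation, capability in intelligence_explosion():
    print(f"generation {generation:2d}: capability {capability:14.2f}")
# Early generations improve slowly; around generations 13-16 the
# capability curve blows up, the qualitative signature of Good's loop.
```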

Some scientists, including Stephen Hawking, have expressed concern that artificial superintelligence could result in human extinction. The consequences of a technological singularity and its potential benefit or harm to the human species have been intensely debated.

View the full Wikipedia page for Technological singularity