Turing machine in the context of Turing-complete


⭐ Core Definition: Turing machine

A Turing machine is a mathematical model of computation describing an abstract machine that manipulates symbols on a strip of tape according to a table of rules. Despite the model's simplicity, it is capable of implementing any computer algorithm.

The machine operates on an infinite memory tape divided into discrete cells, each of which can hold a single symbol drawn from a finite set of symbols called the alphabet of the machine. It has a "head" that, at any point in the machine's operation, is positioned over one of these cells, and a "state" selected from a finite set of states. At each step of its operation, the head reads the symbol in its cell. Then, based on the symbol and the machine's own present state, the machine writes a symbol into the same cell, and moves the head one step to the left or the right, or halts the computation. The choice of which replacement symbol to write, which direction to move the head, and whether to halt is based on a finite table that specifies what to do for each combination of the current state and the symbol that is read. As with a real computer program, it is possible for a Turing machine to go into an infinite loop which will never halt.
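
To make the table-of-rules description above concrete, here is a minimal sketch of a single-tape Turing machine simulator in Python. The rule format, the function name run_turing_machine, and the example machine (which appends a 1 to a block of 1s, i.e. the successor function in unary notation) are illustrative assumptions, not taken from any particular formalization.

    # Minimal single-tape Turing machine simulator (illustrative sketch).
    # A rule table maps (state, scanned symbol) -> (symbol to write, head move, next state),
    # where the head move is -1 (left), +1 (right) or 0 (stay put while halting).
    def run_turing_machine(rules, tape, start_state, halt_state, blank="_", max_steps=10_000):
        cells = dict(enumerate(tape))          # sparse tape: cell index -> symbol
        head, state = 0, start_state
        for _ in range(max_steps):
            if state == halt_state:
                break
            symbol = cells.get(head, blank)
            new_symbol, move, state = rules[(state, symbol)]
            cells[head] = new_symbol
            head += move
        return "".join(cells[i] for i in sorted(cells)), state

    # Example rule table (an assumption for illustration): scan right over a block
    # of 1s and append one more 1, computing the successor in unary notation.
    RULES = {
        ("scan", "1"): ("1", +1, "scan"),      # keep moving right over the 1s
        ("scan", "_"): ("1", 0, "halt"),       # first blank: write a 1 and halt
    }

    print(run_turing_machine(RULES, "111", "scan", "halt"))   # ('1111', 'halt')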


In this Dossier

Turing machine in the context of Theory of computation

In theoretical computer science and mathematics, the theory of computation is the branch that deals with what problems can be solved on a model of computation using an algorithm, how efficiently they can be solved and to what degree (e.g., approximate solutions versus precise ones). The field is divided into three major branches: automata theory and formal languages, computability theory, and computational complexity theory, which are linked by the question: "What are the fundamental capabilities and limitations of computers?".

In order to perform a rigorous study of computation, computer scientists work with a mathematical abstraction of computers called a model of computation. There are several models in use, but the most commonly examined is the Turing machine. Computer scientists study the Turing machine because it is simple to formulate, can be analyzed and used to prove results, and because it represents what many consider the most powerful possible "reasonable" model of computation (see Church–Turing thesis). It might seem that the potentially infinite memory capacity is an unrealizable attribute, but any decidable problem solved by a Turing machine will always require only a finite amount of memory. So in principle, any problem that can be solved (decided) by a Turing machine can be solved by a computer that has a finite amount of memory.

View the full Wikipedia page for Theory of computation

Turing machine in the context of Lambda calculus

In mathematical logic, the lambda calculus (also written as λ-calculus) is a formal system for expressing computation based on function abstraction and application using variable binding and substitution. Untyped lambda calculus is a universal machine, a model of computation that can be used to simulate any Turing machine (and vice versa). It was introduced by the mathematician Alonzo Church in the 1930s as part of his research into the foundations of mathematics. In 1936, Church found a formulation which was logically consistent, and documented it in 1940.
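
To make "function abstraction and application" concrete, here is a small sketch of Church numerals written with Python's own lambda expressions. The names ZERO, SUCC, ADD and to_int are illustrative helpers, not standard lambda-calculus notation.

    # Church numerals: the numeral n is the function that applies f to x exactly n times.
    ZERO = lambda f: lambda x: x
    SUCC = lambda n: lambda f: lambda x: f(n(f)(x))               # n + 1
    ADD  = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))  # m + n

    def to_int(n):
        # Convert a Church numeral back to a Python int by counting applications.
        return n(lambda k: k + 1)(0)

    TWO = SUCC(SUCC(ZERO))
    THREE = SUCC(TWO)
    print(to_int(ADD(TWO)(THREE)))   # 5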

View the full Wikipedia page for Lambda calculus

Turing machine in the context of Church–Turing thesis

In computability theory, the Church–Turing thesis (also known as the computability thesis, the Turing–Church thesis, the Church–Turing conjecture, Church's thesis, Church's conjecture, and Turing's thesis) is a thesis about the nature of computable functions. It states that a function on the natural numbers can be calculated by an effective method if and only if it is computable by a Turing machine. The thesis is named after the American mathematician Alonzo Church and the British mathematician Alan Turing. Before the precise definition of computable function, mathematicians often used the informal term effectively calculable to describe functions that are computable by paper-and-pencil methods. In the 1930s, several independent attempts were made to formalize the notion of computability:

  • In 1933, Kurt Gödel, with Jacques Herbrand, formalized the definition of the class of general recursive functions: the smallest class of functions (with arbitrarily many arguments) that is closed under composition, recursion, and minimization, and includes zero, successor, and all projections.
  • In 1936, Alonzo Church created a method for defining functions called the λ-calculus. Within λ-calculus, he defined an encoding of the natural numbers called the Church numerals. A function on the natural numbers is called λ-computable if the corresponding function on the Church numerals can be represented by a term of the λ-calculus.
  • Also in 1936, before learning of Church's work, Alan Turing created a theoretical model for machines, now called Turing machines, that could carry out calculations from inputs by manipulating symbols on a tape. Given a suitable encoding of the natural numbers as sequences of symbols, a function on the natural numbers is called Turing computable if some Turing machine computes the corresponding function on encoded natural numbers.

Church, Kleene, and Turing proved that these three formally defined classes of computable functions coincide: a function is λ-computable if and only if it is Turing computable, and if and only if it is general recursive. This has led mathematicians and computer scientists to believe that the concept of computability is accurately characterized by these three equivalent processes. Other formal attempts to characterize computability have subsequently strengthened this belief (see below).

View the full Wikipedia page for Church–Turing thesis

Turing machine in the context of Halting problem

In computability theory, the halting problem is the problem of determining, from a description of an arbitrary computer program and an input, whether the program will finish running, or continue to run forever. The halting problem is undecidable, meaning that no general algorithm exists that solves the halting problem for all possible program–input pairs. The problem comes up often in discussions of computability since it demonstrates that some functions are mathematically definable but not computable.

A key part of the formal statement of the problem is a mathematical definition of a computer and program, usually via a Turing machine. The proof then shows, for any program f that might determine whether programs halt, that a "pathological" program g exists for which f makes an incorrect determination. Specifically, g is the program that, when called with some input, passes its own source and its input to f and does the opposite of what f predicts g will do. The behavior of f on g shows undecidability as it means no program f will solve the halting problem in every possible case.
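
The diagonal construction in the preceding paragraph can be sketched directly in Python. The names halts and pathological below are hypothetical: halts stands for the supposed general decider f, and pathological plays the role of the program g built from it; the point of the sketch is only to show why no implementation of halts can be correct on g.

    # Sketch of the diagonal argument; `halts` cannot actually be implemented.
    def halts(program, argument):
        # Hypothetical decider f: would return True iff program(argument) halts.
        raise NotImplementedError("no total, always-correct halting decider exists")

    def pathological(program):
        # The program g: do the opposite of whatever f predicts about g.
        if halts(program, program):
            while True:          # f says "halts"  -> run forever instead
                pass
        return "halted"          # f says "loops"  -> halt immediately instead

    # Consider pathological(pathological):
    #   if halts(pathological, pathological) returned True,  the call would loop forever;
    #   if it returned False, the call would halt.
    # Either way f is wrong about g, so no such f can solve the halting problem.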

View the full Wikipedia page for Halting problem

Turing machine in the context of Computable function

Computable functions are the basic objects of study in computability theory. Informally, a function is computable if there is an algorithm that computes the value of the function for every value of its argument. Because of the lack of a precise definition of the concept of algorithm, every formal definition of computability must refer to a specific model of computation.

Many such models of computation have been proposed, the major ones being Turing machines, register machines, lambda calculus and general recursive functions. Although these four are of a very different nature, they provide exactly the same class of computable functions, and, for every model of computation that has ever been proposed, the computable functions for such a model are computable for the above four models of computation.

View the full Wikipedia page for Computable function

Turing machine in the context of Alan Turing

Alan Mathison Turing (/ˈtjʊərɪŋ/; 23 June 1912 – 7 June 1954) was an English mathematician, computer scientist, logician, cryptanalyst, philosopher and theoretical biologist. He was highly influential in the development of theoretical computer science, providing a formalisation of the concepts of algorithm and computation with the Turing machine, which can be considered a model of a general-purpose computer. Turing is widely considered to be the father of theoretical computer science.

Born in London, Turing was raised in southern England. He graduated from King's College, Cambridge, and in 1938 earned a doctorate from Princeton University. During World War II, Turing worked for the Government Code and Cypher School at Bletchley Park, Britain's codebreaking centre that produced Ultra intelligence. He led Hut 8, the section responsible for German naval cryptanalysis. Turing devised techniques for speeding the breaking of German ciphers, including improvements to the pre-war Polish bomba method, an electromechanical machine that could find settings for the Enigma machine. He played a crucial role in cracking intercepted messages that enabled the Allies to defeat the Axis powers in the Battle of the Atlantic and other engagements.

View the full Wikipedia page for Alan Turing

Turing machine in the context of Post–Turing machine

A Post machine or Post–Turing machine is a "program formulation" of a type of Turing machine, comprising a variant of Emil Post's Turing-equivalent model of computation. Post's model and Turing's model, though very similar to one another, were developed independently. Turing's paper was received for publication in May 1936, followed by Post's in October. A Post–Turing machine uses a binary alphabet, an infinite sequence of binary storage locations, and a primitive programming language with instructions for bi-directional movement among the storage locations and alteration of their contents one at a time. The names "Post–Turing program" and "Post–Turing machine" were used by Martin Davis in 1973–1974 (Davis 1973, p. 69ff). Later in 1980, Davis used the name "Turing–Post program" (Davis, in Steen p. 241).
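
A hedged sketch of what such a "program formulation" can look like: a tiny interpreter for a Post–Turing-style instruction list acting on a binary tape. The mnemonics (RIGHT, LEFT, PRINT, ERASE, GOTO, JUMP_IF_ZERO) and the example program are illustrative choices, not the exact instruction set used by Post or by Davis.

    # Illustrative Post–Turing-style interpreter: binary tape, one instruction per step.
    def run_post_turing(program, ones=()):
        tape = {i: 1 for i in ones}        # sparse two-way tape; unmentioned cells hold 0
        head, pc = 0, 0
        while 0 <= pc < len(program):      # running off the end of the program halts it
            op, arg = program[pc]
            if op == "RIGHT":
                head += 1
            elif op == "LEFT":
                head -= 1
            elif op == "PRINT":            # write a 1 in the scanned cell
                tape[head] = 1
            elif op == "ERASE":            # write a 0 in the scanned cell
                tape[head] = 0
            elif op == "GOTO":
                pc = arg
                continue
            elif op == "JUMP_IF_ZERO":     # branch on the scanned bit
                if tape.get(head, 0) == 0:
                    pc = arg
                    continue
            pc += 1
        return sorted(i for i, bit in tape.items() if bit == 1)

    # Example program (an assumption): slide right over a block of 1s and append one more.
    APPEND_ONE = [
        ("JUMP_IF_ZERO", 3),   # 0: first blank found -> go write a 1
        ("RIGHT", None),       # 1: still on a 1 -> step right
        ("GOTO", 0),           # 2: re-test the new cell
        ("PRINT", None),       # 3: write the 1, then fall off the end (halt)
    ]
    print(run_post_turing(APPEND_ONE, ones=(0, 1, 2)))   # [0, 1, 2, 3]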

View the full Wikipedia page for Post–Turing machine

Turing machine in the context of CoNP

In computational complexity theory, co-NP is a complexity class. A decision problem X is a member of co-NP if and only if its complement X̄ is in the complexity class NP. The class can be defined as follows: a decision problem is in co-NP if and only if for every no-instance we have a polynomial-length "certificate" and there is a polynomial-time algorithm that can be used to verify any purported certificate.

That is, co-NP is the set of decision problems where there exists a polynomial p and a polynomial-time bounded Turing machine M such that for every instance x, x is a no-instance if and only if: for some possible certificate c of length bounded by p(|x|), the Turing machine M accepts the pair (x, c).
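
As a hedged illustration of this verifier-style definition, consider TAUTOLOGY (is a Boolean formula true under every assignment?), a standard co-NP problem: a no-instance is certified by a single falsifying assignment, which a polynomial-time machine M can check. Representing formulas as Python callables and the function names below are illustrative simplifications.

    # Illustrative co-NP-style verifier for TAUTOLOGY: a no-instance (a non-tautology)
    # is certified by a falsifying assignment, checkable in polynomial time.
    def verify_not_tautology(formula, num_vars, certificate):
        # M(x, c): accept iff c is a falsifying assignment of the formula x.
        if len(certificate) != num_vars:       # certificate length is polynomially bounded
            return False
        return formula(certificate) is False   # a single evaluation of the formula

    # Example formula over variables v[0], v[1] (an assumption): v[0] or v[1],
    # which is not a tautology because it is falsified by the all-False assignment.
    formula = lambda v: v[0] or v[1]

    print(verify_not_tautology(formula, 2, [False, False]))  # True: a valid certificate
    print(verify_not_tautology(formula, 2, [True, False]))   # False: not falsifying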

View the full Wikipedia page for CoNP

Turing machine in the context of Quantum computing

A quantum computer is a (real or theoretical) computer that exploits superposed and entangled states. Quantum computers can be viewed as sampling from quantum systems that evolve in ways that may be described as operating on an enormous number of possibilities simultaneously, though still subject to strict computational constraints. By contrast, ordinary ("classical") computers operate according to deterministic rules. (A classical computer can, in principle, be replicated by a classical mechanical device, with only a simple multiple of time cost. On the other hand (it is believed), a quantum computer would require exponentially more time and energy to be simulated classically.) It is widely believed that a quantum computer could perform some calculations exponentially faster than any classical computer. For example, a large-scale quantum computer could break some widely used public-key cryptographic schemes and aid physicists in performing physical simulations. However, current hardware implementations of quantum computation are largely experimental and only suitable for specialized tasks.

The basic unit of information in quantum computing, the qubit (or "quantum bit"), serves the same function as the bit in ordinary or "classical" computing. However, unlike a classical bit, which can be in one of two states (a binary), a qubit can exist in a linear combination of two states known as a quantum superposition. The result of measuring a qubit is one of the two states given by a probabilistic rule. If a quantum computer manipulates the qubit in a particular way, wave interference effects amplify the probability of the desired measurement result. The design of quantum algorithms involves creating procedures that allow a quantum computer to perform this amplification.
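
A hedged numerical sketch of the "linear combination of two states" idea: a single qubit is represented by two complex amplitudes, and applying a Hadamard gate twice shows wave interference returning the qubit to |0⟩ with certainty. Plain Python complex arithmetic is used; the function names are illustrative.

    import math

    # A qubit as a pair of complex amplitudes (alpha, beta) for the states |0> and |1>;
    # measuring yields |0> with probability |alpha|^2 and |1> with probability |beta|^2.
    def hadamard(state):
        alpha, beta = state
        s = 1 / math.sqrt(2)
        return (s * (alpha + beta), s * (alpha - beta))

    def probabilities(state):
        alpha, beta = state
        return abs(alpha) ** 2, abs(beta) ** 2

    qubit = (1 + 0j, 0 + 0j)        # start in |0>
    qubit = hadamard(qubit)         # equal superposition: P(0) = P(1) = 0.5
    print(probabilities(qubit))
    qubit = hadamard(qubit)         # interference cancels the |1> amplitude
    print(probabilities(qubit))     # (~1.0, ~0.0): back to |0> with certainty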

View the full Wikipedia page for Quantum computing

Turing machine in the context of Abstract machine

In computer science, an abstract machine is a theoretical model that allows for a detailed and precise analysis of how a computer system functions. It is similar to a mathematical function in that it receives inputs and produces outputs based on predefined rules. Abstract machines differ from literal machines in that they are expected to perform correctly and independently of hardware. Abstract machines are "machines" because they allow step-by-step execution of programs; they are "abstract" because they ignore many aspects of actual (hardware) machines. A typical abstract machine consists of a definition in terms of input, output, and the set of allowable operations used to turn the former into the latter. They can be used for purely theoretical reasons as well as models for real-world computer systems. In the theory of computation, abstract machines are often used in thought experiments regarding computability or to analyse the complexity of algorithms. This use of abstract machines is fundamental to the field of computational complexity theory, such as with finite state machines, Mealy machines, push-down automata, and Turing machines.
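
A hedged minimal instance of the "input, output, and allowable operations" framing, using the simplest machine family named above: a finite state machine that decides whether a binary string contains an even number of 1s. The state names and transition table are illustrative.

    # Illustrative finite state machine: accept binary strings with an even number of 1s.
    TRANSITIONS = {
        ("even", "0"): "even", ("even", "1"): "odd",
        ("odd", "0"): "odd",   ("odd", "1"): "even",
    }

    def accepts(string, start="even", accepting=("even",)):
        state = start
        for symbol in string:                  # step-by-step execution of the input
            state = TRANSITIONS[(state, symbol)]
        return state in accepting

    print(accepts("1011"))   # False: three 1s
    print(accepts("1001"))   # True: two 1s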

View the full Wikipedia page for Abstract machine

Turing machine in the context of Universal machine

In computer science, a universal Turing machine (UTM) is a Turing machine capable of computing any computable sequence, as described by Alan Turing in his seminal paper "On Computable Numbers, with an Application to the Entscheidungsproblem". Common sense might say that a universal machine is impossible, but Turing proves that it is possible. He suggested that we may compare a human in the process of computing a real number to a machine that is only capable of a finite number of conditions, which will be called "m-configurations". He then described the operation of such a machine and argued that it is possible to invent a single machine which can be used to compute any computable sequence.

Turing introduced the idea of such a machine in 1936–1937.
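
A hedged sketch of the program-as-data idea behind the universal machine: one fixed interpreter receives the description of an arbitrary machine as ordinary input and simulates it. The textual rule encoding below is an illustrative choice, not Turing's original encoding of m-configurations; the example description is the unary-successor machine sketched under the core definition.

    # Illustrative "universal" interpreter: a single fixed program that runs any
    # machine handed to it as data, encoded as lines "state,read -> write,move,next".
    def universal(encoded_rules, tape, state="start", blank="_", max_steps=10_000):
        rules = {}
        for line in encoded_rules.strip().splitlines():     # decode the description
            left, right = line.split("->")
            st, sym = (part.strip() for part in left.split(","))
            wr, mv, nxt = (part.strip() for part in right.split(","))
            rules[(st, sym)] = (wr, int(mv), nxt)

        cells, head = dict(enumerate(tape)), 0
        for _ in range(max_steps):
            key = (state, cells.get(head, blank))
            if key not in rules:                            # no applicable rule: halt
                break
            write, move, state = rules[key]
            cells[head] = write
            head += move
        return "".join(cells[i] for i in sorted(cells))

    # The unary-successor machine, now supplied purely as data:
    DESCRIPTION = """
    start,1 -> 1, 1, start
    start,_ -> 1, 0, done
    """
    print(universal(DESCRIPTION, "111"))   # 1111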

View the full Wikipedia page for Universal machine

Turing machine in the context of Linear bounded automaton

In computer science, a linear bounded automaton (plural linear bounded automata, abbreviated LBA) is a restricted form of Turing machine.
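
The restriction is that the machine may only use the portion of the tape holding the input (conventionally delimited by end markers), so the available tape is bounded by a linear function of the input length. Below is a hedged sketch, reusing the rule format of the simulators above; the end-marker symbols and example rules are illustrative.

    # Illustrative linear bounded automaton: the input is written between end markers
    # "[" and "]" and the head may never move beyond them.
    def run_lba(rules, word, state="start", halt="halt", max_steps=10_000):
        tape = ["["] + list(word) + ["]"]
        head = 1
        for _ in range(max_steps):
            if state == halt:
                return "".join(tape[1:-1])
            write, move, state = rules[(state, tape[head])]
            tape[head] = write
            head += move
            if not 0 <= head < len(tape):          # the defining LBA restriction
                raise ValueError("head tried to leave the marked input region")
        return "".join(tape[1:-1])

    # Example rules (an assumption): uppercase every 'a', bounce off the right marker.
    RULES = {
        ("start", "a"): ("A", +1, "start"),
        ("start", "b"): ("b", +1, "start"),
        ("start", "]"): ("]", -1, "halt"),
    }
    print(run_lba(RULES, "aba"))   # AbA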

View the full Wikipedia page for Linear bounded automaton

Turing machine in the context of Μ-recursive function

In mathematical logic and computer science, a general recursive function, partial recursive function, or μ-recursive function is a partial function from natural numbers to natural numbers that is "computable" in an intuitive sense – as well as in a formal one. If the function is total, it is also called a total recursive function (sometimes shortened to recursive function). In computability theory, it is shown that the μ-recursive functions are precisely the functions that can be computed by Turing machines (this is one of the theorems that supports the Church–Turing thesis). The μ-recursive functions are closely related to primitive recursive functions, and their inductive definition (below) builds upon that of the primitive recursive functions. However, not every total recursive function is a primitive recursive function—the most famous example is the Ackermann function.

Other equivalent classes of functions are the functions of lambda calculus and the functions that can be computed by Markov algorithms.
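
A hedged sketch of the μ (minimization) operator over Python integers: μ searches for the least argument at which a given function returns 0, which is exactly the step that takes the class beyond the primitive recursive functions and makes μ-recursive functions partial in general. The helper names mu and isqrt are illustrative.

    # The mu operator: return the least y such that f(x1, ..., xk, y) == 0.
    # The search may run forever, which is why mu-recursive functions are partial.
    def mu(f):
        def search(*args):
            y = 0
            while f(*args, y) != 0:
                y += 1
            return y
        return search

    # Illustrative use: the integer square root defined by minimization,
    # isqrt(n) = least y with (y + 1)^2 > n.
    isqrt = mu(lambda n, y: 0 if (y + 1) ** 2 > n else 1)

    print([isqrt(n) for n in range(10)])   # [0, 1, 1, 1, 2, 2, 2, 2, 2, 3]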

View the full Wikipedia page for Μ-recursive function

Turing machine in the context of Hypercomputation

Hypercomputation or super-Turing computation is a set of hypothetical models of computation that can provide outputs that are not Turing-computable. For example, a machine that could solve the halting problem would be a hypercomputer; so too would one that could correctly evaluate every statement in Peano arithmetic.

The Church–Turing thesis states that any "computable" function that can be computed by a mathematician with pen and paper using a finite set of simple algorithms can be computed by a Turing machine. Hypercomputers compute functions that a Turing machine cannot and which are, hence, not computable in the Church–Turing sense.

View the full Wikipedia page for Hypercomputation


Turing machine in the context of Register machine

In mathematical logic and theoretical computer science, a register machine is a generic class of abstract machines, analogous to a Turing machine and thus Turing complete. Unlike a Turing machine that uses a tape and head, a register machine utilizes multiple uniquely addressed registers to store non-negative integers. There are several sub-classes of register machines, including counter machines, pointer machines, random-access machines (RAM), and random-access stored-program machines (RASP), each varying in complexity. These machines, particularly in theoretical studies, help in understanding computational processes. The concept of register machines can also be applied to virtual machines in practical computer science, for educational purposes and for reducing dependency on specific hardware architectures.
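
A hedged sketch of the simplest sub-class named above, a counter machine: a few registers holding non-negative integers and a program of increment, decrement-or-jump-if-zero, and halt instructions. The mnemonics and the example program (which adds register B into register A) are illustrative.

    # Illustrative counter (register) machine interpreter.
    # Instructions: ("INC", r), ("DECJZ", r, target) -- decrement register r, or jump
    # to target if r is already zero -- and ("HALT",).
    def run_counter_machine(program, registers):
        regs, pc = dict(registers), 0
        while program[pc][0] != "HALT":
            instr = program[pc]
            if instr[0] == "INC":
                regs[instr[1]] += 1
                pc += 1
            elif instr[0] == "DECJZ":
                _, r, target = instr
                if regs[r] == 0:
                    pc = target
                else:
                    regs[r] -= 1
                    pc += 1
        return regs

    # Example program (an assumption): move the contents of register B into register A.
    ADD_B_TO_A = [
        ("DECJZ", "B", 3),   # 0: if B == 0 we are done, otherwise B -= 1 ...
        ("INC", "A"),        # 1: ... and A += 1
        ("DECJZ", "Z", 0),   # 2: Z stays 0, so this acts as an unconditional jump to 0
        ("HALT",),           # 3
    ]
    print(run_counter_machine(ADD_B_TO_A, {"A": 2, "B": 3, "Z": 0}))  # A: 5, B: 0, Z: 0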

View the full Wikipedia page for Register machine

Turing machine in the context of Termination proof

In computer science, termination analysis is program analysis which attempts to determine whether the evaluation of a given program halts for each input. That is, it attempts to determine whether the input program computes a total function.

It is closely related to the halting problem, which is to determine whether a given program halts for a given input and which is undecidable. Termination analysis is even more difficult than the halting problem: in the model of Turing machines as the model of programs implementing computable functions, its goal is to decide whether a given Turing machine is a total Turing machine, and this problem is at level Π⁰₂ of the arithmetical hierarchy and thus strictly more difficult than the halting problem.
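
One standard way a termination analysis argues that a loop halts is to exhibit a ranking function (a loop variant): a quantity drawn from a well-founded order, such as the non-negative integers, that strictly decreases on every iteration. Below is a hedged Python sketch that checks such a variant at run time for Euclid's algorithm; the names are illustrative, and a dynamic check is of course much weaker than a static termination proof.

    # Illustrative run-time check of a ranking function (loop variant): if the variant
    # is a non-negative integer that strictly decreases each iteration, the loop halts.
    def gcd_with_variant(a, b):
        variant = lambda a, b: b               # proposed ranking function for Euclid's loop
        while b != 0:
            before = variant(a, b)
            a, b = b, a % b                    # Euclid's step
            after = variant(a, b)
            assert 0 <= after < before, "variant failed: no termination argument"
        return a

    print(gcd_with_variant(252, 105))   # 21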

View the full Wikipedia page for Termination proof