Zero function in the context of Division by zero

⭐ Core Definition: Zero function

0 (zero) is a number representing an empty quantity. Adding 0 to (or subtracting 0 from) any number leaves that number unchanged; in mathematical terminology, 0 is the additive identity of the integers, rational numbers, real numbers, and complex numbers, as well as of other algebraic structures. Multiplying any number by 0 results in 0; consequently, a division by 0 cannot be undone by multiplication (no unique quotient exists), so dividing by 0 is generally considered to be undefined in arithmetic.
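As a small illustration (a Python sketch written for this page, not part of the original text), the checks below confirm that adding 0 leaves a number unchanged and that multiplying by 0 yields 0, while dividing by 0 raises an error instead of producing a value:

    for n in (7, -3.5, 0):
        assert n + 0 == n      # adding 0 leaves n unchanged (additive identity)
        assert n * 0 == 0      # multiplying by 0 yields 0

    try:
        1 / 0
    except ZeroDivisionError as err:
        print("division by zero is undefined:", err)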

As a numerical digit, 0 plays a crucial role in decimal notation: it indicates that the power of ten corresponding to the place containing a 0 does not contribute to the total. For example, "205" in decimal means two hundreds, no tens, and five ones. The same principle applies in place-value notations that use a base other than ten, such as binary and hexadecimal. The modern use of 0 in this manner derives from Indian mathematics, which was transmitted to Europe via medieval Islamic mathematicians and popularized by Fibonacci. It was independently used by the Maya.
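The place-value reading above can be sketched in a few lines of Python; digits_of is a hypothetical helper written for this page, not a standard function:

    def digits_of(n, base=10):
        # Digits of n in the given base, most significant first; a 0 digit marks
        # a place whose power of the base contributes nothing to the total.
        digits = []
        while n:
            n, d = divmod(n, base)
            digits.append(d)
        return digits[::-1] or [0]

    assert digits_of(205, 10) == [2, 0, 5]                  # 2*100 + 0*10 + 5*1
    assert digits_of(205, 2)  == [1, 1, 0, 0, 1, 1, 0, 1]   # binary 11001101
    assert digits_of(205, 16) == [12, 13]                   # hexadecimal CD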

Zero function in the context of Church–Turing thesis

In computability theory, the Church–Turing thesis (also known as the computability thesis, the Turing–Church thesis, the Church–Turing conjecture, Church's thesis, Church's conjecture, and Turing's thesis) is a thesis about the nature of computable functions. It states that a function on the natural numbers can be calculated by an effective method if and only if it is computable by a Turing machine. The thesis is named after the American mathematician Alonzo Church and the British mathematician Alan Turing. Before computable functions were given a precise definition, mathematicians often used the informal term effectively calculable to describe functions that are computable by paper-and-pencil methods. In the 1930s, several independent attempts were made to formalize the notion of computability:

  • In 1933, Kurt Gödel, with Jacques Herbrand, formalized the definition of the class of general recursive functions: the smallest class of functions (with arbitrarily many arguments) that is closed under composition, recursion, and minimization, and includes zero, successor, and all projections; these basic functions are sketched in code after this list.
  • In 1936, Alonzo Church created a method for defining functions called the λ-calculus. Within λ-calculus, he defined an encoding of the natural numbers called the Church numerals. A function on the natural numbers is called λ-computable if the corresponding function on the Church numerals can be represented by a term of the λ-calculus. A small Church-numeral sketch likewise follows the list.
  • Also in 1936, before learning of Church's work, Alan Turing created a theoretical model for machines, now called Turing machines, that could carry out calculations from inputs by manipulating symbols on a tape. Given a suitable encoding of the natural numbers as sequences of symbols, a function on the natural numbers is called Turing computable if some Turing machine computes the corresponding function on encoded natural numbers. A toy Turing-machine interpreter also appears after the list.

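As noted in the first item above, every general recursive function is built from a few basic functions. The following is a minimal sketch in Python, written for this page (the names zero, succ, proj, and add are illustrative, not from any library), showing the zero function, successor, and projections, with addition defined from them by primitive recursion:

    def zero(*args):
        # The zero function: returns 0 for any arguments.
        return 0

    def succ(n):
        # The successor function: n -> n + 1.
        return n + 1

    def proj(i):
        # Projection: a function that returns its i-th argument (0-based).
        return lambda *args: args[i]

    def add(m, n):
        # Addition by primitive recursion on n:
        #   add(m, 0)     = proj(0)(m) = m
        #   add(m, n + 1) = succ(add(m, n))
        if n == 0:
            return proj(0)(m)
        return succ(add(m, n - 1))

    assert add(3, 4) == 7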
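The Church numerals from the second item can be imitated with ordinary Python lambdas; this is only a sketch, and to_int is an illustrative decoder rather than part of the λ-calculus itself. The numeral for 0 applies its function argument zero times, and successor adds one more application:

    zero = lambda f: lambda x: x                      # Church numeral 0
    succ = lambda n: lambda f: lambda x: f(n(f)(x))   # successor: n -> n + 1

    def to_int(n):
        # Decode a Church numeral by counting how many times f is applied.
        return n(lambda k: k + 1)(0)

    three = succ(succ(succ(zero)))
    assert to_int(zero) == 0
    assert to_int(three) == 3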
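A Turing machine from the third item can be imitated with a small interpreter. The transition-table format and the unary encoding (the number n written as n consecutive 1s) are assumptions made for this sketch; the machine shown computes the successor function:

    def run_turing_machine(tape, transitions, state="scan", blank="_"):
        # One-tape Turing machine: transitions maps (state, symbol) to
        # (new_state, symbol_to_write, head_move) with head_move in {-1, 0, +1}.
        cells = dict(enumerate(tape))
        head = 0
        while state != "halt":
            symbol = cells.get(head, blank)
            state, write, move = transitions[(state, symbol)]
            cells[head] = write
            head += move
        return "".join(cells[i] for i in sorted(cells)).strip(blank)

    # Successor on unary input: scan right over the 1s, write one more 1, halt.
    successor_tm = {
        ("scan", "1"): ("scan", "1", +1),
        ("scan", "_"): ("halt", "1", 0),
    }

    assert run_turing_machine("111", successor_tm) == "1111"   # 3 -> 4
    assert run_turing_machine("_", successor_tm) == "1"        # 0 -> 1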
Church, Kleene, and Turing proved that these three formally defined classes of computable functions coincide: a function is λ-computable if and only if it is Turing computable, and if and only if it is general recursive. This has led mathematicians and computer scientists to believe that the concept of computability is accurately characterized by these three equivalent processes. Other formal attempts to characterize computability have subsequently strengthened this belief.
