Big O notation is a mathematical notation that describes the limiting behavior of a function when the argument tends towards a particular value or infinity. Big O is a member of a family of notations invented by the German mathematicians Paul Bachmann and Edmund Landau, and expanded by others, collectively called Bachmann–Landau notation. The letter O was chosen by Bachmann to stand for Ordnung, meaning the order of approximation.
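For concreteness, the standard formal definition in the limit-to-infinity case reads as follows (a classical statement supplied for reference; the passage above does not spell it out):

\[
f(x) = O\bigl(g(x)\bigr) \ \text{as } x \to \infty
\quad\Longleftrightarrow\quad
\text{there exist } M > 0 \text{ and } x_0 \text{ such that } |f(x)| \le M\,|g(x)| \text{ for all } x \ge x_0 .
\]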
In computer science, big O notation is used to classify algorithms according to how their run time or space requirements grow as the input size grows. In analytic number theory, big O notation is often used to express bounds on the growth of an arithmetical function; one well-known example is the remainder term in the prime number theorem. In mathematical analysis, including calculus, big O notation is used to bound the error when truncating a power series and to express the quality of approximation of a real- or complex-valued function by a simpler function.
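A minimal sketch of the computer-science usage (the function names and test data below are illustrative choices, not taken from any source): the nested-loop duplicate check performs on the order of n² comparisons and is therefore classified as O(n²), while the hash-set version does a constant amount of work per element on average and is classified as O(n).

def has_duplicates_quadratic(items):
    """O(n**2): compares every pair of elements."""
    n = len(items)
    for i in range(n):
        for j in range(i + 1, n):
            if items[i] == items[j]:
                return True
    return False

def has_duplicates_linear(items):
    """O(n) on average: one hash-set lookup and one insert per element."""
    seen = set()
    for item in items:
        if item in seen:
            return True
        seen.add(item)
    return False

if __name__ == "__main__":
    # One duplicate at the very end: the worst case for both functions.
    data = list(range(10_000)) + [9_999]
    print(has_duplicates_quadratic(data))  # True, after roughly n**2/2 comparisons
    print(has_duplicates_linear(data))     # True, after roughly n set operations

Both functions return the same answer; big O notation captures how their costs scale differently as the input grows, independent of constant factors or hardware.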
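The number-theoretic and analytic uses can likewise be made concrete with two classical statements (standard forms quoted for illustration; the constant c below is an unspecified positive constant, not given in the passage): the prime number theorem with its remainder term, and the truncation of the power series for e^x:

\[
\pi(x) = \operatorname{Li}(x) + O\!\bigl(x\, e^{-c\sqrt{\ln x}}\bigr) \quad \text{for some constant } c > 0,
\]
\[
e^{x} = 1 + x + \frac{x^{2}}{2} + O\!\bigl(x^{3}\bigr) \quad \text{as } x \to 0 .
\]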