Taylor series in the context of Complex variable




⭐ Core Definition: Taylor series

In mathematical analysis, the Taylor series or Taylor expansion of a function is an infinite sum of terms that are expressed in terms of the function's derivatives at a single point. For most common functions, the function and the sum of its Taylor series are equal near this point. Taylor series are named after Brook Taylor, who introduced them in 1715. A Taylor series is also called a Maclaurin series when 0 is the point where the derivatives are considered, after Colin Maclaurin, who made extensive use of this special case of Taylor series in the 18th century.

The partial sum formed by the first n + 1 terms of a Taylor series is a polynomial of degree n that is called the nth Taylor polynomial of the function. Taylor polynomials are approximations of a function, which generally become more accurate as n increases. Taylor's theorem gives quantitative estimates on the error introduced by the use of such approximations. If the Taylor series of a function is convergent, its sum is the limit of the infinite sequence of the Taylor polynomials. A function may differ from the sum of its Taylor series, even if its Taylor series is convergent. A function is analytic at a point x if it is equal to the sum of its Taylor series in some open interval (or open disk in the complex plane) containing x. This implies that the function is analytic at every point of the interval (or disk).
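
As a concrete illustration (a minimal sketch added here, assuming the sympy library; the helper name taylor_polynomial is illustrative), the nth Taylor polynomial of exp(x) about 0 is 1 + x + x²/2! + ⋯ + xⁿ/n!, and its error at a fixed point shrinks quickly as n grows:

    # Sketch: compare exp(x) with its nth Taylor polynomials at x = 1.
    import sympy as sp

    x = sp.symbols('x')
    f = sp.exp(x)

    def taylor_polynomial(expr, var, about, degree):
        """Degree-`degree` Taylor polynomial of `expr` about the point `about`."""
        return sum(expr.diff(var, i).subs(var, about) / sp.factorial(i)
                   * (var - about)**i for i in range(degree + 1))

    for n in (1, 2, 4, 8):
        p = taylor_polynomial(f, x, 0, n)
        error = abs((f - p).subs(x, 1))
        print(n, sp.N(error))   # the error at x = 1 decreases rapidly with n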


Taylor series in the context of Complex analysis

Complex analysis, traditionally known as the theory of functions of a complex variable, is the branch of mathematical analysis that investigates functions of complex numbers. It is helpful in many branches of mathematics, including real analysis, algebraic geometry, number theory, analytic combinatorics, and applied mathematics, as well as in physics, including the branches of hydrodynamics, thermodynamics, quantum mechanics, and twistor theory. By extension, use of complex analysis also has applications in engineering fields such as nuclear, aerospace, mechanical and electrical engineering.

At first glance, complex analysis is the study of holomorphic functions, that is, the differentiable functions of a complex variable. By contrast with the real case, a holomorphic function is always infinitely differentiable and equal to the sum of its Taylor series in some neighborhood of each point of its domain. This makes the methods and results of complex analysis significantly different from those of real analysis. In particular, unlike in the real case, the domain of every holomorphic function can be uniquely extended to almost the whole complex plane. This implies that the study of real analytic functions often needs the power of complex analysis. This is, in particular, the case in analytic combinatorics.
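
A standard way to see the difference (a numeric sketch added here, not taken from the original text): the real function 1/(1 + x²) is smooth on all of ℝ, yet its Taylor series about 0 converges only for |x| < 1, and the explanation lives in the complex plane, where 1/(1 + z²) has poles at z = ±i at distance 1 from the origin:

    # Partial sums of the Taylor series of 1/(1 + x^2) about 0:
    #   sum_{n=0}^{N} (-1)^n x^(2n).
    # Convergence fails for |x| > 1, which only the complex poles at +-i explain.
    def partial_sum(x, N):
        return sum((-1)**n * x**(2*n) for n in range(N + 1))

    for x in (0.5, 1.5):
        exact = 1.0 / (1.0 + x*x)
        for N in (5, 10, 20):
            print(x, N, partial_sum(x, N), "exact:", exact)
    # For x = 0.5 the partial sums settle at 0.8; for x = 1.5 they grow without bound.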

View the full Wikipedia page for Complex analysis

Taylor series in the context of Analytic function

In mathematics, an analytic function is a function that is locally given by a convergent power series. There exist both real analytic functions and complex analytic functions. Functions of each type are infinitely differentiable, but complex analytic functions exhibit properties that do not generally hold for real analytic functions.

A function is analytic if and only if, for every x₀ in its domain, its Taylor series about x₀ converges to the function in some neighborhood of x₀. This is stronger than merely being infinitely differentiable at x₀, and therefore having a well-defined Taylor series; the Fabius function provides an example of a function that is infinitely differentiable but not analytic.
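
The Fabius function is nowhere analytic; a simpler classic example of the same phenomenon at a single point (sketched here with sympy, as an addition to the original text) is f(x) = exp(−1/x²), extended by f(0) = 0: every derivative of f vanishes at 0, so its Taylor series at 0 is identically zero, yet f is nonzero at every other point:

    # f(x) = exp(-1/x^2), with f(0) = 0, is infinitely differentiable, but every
    # derivative has limit 0 at x = 0, so the Taylor series at 0 is identically 0.
    import sympy as sp

    x = sp.symbols('x')
    f = sp.exp(-1 / x**2)

    for k in range(5):
        print(k, sp.limit(sp.diff(f, x, k), x, 0))    # every limit is 0

    print(f.subs(x, sp.Rational(1, 2)))               # exp(-4) != 0, so f differs from its Taylor series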

View the full Wikipedia page for Analytic function

Taylor series in the context of Functions of a complex variable

Complex analysis, traditionally known as the theory of functions of a complex variable, is the branch of mathematical analysis that investigates functions of complex numbers; see the section on Complex analysis above, which covers the same material.

View the full Wikipedia page for Functions of a complex variable

Taylor series in the context of Holomorphic functions

In mathematics, a holomorphic function is a complex-valued function of one or more complex variables that is complex differentiable in a neighbourhood of each point in a domain in complex coordinate space ℂⁿ. The existence of a complex derivative in a neighbourhood is a very strong condition: it implies that a holomorphic function is infinitely differentiable and locally equal to its own Taylor series (is analytic). Holomorphic functions are the central objects of study in complex analysis.

Though the term analytic function is often used interchangeably with "holomorphic function", the word "analytic" is defined in a broader sense to denote any function (real, complex, or of more general type) that can be written as a convergent power series in a neighbourhood of each point in its domain. That all holomorphic functions are complex analytic functions, and vice versa, is a major theorem in complex analysis.
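
To make the "locally equal to its own Taylor series" statement concrete (a numerical sketch added here; the function exp and the sampling radius are illustrative choices), the Taylor coefficients of a holomorphic function about 0 can be recovered from its values on a small circle, which discretizes Cauchy's coefficient formula aₙ = (1/2πi) ∮ f(z)/zⁿ⁺¹ dz:

    # Approximate the Taylor coefficients of f(z) = exp(z) about 0 by sampling f
    # on the circle |z| = radius and taking a discrete Fourier transform; this is
    # the trapezoidal-rule approximation of Cauchy's coefficient formula.
    import numpy as np
    from math import factorial

    def taylor_coefficients(f, radius=0.5, num_samples=64):
        k = np.arange(num_samples)
        z = radius * np.exp(2j * np.pi * k / num_samples)   # sample points on the circle
        fft = np.fft.fft(f(z)) / num_samples                # discretized contour integrals
        return fft / radius ** np.arange(num_samples)       # a_0, a_1, a_2, ...

    coefficients = taylor_coefficients(np.exp)
    for n in range(6):
        print(n, coefficients[n].real, "expected:", 1 / factorial(n))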

View the full Wikipedia page for Holomorphic functions

Taylor series in the context of Power series

In mathematics, a power series (in one variable) is an infinite series of the form

    a₀ + a₁(x − c) + a₂(x − c)² + a₃(x − c)³ + ⋯,

where aₙ represents the coefficient of the nth term and c is a constant called the center of the series. Power series are useful in mathematical analysis, where they arise as Taylor series of infinitely differentiable functions. In fact, Borel's theorem implies that every power series is the Taylor series of some smooth function.

In many situations, the center c is equal to zero, for instance for Maclaurin series. In such cases, the power series takes the simpler form

    a₀ + a₁x + a₂x² + a₃x³ + ⋯
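
For example (a small sketch added here), the geometric series 1 + x + x² + ⋯ is the Maclaurin series of 1/(1 − x): every coefficient aₙ equals 1, the center is c = 0, and the partial sums converge for |x| < 1:

    # Partial sums of the power series 1 + x + x^2 + ... (all a_n = 1, center c = 0)
    # compared with 1 / (1 - x), the function it represents for |x| < 1.
    def geometric_partial_sum(x, N):
        return sum(x**n for n in range(N + 1))

    x = 0.3
    for N in (2, 5, 10, 20):
        print(N, geometric_partial_sum(x, N), "limit:", 1 / (1 - x))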

View the full Wikipedia page for Power series

Taylor series in the context of First difference

In mathematics, a recurrence relation is an equation according to which the nth term of a sequence of numbers is equal to some combination of the previous terms. Often, only k previous terms of the sequence appear in the equation, for a parameter k that is independent of n; this number k is called the order of the relation. If the values of the first k numbers in the sequence have been given, the rest of the sequence can be calculated by repeatedly applying the equation.

In linear recurrences, the nth term is equated to a linear function of the k previous terms. A famous example is the recurrence for the Fibonacci numbers,

    Fₙ = Fₙ₋₁ + Fₙ₋₂,

where the order is two and the linear function merely adds the two previous terms. This example is a linear recurrence with constant coefficients, because the coefficients of the linear function (1 and 1) are constants that do not depend on n. For these recurrences, one can express the general term of the sequence as a closed-form expression of n. As well, linear recurrences with polynomial coefficients depending on n are also important, because many common elementary functions and special functions have a Taylor series whose coefficients satisfy such a recurrence relation (see holonomic function).
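
As a sketch of that last point (added here, with sympy; the generating function x/(1 − x − x²) is the standard one for the Fibonacci numbers): the numbers produced by the recurrence are exactly the Taylor coefficients of x/(1 − x − x²) about 0:

    # The recurrence F(n) = F(n-1) + F(n-2), F(0) = 0, F(1) = 1, generates the
    # Taylor coefficients of the rational function x / (1 - x - x^2) about 0.
    import sympy as sp

    def fibonacci(count):
        values = [0, 1]
        while len(values) < count:
            values.append(values[-1] + values[-2])     # apply the recurrence
        return values[:count]

    x = sp.symbols('x')
    taylor = sp.series(x / (1 - x - x**2), x, 0, 10).removeO()
    coefficients = [taylor.coeff(x, n) for n in range(10)]

    print(fibonacci(10))      # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
    print(coefficients)       # the same list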

View the full Wikipedia page for First difference

Taylor series in the context of Brook Taylor

Brook Taylor FRS (18 August 1685 – 29 December 1731) was an English mathematician and barrister best known for several results in mathematical analysis. Taylor's most famous developments are Taylor's theorem and the Taylor series, which are essential tools for the local approximation of functions near specific points.

View the full Wikipedia page for Brook Taylor

Taylor series in the context of Colin Maclaurin

Colin Maclaurin, FRS (/məˈklɔːrən/; Scottish Gaelic: Cailean MacLabhruinn; February 1698 – 14 June 1746) was a Scottish mathematician who made important contributions to geometry and algebra. He is also known for being a child prodigy and holding the record for being the youngest professor. The Maclaurin series, a special case of the Taylor series, is named after him.

Owing to changes in orthography since that time (his name was originally rendered as M'Laurine), his surname is alternatively written MacLaurin.

View the full Wikipedia page for Colin Maclaurin

Taylor series in the context of Taylor's theorem

In calculus, Taylor's theorem gives an approximation of a k-times differentiable function around a given point by a polynomial of degree k, called the kth-order Taylor polynomial. For a smooth function, the Taylor polynomial is the truncation at order k of the Taylor series of the function. The first-order Taylor polynomial is the linear approximation of the function, and the second-order Taylor polynomial is often referred to as the quadratic approximation. There are several versions of Taylor's theorem, some giving explicit estimates of the approximation error of the function by its Taylor polynomial.
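
A quantitative sketch (added here; the choice of f(x) = eˣ and the expansion point 0 are illustrative): by the Lagrange form of the remainder, the error of the kth Taylor polynomial of eˣ at x = 1 is at most e/(k + 1)!, and the actual error stays below that bound:

    # Compare the actual error of the kth Taylor polynomial of exp at x = 1
    # with the Lagrange remainder bound  e * |x|^(k+1) / (k+1)!.
    from math import exp, factorial

    x = 1.0
    for k in range(1, 8):
        taylor_value = sum(x**i / factorial(i) for i in range(k + 1))
        actual_error = abs(exp(x) - taylor_value)
        lagrange_bound = exp(1) * abs(x)**(k + 1) / factorial(k + 1)
        print(k, actual_error, "<=", lagrange_bound)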

Taylor's theorem is named after Brook Taylor, who stated a version of it in 1715, although an earlier version of the result was already mentioned in 1671 by James Gregory.

View the full Wikipedia page for Taylor's theorem

Taylor series in the context of Multipole expansion

A multipole expansion is a mathematical series representing a function that depends on angles, usually the two angles used in the spherical coordinate system (the polar and azimuthal angles) for three-dimensional Euclidean space, ℝ³. Multipole expansions are useful because, similar to Taylor series, oftentimes only the first few terms are needed to provide a good approximation of the original function. The function being expanded may be real- or complex-valued and is defined either on ℝ³, or less often on ℝⁿ for some other n.

Multipole expansions are used frequently in the study of electromagnetic and gravitational fields, where the fields at distant points are given in terms of sources in a small region. The multipole expansion with angles is often combined with an expansion in radius. Such a combination gives an expansion describing a function throughout three-dimensional space.
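
A minimal numerical sketch (added here; it uses the classic multipole expansion of the potential 1/|r − r′| in Legendre polynomials, with illustrative numbers): for r > r′, 1/|r − r′| = Σₗ (r′ˡ/rˡ⁺¹) Pₗ(cos γ), and a few terms already approximate the exact value well when r′/r is small:

    # Truncated multipole expansion of 1/|r - r'| for r > r':
    #   1/|r - r'| = sum over l of (r'^l / r^(l+1)) * P_l(cos gamma),
    # where gamma is the angle between r and r'.  P_l is built with the
    # three-term Legendre recurrence.
    from math import cos, sqrt, pi

    def legendre(l, x):
        if l == 0:
            return 1.0
        p_prev, p = 1.0, x
        for k in range(1, l):
            p_prev, p = p, ((2*k + 1) * x * p - k * p_prev) / (k + 1)
        return p

    r, r_src, gamma = 10.0, 2.0, pi / 3     # field point, source point, angle between them
    exact = 1.0 / sqrt(r**2 + r_src**2 - 2*r*r_src*cos(gamma))
    for terms in (1, 2, 4, 8):
        approx = sum((r_src**l / r**(l + 1)) * legendre(l, cos(gamma))
                     for l in range(terms))
        print(terms, approx, "exact:", exact)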

View the full Wikipedia page for Multipole expansion

Taylor series in the context of Euler numbers

In mathematics, the Euler numbers are a sequence Eₙ of integers (sequence A122045 in the OEIS) defined by the Taylor series expansion

    1/cosh(t) = 2/(eᵗ + e⁻ᵗ) = ∑ (Eₙ/n!) tⁿ   (summed over n = 0, 1, 2, …),

where cosh(t) is the hyperbolic cosine function. The Euler numbers are related to a special value of the Euler polynomials, namely

    Eₙ = 2ⁿ Eₙ(1/2).
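
As a quick check of the definition (a sketch added here, assuming sympy and its built-in euler function): multiplying the coefficient of tⁿ in the Maclaurin expansion of 1/cosh(t) by n! recovers Eₙ:

    # The coefficient of t^n in the Maclaurin series of 1/cosh(t) is E_n / n!.
    import sympy as sp

    t = sp.symbols('t')
    expansion = sp.series(1 / sp.cosh(t), t, 0, 10).removeO()

    for n in range(10):
        from_series = expansion.coeff(t, n) * sp.factorial(n)
        print(n, from_series, sp.euler(n))   # the two columns agree; odd-index E_n are 0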

View the full Wikipedia page for Euler numbers

Taylor series in the context of Elementary function

In mathematics, an elementary function is a function of a single variable (real or complex) that is typically encountered by beginners. The basic elementary functions are polynomial functions, rational functions, the trigonometric functions, the exponential and logarithm functions, the n-th root, and the inverse trigonometric functions, as well as those functions obtained by addition, multiplication, division, and composition of these. Some functions which are encountered by beginners are not elementary, such as the absolute value function and piecewise-defined functions. More generally, in modern mathematics, elementary functions comprise the set of functions previously enumerated, all algebraic functions (not often encountered by beginners), and all functions obtained by roots of a polynomial whose coefficients are elementary.

This list of elementary functions was originally set forth by Joseph Liouville in 1833. A key property is that all elementary functions have derivatives of any order, which are also elementary, and can be algorithmically computed by applying the differentiation rules (or the rules for implicit differentiation in the case of roots). The Taylor series of an elementary function converges in a neighborhood of every point of its domain. More generally, elementary functions are global analytic functions, defined (possibly with multiple values) for every complex argument, except at isolated points. In contrast, antiderivatives of elementary functions need not be elementary, and it is difficult to decide whether a specific elementary function has an elementary antiderivative.
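
For instance (a small sketch added here with sympy; the particular function is an arbitrary illustrative choice), differentiating an elementary function by the usual rules returns another elementary function built from the same basic pieces:

    # The derivative of an elementary function, computed by the differentiation
    # rules, is again elementary (a combination of the same basic functions).
    import sympy as sp

    x = sp.symbols('x')
    f = sp.exp(x) * sp.sin(x) / (1 + x**2)
    print(sp.simplify(sp.diff(f, x)))   # still built from exp, sin, cos and rational terms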

View the full Wikipedia page for Elementary function

Taylor series in the context of Analytic functions

In mathematics, an analytic function is a function that is locally given by a convergent power series; see the section on Analytic function above, which covers the same material.

View the full Wikipedia page for Analytic functions

Taylor series in the context of Wald test

In statistics, the Wald test (named after Abraham Wald) assesses constraints on statistical parameters based on the weighted distance between the unrestricted estimate and its hypothesized value under the null hypothesis, where the weight is the precision of the estimate. Intuitively, the larger this weighted distance, the less likely it is that the constraint is true. While the finite-sample distributions of Wald tests are generally unknown, the test statistic has an asymptotic χ² distribution under the null hypothesis, a fact that can be used to determine statistical significance.

Together with the Lagrange multiplier test and the likelihood-ratio test, the Wald test is one of three classical approaches to hypothesis testing. An advantage of the Wald test over the other two is that it only requires the estimation of the unrestricted model, which lowers the computational burden as compared to the likelihood-ratio test. However, a major disadvantage is that (in finite samples) it is not invariant to changes in the representation of the null hypothesis; in other words, algebraically equivalent expressions of a non-linear parameter restriction can lead to different values of the test statistic. That is because the Wald statistic is derived from a Taylor expansion, and different ways of writing equivalent nonlinear expressions lead to nontrivial differences in the corresponding Taylor coefficients. Another aberration, known as the Hauck–Donner effect, can occur in binomial models when the estimated (unconstrained) parameter is close to the boundary of the parameter space (for instance, a fitted probability being extremely close to zero or one), which results in the Wald test no longer monotonically increasing in the distance between the unconstrained and constrained parameter.
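
A small numerical sketch of the non-invariance point (added here; the binomial data and sample size are illustrative, not from the original text): testing H₀: p = 0.5 on the probability scale and testing the algebraically equivalent restriction log(p/(1 − p)) = 0 on the log-odds scale give different Wald statistics from the same data:

    # Wald tests of the same null hypothesis written two equivalent ways:
    #   (1) p = 0.5 on the probability scale;
    #   (2) log(p / (1 - p)) = 0 on the log-odds scale (delta-method variance).
    # The two statistics differ even though the restrictions are equivalent.
    from math import log, sqrt

    n, successes = 20, 15                 # illustrative data
    p_hat = successes / n                 # 0.75

    # (1) probability scale
    se_p = sqrt(p_hat * (1 - p_hat) / n)
    wald_p = ((p_hat - 0.5) / se_p) ** 2

    # (2) log-odds scale
    psi_hat = log(p_hat / (1 - p_hat))
    se_psi = sqrt(1 / (n * p_hat * (1 - p_hat)))
    wald_psi = ((psi_hat - 0.0) / se_psi) ** 2

    print(wald_p, wald_psi)   # about 6.67 vs about 4.53; the 5% chi-squared(1) cutoff is 3.84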

View the full Wikipedia page for Wald test