Inner product in the context of Complete metric space




⭐ Core Definition: Inner product

In mathematics, an inner product space is a real or complex vector space endowed with an operation called an inner product. The inner product of two vectors in the space is a scalar, often denoted with angle brackets, as in ⟨a, b⟩. Inner products allow formal definitions of intuitive geometric notions, such as lengths, angles, and orthogonality (zero inner product) of vectors. Inner product spaces generalize Euclidean vector spaces, in which the inner product is the dot product or scalar product of Cartesian coordinates. Inner product spaces of infinite dimensions are widely used in functional analysis. Inner product spaces over the field of complex numbers are sometimes referred to as unitary spaces. The first usage of the concept of a vector space with an inner product is due to Giuseppe Peano, in 1898.
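
As a concrete illustration (a minimal self-contained sketch, not tied to any particular library), the standard inner product on Cⁿ can be computed directly, with orthogonality appearing as a zero inner product:

```python
import math

def inner(u, v):
    """Standard inner product on C^n: sum of u_i times conjugate(v_i)."""
    return sum(a * b.conjugate() for a, b in zip(u, v))

u = [1 + 0j, 2 + 0j]
v = [2 + 0j, -1 + 0j]

print(inner(u, v))                   # 0j -- u and v are orthogonal
print(math.sqrt(inner(u, u).real))   # induced length of u, namely sqrt(5)
```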

An inner product naturally induces an associated norm (the norm of a vector v is denoted ‖v‖); so, every inner product space is a normed vector space. If this normed space is also complete (that is, a Banach space) then the inner product space is a Hilbert space. If an inner product space H is not a Hilbert space, it can be extended by completion to a Hilbert space H̄. This means that H is a linear subspace of H̄, the inner product of H is the restriction of that of H̄, and H is dense in H̄ for the topology defined by the norm.


In this Dossier

Inner product in the context of Dot product

In mathematics, the dot product is an algebraic operation that takes two equal-length sequences of numbers (usually coordinate vectors), and returns a single number. In Euclidean geometry, the scalar product of two vectors is the dot product of their Cartesian coordinates, and is independent from the choice of a particular Cartesian coordinate system. The terms "dot product" and "scalar product" are often used interchangeably when a Cartesian coordinate system has been fixed once and for all. Since the scalar product is a particular inner product, the term "inner product" is also often used.

Algebraically, the dot product is the sum of the products of the corresponding entries of the two sequences of numbers. Geometrically, the scalar product of two vectors is the product of their lengths and the cosine of the angle between them. These definitions are equivalent when using Cartesian coordinates. In modern geometry, Euclidean spaces are often defined by using vector spaces. In this case, the scalar product is used for defining lengths (the length of a vector is the square root of the scalar product of the vector by itself) and angles (the cosine of the angle between two vectors is the quotient of their scalar product by the product of their lengths).
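
The algebraic and geometric descriptions above can be checked against each other in a few lines of Python (a self-contained sketch):

```python
import math

def dot(u, v):
    """Algebraic definition: sum of products of corresponding entries."""
    return sum(a * b for a, b in zip(u, v))

u, v = [3.0, 4.0], [4.0, 3.0]

len_u = math.sqrt(dot(u, u))             # 5.0: length as sqrt of self dot product
len_v = math.sqrt(dot(v, v))             # 5.0
cos_angle = dot(u, v) / (len_u * len_v)  # geometric definition, solved for cos
print(cos_angle)                         # 0.96
```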

View the full Wikipedia page for Dot product

Inner product in the context of Hilbert space

In mathematics, a Hilbert space is a real or complex inner product space that is also a complete metric space with respect to the metric induced by the inner product. It generalizes the notion of Euclidean space to infinite dimensions. The inner product, which is the analog of the dot product from vector calculus, allows lengths and angles to be defined. Furthermore, completeness means that there are enough limits in the space to allow the techniques of calculus to be used. A Hilbert space is a special case of a Banach space.

Hilbert spaces were studied beginning in the first decade of the 20th century by David Hilbert, Erhard Schmidt, and Frigyes Riesz. They are indispensable tools in the theories of partial differential equations, quantum mechanics, Fourier analysis (which includes applications to signal processing and heat transfer), and ergodic theory (which forms the mathematical underpinning of thermodynamics). John von Neumann coined the term Hilbert space for the abstract concept that underlies many of these diverse applications. The success of Hilbert space methods ushered in a very fruitful era for functional analysis. Apart from the classical Euclidean vector spaces, examples of Hilbert spaces include spaces of square-integrable functions, spaces of sequences, Sobolev spaces consisting of generalized functions, and Hardy spaces of holomorphic functions.

View the full Wikipedia page for Hilbert space

Inner product in the context of Riemannian metric

In differential geometry, a Riemannian manifold (or Riemann space) is a geometric space on which many geometric notions such as distance, angles, length, volume, and curvature are defined. Euclidean space, the n-sphere, hyperbolic space, and smooth surfaces in three-dimensional space, such as ellipsoids and paraboloids, are all examples of Riemannian manifolds. Riemannian manifolds take their name from German mathematician Bernhard Riemann, who first conceptualized them in 1854.

Formally, a Riemannian metric (or just a metric) on a smooth manifold is a smoothly varying choice of inner product for each tangent space of the manifold. A Riemannian manifold is a smooth manifold together with a Riemannian metric. The techniques of differential and integral calculus are used to pull geometric data out of the Riemannian metric. For example, integration leads to the Riemannian distance function, whereas differentiation is used to define curvature and parallel transport.
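
As an illustrative sketch of how integration pulls distance out of the metric, the code below integrates the metric norm of a curve's velocity. The `sphere_metric` (the unit sphere in spherical coordinates) and the midpoint-rule integration are assumptions chosen for demonstration, not a standard API:

```python
import math

def sphere_metric(theta, phi):
    # Riemannian metric of the unit sphere in coordinates (theta, phi):
    # the matrix of the inner product on each tangent space.
    return [[1.0, 0.0], [0.0, math.sin(theta) ** 2]]

def curve_length(curve, velocity, metric, n=10000):
    """Approximate the length of a curve t -> (theta, phi), t in [0, 1],
    by integrating sqrt(g(v, v)) along it (midpoint rule)."""
    total, dt = 0.0, 1.0 / n
    for i in range(n):
        t = (i + 0.5) * dt
        p, v = curve(t), velocity(t)
        g = metric(*p)
        speed2 = sum(g[a][b] * v[a] * v[b] for a in range(2) for b in range(2))
        total += math.sqrt(speed2) * dt
    return total

# The equator: theta = pi/2, phi = 2*pi*t; its length should be 2*pi.
length = curve_length(lambda t: (math.pi / 2, 2 * math.pi * t),
                      lambda t: (0.0, 2 * math.pi),
                      sphere_metric)
```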

View the full Wikipedia page for Riemannian metric

Inner product in the context of Scalar multiplication

In mathematics, scalar multiplication is one of the basic operations defining a vector space in linear algebra (or more generally, a module in abstract algebra). In common geometrical contexts, scalar multiplication of a real Euclidean vector by a positive real number multiplies the magnitude of the vector without changing its direction. Scalar multiplication is the multiplication of a vector by a scalar (where the product is a vector), and is to be distinguished from the inner product of two vectors (where the product is a scalar).

View the full Wikipedia page for Scalar multiplication

Inner product in the context of Scalar (mathematics)

A scalar is an element of a field which is used to define a vector space. In linear algebra, real numbers, or more generally elements of a field, are called scalars and relate to vectors in an associated vector space through the operation of scalar multiplication (defined in the vector space), in which a vector can be multiplied by a scalar in the defined way to produce another vector. More generally, a vector space may be defined by using any field instead of the real numbers (such as the complex numbers); the scalars of that vector space are then elements of the associated field.

A scalar product operation – not to be confused with scalar multiplication – may be defined on a vector space, allowing two vectors to be multiplied in the defined way to produce a scalar. A vector space equipped with a scalar product is called an inner product space.

View the full Wikipedia page for Scalar (mathematics)

Inner product in the context of Magnitude (vector)

In mathematics, a norm is a function from a real or complex vector space to the non-negative real numbers that behaves in certain ways like the distance from the origin: it commutes with scaling, obeys a form of the triangle inequality, and is zero only at the origin. In particular, the Euclidean distance in a Euclidean space is defined by a norm on the associated Euclidean vector space, called the Euclidean norm, the 2-norm, or, sometimes, the magnitude or length of the vector. This norm can be defined as the square root of the inner product of a vector with itself.
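
These defining properties can be verified numerically for the Euclidean norm (a minimal Python sketch):

```python
import math

def norm(v):
    """Euclidean norm: square root of the inner product of v with itself."""
    return math.sqrt(sum(x * x for x in v))

u, v, c = [3.0, 4.0], [1.0, 2.0], -2.0

# Commutes with scaling (absolute homogeneity): ||c v|| = |c| ||v||
assert math.isclose(norm([c * x for x in u]), abs(c) * norm(u))
# Triangle inequality: ||u + v|| <= ||u|| + ||v||
assert norm([a + b for a, b in zip(u, v)]) <= norm(u) + norm(v)
# Zero only at the origin
assert norm([0.0, 0.0]) == 0.0 and norm(u) > 0.0
```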

A seminorm satisfies the first two properties of a norm but may be zero for vectors other than the origin. A vector space with a specified norm is called a normed vector space. In a similar manner, a vector space with a seminorm is called a seminormed vector space.

View the full Wikipedia page for Magnitude (vector)

Inner product in the context of Riemannian geometry

Riemannian geometry is the branch of differential geometry that studies Riemannian manifolds. An example of a Riemannian manifold is a surface, on which distances are measured by the length of curves on the surface. Riemannian geometry is the study of surfaces and their higher-dimensional analogs (called manifolds), in which distances are calculated along curves belonging to the manifold. Formally, Riemannian geometry is the study of smooth manifolds with a Riemannian metric (an inner product on the tangent space at each point that varies smoothly from point to point). This gives, in particular, local notions of angle, length of curves, surface area and volume. From those, some other global quantities can be derived by integrating local contributions.

Riemannian geometry originated with the vision of Bernhard Riemann expressed in his inaugural lecture "Über die Hypothesen, welche der Geometrie zu Grunde liegen" ("On the Hypotheses on which Geometry is Based"). It is a very broad and abstract generalization of the differential geometry of surfaces in R³. Development of Riemannian geometry resulted in synthesis of diverse results concerning the geometry of surfaces and the behavior of geodesics on them, with techniques that can be applied to the study of differentiable manifolds of higher dimensions. It enabled the formulation of Einstein's general theory of relativity, made profound impact on group theory and representation theory, as well as analysis, and spurred the development of algebraic and differential topology.

View the full Wikipedia page for Riemannian geometry

Inner product in the context of First fundamental form

In differential geometry, the first fundamental form is the inner product on the tangent space of a surface in three-dimensional Euclidean space which is induced canonically from the dot product of R³. It permits the calculation of curvature and metric properties of a surface such as length and area in a manner consistent with the ambient space. The first fundamental form is denoted by the Roman numeral I.
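
For a concrete surface, the classical coefficients E, F, G of the first fundamental form are the dot products of the partial-derivative vectors r_u and r_v. The sketch below approximates them with central finite differences; the paraboloid and the step size h are illustrative assumptions:

```python
def paraboloid(u, v):
    # Example surface r(u, v) = (u, v, u^2 + v^2) in R^3
    return (u, v, u * u + v * v)

def first_fundamental_form(r, u, v, h=1e-6):
    """Return (E, F, G) at (u, v): the dot products of the numerical
    partial derivatives r_u and r_v of the parametrization r."""
    ru = [(a - b) / (2 * h) for a, b in zip(r(u + h, v), r(u - h, v))]
    rv = [(a - b) / (2 * h) for a, b in zip(r(u, v + h), r(u, v - h))]
    dot = lambda x, y: sum(a * b for a, b in zip(x, y))
    return dot(ru, ru), dot(ru, rv), dot(rv, rv)

E, F, G = first_fundamental_form(paraboloid, 1.0, 0.0)
# Analytically: r_u = (1, 0, 2u), r_v = (0, 1, 2v), so at (1, 0):
# E = 1 + 4u^2 = 5, F = 4uv = 0, G = 1 + 4v^2 = 1
```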

View the full Wikipedia page for First fundamental form

Inner product in the context of Multilinear algebra

Multilinear algebra is the study of functions with multiple vector-valued arguments, with the functions being linear maps with respect to each argument. It involves concepts such as matrices, tensors, multivectors, systems of linear equations, higher-dimensional spaces, determinants, inner and outer products, and dual spaces. It is a mathematical tool used in engineering, machine learning, physics, and mathematics.

View the full Wikipedia page for Multilinear algebra

Inner product in the context of Metric tensor

In the mathematical field of differential geometry, a metric tensor (or simply metric) is an additional structure on a manifold M (such as a surface) that allows defining distances and angles, just as the inner product on a Euclidean space allows defining distances and angles there. More precisely, a metric tensor at a point p of M is a bilinear form defined on the tangent space at p (that is, a bilinear function that maps pairs of tangent vectors to real numbers), and a metric field on M consists of a metric tensor at each point p of M that varies smoothly with p.

A metric tensor g is positive-definite if g(v, v) > 0 for every nonzero vector v. A manifold equipped with a positive-definite metric tensor is known as a Riemannian manifold. Such a metric tensor can be thought of as specifying infinitesimal distance on the manifold. On a Riemannian manifold M, the length of a smooth curve between two points p and q can be defined by integration, and the distance between p and q can be defined as the infimum of the lengths of all such curves; this makes M a metric space. Conversely, the metric tensor itself is the derivative of the distance function (taken in a suitable manner).
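
The bilinearity and positive-definiteness conditions can be checked for a small example. The matrix [[2, 1], [1, 2]] below is an arbitrary illustrative choice of a positive-definite metric tensor on one tangent space:

```python
def g(v, w):
    """A bilinear form on R^2 with matrix [[2, 1], [1, 2]]:
    maps a pair of tangent vectors to a real number."""
    m = [[2.0, 1.0], [1.0, 2.0]]
    return sum(m[i][j] * v[i] * w[j] for i in range(2) for j in range(2))

u, v, w, a = [1.0, 0.0], [0.0, 1.0], [3.0, -2.0], 2.5

# Bilinearity in the first argument: g(a*u + v, w) = a*g(u, w) + g(v, w)
lhs = g([a * u[0] + v[0], a * u[1] + v[1]], w)
assert abs(lhs - (a * g(u, w) + g(v, w))) < 1e-12

# Positive-definiteness: g(x, x) > 0 for every nonzero x (spot check)
assert g([1.0, -1.0], [1.0, -1.0]) > 0.0   # = 2 - 1 - 1 + 2 = 2
```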

View the full Wikipedia page for Metric tensor

Inner product in the context of Unitary operator

In functional analysis, a unitary operator is a surjective bounded operator on a Hilbert space that preserves the inner product. Non-trivial examples include rotations, reflections, and the Fourier operator. Unitary operators generalize unitary matrices. Unitary operators are usually taken as operating on a Hilbert space, but the same notion serves to define the concept of isomorphism between Hilbert spaces.
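
The inner-product-preserving property can be verified for a plane rotation, the simplest non-trivial unitary (orthogonal) operator (a minimal sketch):

```python
import math

def rotate(v, angle):
    """Rotation of R^2 by the given angle: a unitary (orthogonal) operator."""
    c, s = math.cos(angle), math.sin(angle)
    return [c * v[0] - s * v[1], s * v[0] + c * v[1]]

dot = lambda u, v: sum(a * b for a, b in zip(u, v))

u, v, t = [1.0, 2.0], [3.0, -1.0], 0.7
# A unitary operator preserves the inner product: <Ru, Rv> = <u, v>
assert abs(dot(rotate(u, t), rotate(v, t)) - dot(u, v)) < 1e-12
```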

View the full Wikipedia page for Unitary operator

Inner product in the context of Self-adjoint operator

In mathematics, a self-adjoint operator on a complex vector space V with inner product ⟨·,·⟩ is a linear map A (from V to itself) that is its own adjoint. That is, ⟨Av, w⟩ = ⟨v, Aw⟩ for all v, w in V. If V is finite-dimensional with a given orthonormal basis, this is equivalent to the condition that the matrix of A is a Hermitian matrix, i.e., equal to its conjugate transpose A*. By the finite-dimensional spectral theorem, V has an orthonormal basis such that the matrix of A relative to this basis is a diagonal matrix with entries in the real numbers. This article deals with applying generalizations of this concept to operators on Hilbert spaces of arbitrary dimension.
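
In the finite-dimensional case, self-adjointness can be checked directly against the conjugate transpose (a small illustrative sketch; the matrix A is an arbitrary Hermitian matrix):

```python
def conjugate_transpose(m):
    """Conjugate transpose of a square complex matrix (list of rows)."""
    n = len(m)
    return [[m[j][i].conjugate() for j in range(n)] for i in range(n)]

# A Hermitian matrix: real diagonal, off-diagonal entries in conjugate pairs
A = [[2 + 0j, 1 - 1j],
     [1 + 1j, 3 + 0j]]
assert A == conjugate_transpose(A)

# Self-adjointness: <A u, v> = <u, A v> for the standard inner product on C^2
inner = lambda u, v: sum(a * b.conjugate() for a, b in zip(u, v))
matvec = lambda m, x: [sum(m[i][j] * x[j] for j in range(len(x)))
                       for i in range(len(m))]
u, v = [1 + 2j, -1j], [3 + 0j, 2 - 1j]
assert abs(inner(matvec(A, u), v) - inner(u, matvec(A, v))) < 1e-12
```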

Self-adjoint operators are used in functional analysis and quantum mechanics. In quantum mechanics their importance lies in the Dirac–von Neumann formulation of quantum mechanics, in which physical observables such as position, momentum, angular momentum and spin are represented by self-adjoint operators on a Hilbert space. Of particular significance is the Hamiltonian operator Ĥ, defined by Ĥψ = −(ℏ²/2m)∇²ψ + Vψ, which as an observable corresponds to the total energy of a particle of mass m in a real potential field V.

View the full Wikipedia page for Self-adjoint operator

Inner product in the context of Kernel trick

In machine learning, kernel machines are a class of algorithms for pattern analysis, whose best known member is the support-vector machine (SVM). These methods use linear classifiers to solve nonlinear problems. The general task of pattern analysis is to find and study general types of relations (for example clusters, rankings, principal components, correlations, classifications) in datasets. For many algorithms that solve these tasks, the data in raw representation have to be explicitly transformed into feature vector representations via a user-specified feature map; in contrast, kernel methods require only a user-specified kernel, i.e., a similarity function over all pairs of data points computed using inner products. The feature map in kernel machines may be infinite-dimensional, but by the representer theorem only a finite-dimensional matrix of kernel evaluations on the training data is needed. Without parallel processing, kernel machines are slow to compute on datasets larger than a couple of thousand examples.

Kernel methods owe their name to the use of kernel functions, which enable them to operate in a high-dimensional, implicit feature space without ever computing the coordinates of the data in that space, but rather by simply computing the inner products between the images of all pairs of data in the feature space. This operation is often computationally cheaper than the explicit computation of the coordinates. This approach is called the "kernel trick". Kernel functions have been introduced for sequence data, graphs, text, images, as well as vectors.
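
The kernel trick can be demonstrated with the degree-2 polynomial kernel on R², whose implicit feature space is three-dimensional; the feature map below is the standard textbook one for this kernel:

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def poly_kernel(x, y):
    """Degree-2 polynomial kernel on R^2: k(x, y) = (x . y)^2."""
    return dot(x, y) ** 2

def feature_map(x):
    """Explicit map phi: R^2 -> R^3 with k(x, y) = <phi(x), phi(y)>."""
    x1, x2 = x
    return [x1 * x1, math.sqrt(2) * x1 * x2, x2 * x2]

x, y = [1.0, 2.0], [3.0, -1.0]
# The kernel computes the feature-space inner product without ever
# constructing phi(x) or phi(y):
assert abs(poly_kernel(x, y) - dot(feature_map(x), feature_map(y))) < 1e-9
```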

View the full Wikipedia page for Kernel trick