Linear transformation in the context of Vector addition


⭐ Core Definition: Linear transformation

In mathematics, and more specifically in linear algebra, a linear map (or linear mapping) is a particular kind of function between vector spaces which respects the basic operations of vector addition and scalar multiplication. A standard example of a linear map is an m × n matrix A, which takes vectors x in n dimensions to vectors Ax in m dimensions in a way that is compatible with addition of vectors and multiplication of vectors by scalars.

A linear map is a homomorphism of vector spaces: a map f: V → W satisfying f(au + bv) = a f(u) + b f(v), where a and b are scalars and u and v are vectors (elements of the vector space V). A linear mapping always maps the origin of V to the origin of W, and linear subspaces of V onto linear subspaces of W (possibly of a lower dimension); for example, it maps a plane through the origin in V to either a plane through the origin in W, a line through the origin in W, or just the origin in W. Linear maps can often be represented as matrices, and simple examples include rotation and reflection linear transformations.
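The defining identity f(au + bv) = a f(u) + b f(v) can be checked numerically. Below is a minimal NumPy sketch (the matrix A and the vectors and scalars are arbitrary choices for illustration):

```python
import numpy as np

# An arbitrary 2x2 matrix defines a linear map f(x) = A x.
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])

def f(x):
    return A @ x

u = np.array([1.0, -2.0])
v = np.array([4.0, 0.5])
a, b = 3.0, -1.5

# Linearity: f(a u + b v) equals a f(u) + b f(v).
lhs = f(a * u + b * v)
rhs = a * f(u) + b * f(v)
print(np.allclose(lhs, rhs))  # True

# A linear map always sends the origin to the origin.
print(np.allclose(f(np.zeros(2)), np.zeros(2)))  # True
```

Any matrix would do here: the identity holds because matrix multiplication distributes over vector addition and commutes with scalar multiplication.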

In this Dossier

Linear transformation in the context of Lorentz transformation

In physics, the Lorentz transformations are a six-parameter family of linear transformations from a coordinate frame in spacetime to another frame that moves at a constant velocity relative to the former. The respective inverse transformation is then parameterized by the negative of this velocity. The transformations are named after the Dutch physicist Hendrik Lorentz.

The most common form of the transformation, parametrized by the real constant v representing a velocity confined to the x-direction, is expressed as

t′ = γ(t − vx/c²)
x′ = γ(x − vt)
y′ = y
z′ = z

where (t, x, y, z) and (t′, x′, y′, z′) are the coordinates of an event in two frames with the spatial origins coinciding at t = t′ = 0, where the primed frame is seen from the unprimed frame as moving with speed v along the x-axis, where c is the speed of light, and γ = 1/√(1 − v²/c²) is the Lorentz factor. When speed v is much smaller than c, the Lorentz factor γ is negligibly different from 1, but as v approaches c, γ grows without bound. The value of v must be smaller than c for the transformation to make sense.
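The boost can be written as a short function. This is an illustrative sketch (the function name and sample values are ours, not from the text):

```python
import math

C = 299_792_458.0  # speed of light in m/s

def lorentz_boost(t, x, y, z, v, c=C):
    """Transform event coordinates to a frame moving at speed v along x."""
    if abs(v) >= c:
        raise ValueError("|v| must be smaller than c")
    gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)  # Lorentz factor
    t_p = gamma * (t - v * x / c**2)
    x_p = gamma * (x - v * t)
    return t_p, x_p, y, z  # y and z are unchanged by an x-direction boost

# At v = 0.6c the Lorentz factor is exactly 1.25.
t_p, x_p, y_p, z_p = lorentz_boost(1.0, 0.0, 2.0, 3.0, 0.6 * C)
```

For everyday speeds (v much smaller than c), gamma is indistinguishable from 1 and the formulas reduce to the Galilean transformation t′ = t, x′ = x − vt.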

View the full Wikipedia page for Lorentz transformation

Linear transformation in the context of Scaling (geometry)

In affine geometry, uniform scaling (or isotropic scaling) is a linear transformation that enlarges (increases) or shrinks (diminishes) objects by a scale factor that is the same in all directions (isotropically). The result of uniform scaling is similar (in the geometric sense) to the original. A scale factor of 1 is normally allowed, so that congruent shapes are also classed as similar. Uniform scaling happens, for example, when enlarging or reducing a photograph, or when creating a scale model of a building, car, airplane, etc.

More general is scaling with a separate scale factor for each axis direction. Non-uniform scaling (anisotropic scaling) is obtained when at least one of the scaling factors is different from the others; a special case is directional scaling or stretching (in one direction). Non-uniform scaling changes the shape of the object; e.g. a square may change into a rectangle, or into a parallelogram if the sides of the square are not parallel to the scaling axes (the angles between lines parallel to the axes are preserved, but not all angles). It occurs, for example, when a faraway billboard is viewed from an oblique angle, or when the shadow of a flat object falls on a surface that is not parallel to it.
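Both kinds of scaling are diagonal matrices acting on coordinates. A small NumPy sketch (the scale factors and the square are arbitrary examples):

```python
import numpy as np

# Uniform (isotropic) scaling: the same factor k on every axis.
k = 2.0
uniform = np.diag([k, k])

# Non-uniform (anisotropic) scaling: a different factor per axis,
# which turns a unit square into a 3-by-1 rectangle.
nonuniform = np.diag([3.0, 1.0])

square = np.array([[0.0, 1.0, 1.0, 0.0],   # x-coordinates of the corners
                   [0.0, 0.0, 1.0, 1.0]])  # y-coordinates of the corners

scaled_similar = uniform @ square    # similar to the original (a 2x copy)
scaled_rect = nonuniform @ square    # shape changes: square -> rectangle
```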

View the full Wikipedia page for Scaling (geometry)

Linear transformation in the context of Transformation matrix

In linear algebra, linear transformations can be represented by matrices. If T is a linear transformation mapping R^n to R^m and x is a column vector with n entries, then there exists an m × n matrix A, called the transformation matrix of T, such that T(x) = Ax. Note that A has m rows and n columns, whereas the transformation T is from R^n to R^m. There are alternative expressions of transformation matrices, involving row vectors, that are preferred by some authors.
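The dimension bookkeeping is easy to verify in code. A minimal NumPy sketch (the matrix entries are arbitrary):

```python
import numpy as np

# A maps R^3 (n = 3) to R^2 (m = 2), so A is m x n: 2 rows, 3 columns.
A = np.array([[1.0, 0.0, 2.0],
              [0.0, 1.0, -1.0]])

x = np.array([3.0, 4.0, 5.0])  # a column vector with n = 3 entries

y = A @ x  # T(x) = A x lands in R^m = R^2
print(A.shape, y)  # (2, 3) [13. -1.]
```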

View the full Wikipedia page for Transformation matrix

Linear transformation in the context of Transformation (function)

In mathematics, a transformation, transform, or self-map is a function f, usually with some geometrical underpinning, that maps a set X to itself, i.e. f: X → X. Examples include linear transformations of vector spaces and geometric transformations, which include projective transformations, affine transformations, and specific affine transformations, such as rotations, reflections and translations.

View the full Wikipedia page for Transformation (function)

Linear transformation in the context of Group representation

In the mathematical field of representation theory, group representations describe abstract groups in terms of bijective linear transformations of a vector space to itself (i.e. vector space automorphisms); in particular, they can be used to represent group elements as invertible matrices so that the group operation can be represented by matrix multiplication.
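Concretely, a representation turns the group operation into matrix multiplication. A minimal NumPy sketch (the choice of the cyclic group Z4 and a 90-degree rotation is ours, purely for illustration):

```python
import numpy as np

# Represent the cyclic group Z4: the generator g goes to a 90-degree
# rotation matrix R, and the element g^k goes to R^k.
R = np.array([[0.0, -1.0],
              [1.0,  0.0]])  # rotation by 90 degrees

def rho(k):
    """Invertible matrix representing the group element g^k."""
    return np.linalg.matrix_power(R, k % 4)

# Homomorphism property: rho(a) @ rho(b) == rho(a + b), addition mod 4.
print(np.allclose(rho(1) @ rho(3), rho(0)))  # True: four quarter turns = identity
```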

In chemistry, a group representation can relate mathematical group elements to symmetric rotations and reflections of molecules.

View the full Wikipedia page for Group representation

Linear transformation in the context of Coordinate vector

In linear algebra, a coordinate vector is a representation of a vector as an ordered list of numbers (a tuple) that describes the vector in terms of a particular ordered basis. An easy example may be a position such as (5, 2, 1) in a 3-dimensional Cartesian coordinate system with the basis as the axes of this system. Coordinates are always specified relative to an ordered basis. Bases and their associated coordinate representations let one realize vector spaces and linear transformations concretely as column vectors, row vectors, and matrices; hence, they are useful in calculations.
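Finding a coordinate vector amounts to solving a linear system against the basis. A NumPy sketch (this particular basis and vector are arbitrary examples):

```python
import numpy as np

# Coordinates of v relative to the ordered basis B = (b1, b2): solve
# c1*b1 + c2*b2 = v for the coordinate vector (c1, c2).
b1 = np.array([1.0, 1.0])
b2 = np.array([1.0, -1.0])
B = np.column_stack([b1, b2])  # basis vectors as the columns of a matrix

v = np.array([5.0, 1.0])
coords = np.linalg.solve(B, v)  # coordinate vector of v in basis B
print(coords)  # [3. 2.]

# Rebuilding v from its coordinates recovers the original vector.
assert np.allclose(B @ coords, v)
```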

The idea of a coordinate vector can also be used for infinite-dimensional vector spaces, as addressed below.

View the full Wikipedia page for Coordinate vector

Linear transformation in the context of Functional analysis

Functional analysis is a branch of mathematical analysis, the core of which is formed by the study of vector spaces endowed with some kind of limit-related structure (for example, inner product, norm, or topology) and the linear functions defined on these spaces and suitably respecting these structures. The historical roots of functional analysis lie in the study of spaces of functions and the formulation of properties of transformations of functions such as the Fourier transform as transformations defining, for example, continuous or unitary operators between function spaces. This point of view turned out to be particularly useful for the study of differential and integral equations.

The usage of the word functional as a noun goes back to the calculus of variations, implying a function whose argument is a function. The term was first used in Hadamard's 1910 book on that subject. However, the general concept of a functional had previously been introduced in 1887 by the Italian mathematician and physicist Vito Volterra. The theory of nonlinear functionals was continued by students of Hadamard, in particular Fréchet and Lévy. Hadamard also founded the modern school of linear functional analysis further developed by Riesz and the group of Polish mathematicians around Stefan Banach.

View the full Wikipedia page for Functional analysis

Linear transformation in the context of Projection (linear algebra)

In linear algebra and functional analysis, a projection is a linear transformation P from a vector space to itself (an endomorphism) such that P ∘ P = P. That is, whenever P is applied twice to any vector, it gives the same result as if it were applied once (i.e. P is idempotent). It leaves its image unchanged. This definition of "projection" formalizes and generalizes the idea of graphical projection. One can also consider the effect of a projection on a geometrical object by examining the effect of the projection on points in the object.
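Idempotence is easy to see with the orthogonal projection onto the x-axis (a minimal NumPy sketch; the vector is an arbitrary example):

```python
import numpy as np

# Orthogonal projection of R^2 onto the x-axis.
P = np.array([[1.0, 0.0],
              [0.0, 0.0]])

v = np.array([3.0, 4.0])

once = P @ v         # [3. 0.]
twice = P @ (P @ v)  # applying P again changes nothing

# P composed with itself equals P: the defining property of a projection.
print(np.allclose(P @ P, P), np.allclose(once, twice))  # True True
```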

View the full Wikipedia page for Projection (linear algebra)

Linear transformation in the context of Square matrix

In mathematics, a square matrix is a matrix with the same number of rows and columns. An n-by-n matrix is known as a square matrix of order n. Any two square matrices of the same order can be added and multiplied.

Square matrices are often used to represent simple linear transformations, such as shearing or rotation. For example, if R is a square matrix representing a rotation (rotation matrix) and v is a column vector describing the position of a point in space, the product Rv yields another column vector describing the position of that point after that rotation. If v is a row vector, the same transformation can be obtained using v R^T, where R^T is the transpose of R.
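The two conventions can be compared directly. A NumPy sketch (a 90-degree rotation chosen for illustration):

```python
import numpy as np

theta = np.pi / 2  # rotate by 90 degrees
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

v_col = np.array([1.0, 0.0])
rotated_col = R @ v_col       # column-vector convention: R v

v_row = np.array([1.0, 0.0])
rotated_row = v_row @ R.T     # row-vector convention: v R^T

# Both conventions describe the same rotated point, here (0, 1).
print(np.allclose(rotated_col, rotated_row))  # True
```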

View the full Wikipedia page for Square matrix

Linear transformation in the context of Improper rotation

In geometry, an improper rotation (also called rotation-reflection, rotoreflection, rotary reflection, or rotoinversion) is an isometry in Euclidean space that is a combination of a rotation about an axis and a reflection in a plane perpendicular to that axis. Reflection and inversion are each a special case of improper rotation. Any improper rotation is an affine transformation and, in cases that keep the coordinate origin fixed, a linear transformation. It is used as a symmetry operation in the context of geometric symmetry, molecular symmetry and crystallography, where an object that is unchanged by a combination of rotation and reflection is said to have improper rotation symmetry.

It is important to note the distinction between rotary reflection and rotary inversion symmetry operations and their associated symmetry elements. Rotary reflections are generally used to describe the symmetry of individual molecules and are defined as a 360°/n rotation about an n-fold rotation axis followed by a reflection over a mirror plane perpendicular to the n-fold rotation axis. Rotoinversions are generally used to describe the symmetry of crystals and are defined as a 360°/n rotation about an n-fold rotation axis followed by an inversion through the origin. Although rotary reflection operations have a rotoinversion analogue and vice versa, rotoreflections and rotoinversions of the same order need not be identical. For example, the symmetry operations associated with a 6-fold rotoinversion axis are distinct from those resulting from a 6-fold rotoreflection axis.
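The rotation-then-reflection composition can be built explicitly as matrices. A NumPy sketch (the 4-fold case is an arbitrary choice):

```python
import numpy as np

# Improper rotation S_n: rotate by 360/n degrees about the z-axis, then
# reflect through the xy-plane perpendicular to that axis.
n = 4
theta = 2 * np.pi / n
rotation = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                     [np.sin(theta),  np.cos(theta), 0.0],
                     [0.0,            0.0,           1.0]])
reflection = np.diag([1.0, 1.0, -1.0])  # mirror in the plane z = 0

S4 = reflection @ rotation  # a linear map fixing the origin

# Proper rotations have determinant +1; improper rotations have -1.
print(round(np.linalg.det(rotation)), round(np.linalg.det(S4)))  # 1 -1
```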

View the full Wikipedia page for Improper rotation

Linear transformation in the context of Representation theory

Representation theory is a branch of mathematics that studies abstract algebraic structures by representing their elements as linear transformations of vector spaces, and studies modules over these abstract algebraic structures. In essence, a representation makes an abstract algebraic object more concrete by describing its elements by matrices and their algebraic operations (for example, matrix addition, matrix multiplication).

The algebraic objects amenable to such a description include groups, associative algebras and Lie algebras. The most prominent of these (and historically the first) is the representation theory of groups, in which elements of a group are represented by invertible matrices such that the group operation is matrix multiplication.

View the full Wikipedia page for Representation theory

Linear transformation in the context of Family of curves

In geometry, a family of curves is a set of curves, each of which is given by a function or parametrization in which one or more of the parameters is variable. In general, the parameter(s) influence the shape of the curve in a way that is more complicated than a simple linear transformation. Sets of curves given by an implicit relation may also represent families of curves.

Families of curves appear frequently in solutions of differential equations; when an additive constant of integration is introduced, it will usually be manipulated algebraically until it no longer represents a simple linear transformation.

View the full Wikipedia page for Family of curves

Linear transformation in the context of Fourier-related transform

This is a list of linear transformations of functions related to Fourier analysis. Such transformations map a function to a set of coefficients of basis functions, where the basis functions are sinusoidal and are therefore strongly localized in the frequency spectrum. (These transforms are generally designed to be invertible.) In the case of the Fourier transform, each basis function corresponds to a single frequency component.
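The linearity is explicit when the discrete Fourier transform is written as a matrix. A NumPy sketch (a length-4 signal chosen for illustration):

```python
import numpy as np

# The discrete Fourier transform is multiplication by a fixed N x N
# matrix whose rows sample complex sinusoids (the basis functions).
N = 4
idx = np.arange(N)
F = np.exp(-2j * np.pi * np.outer(idx, idx) / N)  # DFT matrix

x = np.array([1.0, 2.0, 3.0, 4.0])
X = F @ x  # same coefficients as np.fft.fft(x)

# The transform is invertible: conj(F) / N is the inverse DFT matrix.
x_back = np.conj(F) @ X / N
print(np.allclose(X, np.fft.fft(x)), np.allclose(x_back, x))  # True True
```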

View the full Wikipedia page for Fourier-related transform