Matrix (mathematics) in the context of S-matrix




⭐ Core Definition: Matrix (mathematics)

In mathematics, a matrix (pl.: matrices) is a rectangular array of numbers or other mathematical objects with elements or entries arranged in rows and columns, usually satisfying certain properties of addition and multiplication.

For example, the array

    [  1   9  -13 ]
    [ 20   5   -6 ]

denotes a matrix with two rows and three columns. This is often referred to as a "two-by-three matrix", a 2 × 3 matrix, or a matrix of dimension 2 × 3.
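
As a concrete illustration (a minimal sketch; NumPy is an assumption on my part, not something the text above mentions), the 2 × 3 matrix above can be represented and inspected in code:

    import numpy as np

    # A 2 x 3 matrix: two rows, three columns.
    A = np.array([[1, 9, -13],
                  [20, 5, -6]])

    print(A.shape)  # (2, 3) -- (rows, columns)
    print(A[0, 2])  # -13 -- the entry in row 1, column 3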


In this Dossier

Matrix (mathematics) in the context of List of mathematical symbols

A mathematical symbol is a figure or a combination of figures that is used to represent a mathematical object, an action on mathematical objects, a relation between mathematical objects, or for structuring the other symbols that occur in a formula or a mathematical expression. More formally, a mathematical symbol is any grapheme used in mathematical formulas and expressions. As formulas and expressions are entirely constituted with symbols of various types, many symbols are needed for expressing all mathematics.

The most basic symbols are the decimal digits (0, 1, 2, 3, 4, 5, 6, 7, 8, 9) and the letters of the Latin alphabet. The decimal digits are used for representing numbers through the Hindu–Arabic numeral system. Historically, upper-case letters were used for representing points in geometry, and lower-case letters were used for variables and constants. Letters are used for representing many other types of mathematical object. As the number of these types has increased, the Greek alphabet and some Hebrew letters have also come to be used. For more symbols, other typefaces are also used, mainly boldface, script typeface (the lower-case script face is rarely used because of possible confusion with the standard face), German fraktur, and blackboard bold (letters outside the usual ones are rarely used in this face, or their use is unconventional). It is commonplace to use alphabets, fonts, and typefaces to group symbols by type (for example, boldface is often used for vectors and uppercase for matrices).

View the full Wikipedia page for List of mathematical symbols

Matrix (mathematics) in the context of Addition

Addition, usually denoted with the plus sign +, is one of the four basic operations of arithmetic, the other three being subtraction, multiplication, and division. The addition of two whole numbers results in the total or sum of those values combined. For example, combining a column of three apples with a column of two apples gives five apples in total. This observation is expressed as "3 + 2 = 5", which is read as "three plus two equals five".

Besides counting items, addition can also be defined and executed without referring to concrete objects, using abstractions called numbers instead, such as integers, real numbers, and complex numbers. Addition belongs to arithmetic, a branch of mathematics. In algebra, another area of mathematics, addition can also be performed on abstract objects such as vectors, matrices, and elements of additive groups.
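
A short sketch (NumPy is assumed here; the text above names no library) showing the same "+" acting on whole numbers and on matrices:

    import numpy as np

    print(3 + 2)  # 5 -- addition of whole numbers

    # Matrix addition is entry-by-entry and requires matching dimensions.
    A = np.array([[1, 2], [3, 4]])
    B = np.array([[10, 20], [30, 40]])
    print(A + B)  # [[11 22] [33 44]]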

View the full Wikipedia page for Addition

Matrix (mathematics) in the context of Product (mathematics)

In mathematics, a product is the result of multiplication, or an expression that identifies objects (numbers or variables) to be multiplied, called factors. For example, 21 is the product of 3 and 7 (the result of multiplication), and x · (2 + x) is the product of x and (2 + x) (indicating that the two factors should be multiplied together). When one factor is an integer, the product is called a multiple.

The order in which real or complex numbers are multiplied has no bearing on the product; this is known as the commutative law of multiplication. When matrices or members of various other associative algebras are multiplied, the product usually depends on the order of the factors. Matrix multiplication, for example, is non-commutative, and the same is true of multiplication in many other algebras.
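
A minimal check (NumPy assumed; the matrices are hypothetical) that matrix multiplication depends on the order of the factors:

    import numpy as np

    A = np.array([[1, 1],
                  [0, 1]])
    B = np.array([[1, 0],
                  [1, 1]])

    print(A @ B)  # [[2 1] [1 1]]
    print(B @ A)  # [[1 1] [1 2]] -- a different result: AB != BA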

View the full Wikipedia page for Product (mathematics)

Matrix (mathematics) in the context of Transformation matrix

In linear algebra, linear transformations can be represented by matrices. If T is a linear transformation mapping ℝⁿ to ℝᵐ and x is a column vector with n entries, then there exists an m × n matrix A, called the transformation matrix of T, such that T(x) = Ax. Note that A has m rows and n columns, whereas the transformation T is from ℝⁿ to ℝᵐ. There are alternative expressions of transformation matrices involving row vectors that are preferred by some authors.
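
As an illustration (a sketch with NumPy assumed; the rotation matrix is my choice of example), a 2 × 2 transformation matrix acting on a column vector via T(x) = Ax:

    import numpy as np

    theta = np.pi / 2  # rotate 90 degrees counterclockwise
    A = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])

    x = np.array([1.0, 0.0])
    print(A @ x)  # approximately [0. 1.] -- the rotated vector T(x) = Ax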

View the full Wikipedia page for Transformation matrix

Matrix (mathematics) in the context of Function of a real variable

In mathematical analysis, and applications in geometry, applied mathematics, engineering, and natural sciences, a function of a real variable is a function whose domain is the real numbers ℝ, or a subset of ℝ that contains an interval of positive length. Most real functions that are considered and studied are differentiable in some interval. The most widely considered such functions are the real functions, which are the real-valued functions of a real variable, that is, the functions of a real variable whose codomain is the set of real numbers.

Nevertheless, the codomain of a function of a real variable may be any set. However, it is often assumed to have a structure of ℝ-vector space over the reals. That is, the codomain may be a Euclidean space, a coordinate vector space, the set of matrices of real numbers of a given size, or an ℝ-algebra, such as the complex numbers or the quaternions. The ℝ-vector space structure of the codomain induces a structure of ℝ-vector space on the functions. If the codomain has a structure of ℝ-algebra, the same is true for the functions.
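
For instance (a sketch, with NumPy assumed and M a hypothetical example), a function of a real variable whose codomain is the set of 2 × 2 real matrices:

    import numpy as np

    def M(t: float) -> np.ndarray:
        # A matrix-valued function of the real variable t.
        return np.array([[1.0, t],
                         [0.0, 1.0]])

    # The vector-space structure of the codomain gives pointwise
    # operations on such functions:
    print(M(2.0) + M(3.0))  # [[2. 5.] [0. 2.]]
    print(2.0 * M(1.0))     # [[2. 2.] [0. 2.]]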

View the full Wikipedia page for Function of a real variable

Matrix (mathematics) in the context of Summation

In mathematics, summation is the addition of a sequence of numbers, called addends or summands; the result is their sum or total. Beside numbers, other types of values can be summed as well: functions, vectors, matrices, polynomials and, in general, elements of any type of mathematical objects on which an operation denoted "+" is defined.
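
A small sketch (NumPy assumed; the matrices are hypothetical) summing a sequence of matrices, one of the non-numeric summand types mentioned above:

    import numpy as np

    # Summands: a sequence of 2 x 2 matrices k * I for k = 1, ..., 4.
    terms = [np.array([[k, 0], [0, k]]) for k in range(1, 5)]

    total = sum(terms)  # repeated "+"; the integer start value 0 broadcasts over arrays
    print(total)        # [[10  0] [ 0 10]] since 1 + 2 + 3 + 4 = 10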

Summations of infinite sequences are called series. They involve the concept of limit, and are not considered in this article.

View the full Wikipedia page for Summation

Matrix (mathematics) in the context of Mathematical economics

Mathematical economics is the application of mathematical methods to represent theories and analyze problems in economics. Often, these applied methods are beyond simple geometry, and may include differential and integral calculus, difference and differential equations, matrix algebra, mathematical optimization, or other computational methods. Proponents of this approach claim that it allows the formulation of theoretical relationships with rigor, generality, and simplicity.

Mathematics allows economists to form meaningful, testable propositions about wide-ranging and complex subjects which could less easily be expressed informally. Further, the language of mathematics allows economists to make specific, positive claims about controversial or contentious subjects that would be impossible without mathematics. Much of economic theory is currently presented in terms of mathematical economic models, a set of stylized and simplified mathematical relationships asserted to clarify assumptions and implications.
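
As one concrete illustration of matrix algebra in an economic model (a hedged sketch: the Leontief input-output model, the coefficient values, and NumPy are all my choices for illustration, not taken from the text above):

    import numpy as np

    # Leontief input-output model: x = Ax + d, hence x = (I - A)^(-1) d,
    # where A holds inter-industry input coefficients and d is final demand.
    # (Hypothetical numbers, chosen only for illustration.)
    A = np.array([[0.2, 0.3],
                  [0.1, 0.4]])
    d = np.array([100.0, 50.0])

    x = np.linalg.solve(np.eye(2) - A, d)  # gross output required to meet demand d
    print(x)  # approximately [166.67 111.11]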

View the full Wikipedia page for Mathematical economics

Matrix (mathematics) in the context of Linear map

In mathematics, and more specifically in linear algebra, a linear map (or linear mapping) is a particular kind of function between vector spaces, which respects the basic operations of vector addition and scalar multiplication. A standard example of a linear map is an m × n matrix, which takes vectors in n dimensions into vectors in m dimensions in a way that is compatible with addition of vectors and multiplication of vectors by scalars.

A linear map is a homomorphism of vector spaces. Thus, a linear map f : V → W satisfies f(αu + βv) = αf(u) + βf(v), where α and β are scalars, and u and v are vectors (elements of the vector space V). A linear mapping always maps the origin of V to the origin of W, and linear subspaces of V onto linear subspaces of W (possibly of a lower dimension); for example, it maps a plane through the origin in V to either a plane through the origin in W, a line through the origin in W, or just the origin in W. Linear maps can often be represented as matrices, and simple examples include rotation and reflection linear transformations.
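
A quick numerical check (NumPy assumed; the matrix and vectors are hypothetical) that the map f(v) = Av defined by a matrix satisfies the linearity condition above:

    import numpy as np

    A = np.array([[2.0, 0.0,  1.0],
                  [0.0, 3.0, -1.0]])  # a 2 x 3 matrix: takes 3-vectors to 2-vectors

    def f(v: np.ndarray) -> np.ndarray:
        return A @ v

    u = np.array([1.0, 2.0, 3.0])
    v = np.array([-1.0, 0.0, 4.0])
    alpha, beta = 2.0, -0.5

    print(np.allclose(f(alpha * u + beta * v),
                      alpha * f(u) + beta * f(v)))  # True: f(au + bv) = a f(u) + b f(v)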

View the full Wikipedia page for Linear map

Matrix (mathematics) in the context of Linear space

In mathematics and physics, a vector space (also called a linear space) is a set whose elements, often called vectors, can be added together and multiplied ("scaled") by numbers called scalars. The operations of vector addition and scalar multiplication must satisfy certain requirements, called vector axioms. Real vector spaces and complex vector spaces are kinds of vector spaces based on different kinds of scalars: real numbers and complex numbers. Scalars can also be, more generally, elements of any field.

Vector spaces generalize Euclidean vectors, which allow modeling of physical quantities (such as forces and velocity) that have not only a magnitude, but also a direction. The concept of vector spaces is fundamental for linear algebra, together with the concept of matrices, which allows computing in vector spaces. This provides a concise and synthetic way for manipulating and studying systems of linear equations.
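
A brief sketch (NumPy assumed; the arrays below are hypothetical examples) of the two vector-space operations, followed by a matrix used to solve a system of linear equations:

    import numpy as np

    u = np.array([1.0, 2.0])
    v = np.array([3.0, -1.0])
    print(u + v)    # vector addition: [4. 1.]
    print(2.5 * u)  # scalar multiplication: [2.5 5. ]

    # Matrices make computation in vector spaces concrete, e.g. solving A x = b:
    A = np.array([[2.0, 1.0],
                  [1.0, 3.0]])
    b = np.array([3.0, 5.0])
    print(np.linalg.solve(A, b))  # [0.8 1.4]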

View the full Wikipedia page for Linear space

Matrix (mathematics) in the context of Stability theory

In mathematics, stability theory addresses the stability of solutions of differential equations and of trajectories of dynamical systems under small perturbations of initial conditions. The heat equation, for example, is a stable partial differential equation because small perturbations of initial data lead to small variations in temperature at a later time, as a result of the maximum principle. In partial differential equations one may measure the distances between functions using Lᵖ norms or the sup norm, while in differential geometry one may measure the distance between spaces using the Gromov–Hausdorff distance.

In dynamical systems, an orbit is called Lyapunov stable if the forward orbit of any point in a small enough neighborhood of it stays in a small (but perhaps larger) neighborhood. Various criteria have been developed to prove stability or instability of an orbit. Under favorable circumstances, the question may be reduced to a well-studied problem involving eigenvalues of matrices. A more general method involves Lyapunov functions. In practice, any one of a number of different stability criteria are applied.
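
A sketch of the eigenvalue criterion mentioned above (NumPy assumed; the matrix is a hypothetical example): for the linear system x′ = Ax, the origin is asymptotically stable when every eigenvalue of A has negative real part.

    import numpy as np

    A = np.array([[-1.0,  2.0],
                  [ 0.0, -3.0]])

    eigvals = np.linalg.eigvals(A)
    print(eigvals)                   # [-1. -3.] (triangular, so eigenvalues sit on the diagonal)
    print(np.all(eigvals.real < 0))  # True -> asymptotically stable origin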

View the full Wikipedia page for Stability theory

Matrix (mathematics) in the context of Coordinate vector

In linear algebra, a coordinate vector is a representation of a vector as an ordered list of numbers (a tuple) that describes the vector in terms of a particular ordered basis. An easy example may be a position such as (5, 2, 1) in a 3-dimensional Cartesian coordinate system with the basis as the axes of this system. Coordinates are always specified relative to an ordered basis. Bases and their associated coordinate representations let one realize vector spaces and linear transformations concretely as column vectors, row vectors, and matrices; hence, they are useful in calculations.
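
A sketch (NumPy assumed; the basis and vector are hypothetical) of computing the coordinate vector of v relative to an ordered basis, by solving B c = v with the basis vectors as the columns of B:

    import numpy as np

    # Ordered basis {(1, 0), (1, 1)} of the plane, stored as the columns of B.
    B = np.array([[1.0, 1.0],
                  [0.0, 1.0]])
    v = np.array([3.0, 2.0])

    c = np.linalg.solve(B, v)  # coordinates of v in this basis
    print(c)                   # [1. 2.] since v = 1*(1, 0) + 2*(1, 1)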

The idea of a coordinate vector can also be used for infinite-dimensional vector spaces.

View the full Wikipedia page for Coordinate vector

Matrix (mathematics) in the context of Distribution (logic)

In mathematics, the distributive property of binary operations is a generalization of the distributive law, which asserts that the equality x · (y + z) = (x · y) + (x · z) is always true in elementary algebra. For example, in elementary arithmetic, one has 2 · (1 + 3) = (2 · 1) + (2 · 3). Therefore, one would say that multiplication distributes over addition.

This basic property of numbers is part of the definition of most algebraic structures that have two operations called addition and multiplication, such as complex numbers, polynomials, matrices, rings, and fields. It is also encountered in Boolean algebra and mathematical logic, where each of the logical and (denoted ∧) and the logical or (denoted ∨) distributes over the other.
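
A quick check (NumPy assumed; the random integer matrices are hypothetical) that matrix multiplication distributes over matrix addition:

    import numpy as np

    rng = np.random.default_rng(0)
    A, B, C = (rng.integers(-5, 5, size=(2, 2)) for _ in range(3))

    print(np.array_equal(A @ (B + C), A @ B + A @ C))  # True: left distributivity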

View the full Wikipedia page for Distribution (logic)