Calculus of variations in the context of Trajectory optimization


⭐ Core Definition: Calculus of variations

The calculus of variations (or variational calculus) is a field of mathematical analysis that uses variations, which are small changes in functions and functionals, to find maxima and minima of functionals: mappings from a set of functions to the real numbers. Functionals are often expressed as definite integrals involving functions and their derivatives. Functions that maximize or minimize functionals may be found using the Euler–Lagrange equation of the calculus of variations.

A simple example of such a problem is to find the curve of shortest length connecting two points. If there are no constraints, the solution is a straight line between the points. However, if the curve is constrained to lie on a surface in space, then the solution is less obvious, and possibly many solutions may exist. Such solutions are known as geodesics. A related problem is posed by Fermat's principle: light follows the path of shortest optical length connecting two points, which depends upon the material of the medium. One corresponding concept in mechanics is the principle of least/stationary action.
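The shortest-curve example above can be worked out explicitly with the Euler–Lagrange equation; the following is the standard derivation, sketched here for concreteness:

```latex
% Arc length functional for a curve y(x) joining (x_1, y_1) and (x_2, y_2):
\[
  J[y] = \int_{x_1}^{x_2} \sqrt{1 + y'(x)^2}\,\mathrm{d}x .
\]
% The integrand L = \sqrt{1 + y'^2} does not depend on y, so the
% Euler–Lagrange equation
\[
  \frac{\partial L}{\partial y}
  - \frac{\mathrm{d}}{\mathrm{d}x}\frac{\partial L}{\partial y'} = 0
\]
% reduces to
\[
  \frac{\mathrm{d}}{\mathrm{d}x}\!\left(\frac{y'}{\sqrt{1 + y'^2}}\right) = 0
  \quad\Longrightarrow\quad y' = \text{const},
\]
% so the extremal is the straight line through the two endpoints.
```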


👉 Calculus of variations in the context of Trajectory optimization

Trajectory optimization is the process of designing a trajectory that minimizes (or maximizes) some measure of performance while satisfying a set of constraints. Generally speaking, trajectory optimization is a technique for computing an open-loop solution to an optimal control problem. It is often used for systems where computing the full closed-loop solution is not required, is impractical, or is impossible. If a trajectory optimization problem can be solved at a rate given by the inverse of the Lipschitz constant, then it can be used iteratively to generate a closed-loop solution in the sense of Carathéodory. If only the first step of the trajectory is executed for an infinite-horizon problem, then this is known as Model Predictive Control (MPC).

Although the idea of trajectory optimization has been around for hundreds of years (calculus of variations, the brachistochrone problem), it only became practical for real-world problems with the advent of the computer. Many of the original applications of trajectory optimization were in the aerospace industry, computing rocket and missile launch trajectories. More recently, trajectory optimization has also been used in a wide variety of industrial process and robotics applications.
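To make the open-loop idea concrete, here is a minimal direct-transcription sketch for an assumed toy problem (a discretized double integrator; the step count, step size, and boundary conditions are illustrative choices, not from the text). Minimizing the total control effort subject to linear endpoint constraints has a closed-form minimum-norm solution:

```python
import numpy as np

# Assumed toy problem: double integrator x_{k+1} = x_k + h*v_k,
# v_{k+1} = v_k + h*u_k, starting at rest at x = 0, required to reach
# x = 1 at rest after N steps while minimizing sum(u_k^2).
N, h = 50, 0.02

# Final position/velocity are linear in the controls u_0..u_{N-1}:
#   x_N = h^2 * sum_k (N - 1 - k) * u_k
#   v_N = h   * sum_k u_k
A = np.vstack([h**2 * (N - 1 - np.arange(N)),   # row giving x_N
               h * np.ones(N)])                 # row giving v_N
b = np.array([1.0, 0.0])                        # reach x = 1 with zero velocity

# Minimum-norm solution of A u = b:  u = A^T (A A^T)^{-1} b
u = A.T @ np.linalg.solve(A @ A.T, b)

# Roll the open-loop controls through the dynamics to check the endpoint.
x, v = 0.0, 0.0
for uk in u:
    x, v = x + h * v, v + h * uk
print(round(x, 6), round(v, 6))
```

Executing only the first control `u[0]` and then re-solving from the new state would be the MPC pattern mentioned above.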

In this Dossier

Calculus of variations in the context of David Hilbert

David Hilbert (/ˈhɪlbərt/; German: [ˈdaːvɪt ˈhɪlbɐt]; 23 January 1862 – 14 February 1943) was a German mathematician and philosopher of mathematics and one of the most influential mathematicians of his time.

Hilbert discovered and developed a broad range of fundamental ideas including invariant theory, the calculus of variations, commutative algebra, algebraic number theory, the foundations of geometry, spectral theory of operators and its application to integral equations, mathematical physics, and the foundations of mathematics (particularly proof theory). He adopted and defended Georg Cantor's set theory and transfinite numbers. In 1900, he presented a collection of problems that set a course for mathematical research of the 20th century.

View the full Wikipedia page for David Hilbert

Calculus of variations in the context of Analytical mechanics

In theoretical physics and mathematical physics, analytical mechanics, or theoretical mechanics, is a collection of closely related formulations of classical mechanics. Analytical mechanics uses scalar properties of motion representing the system as a whole—usually its kinetic energy and potential energy. The equations of motion are derived from the scalar quantity by some underlying principle about the scalar's variation.

Analytical mechanics was developed by many scientists and mathematicians during the 18th century and onward, after Newtonian mechanics. Newtonian mechanics considers vector quantities of motion—particularly accelerations, momenta, and forces—of the constituents of the system; it can also be called vectorial mechanics. A scalar is a quantity with magnitude alone, whereas a vector has both magnitude and direction. The results of these two different approaches are equivalent, but the analytical mechanics approach has many advantages for complex problems.

View the full Wikipedia page for Analytical mechanics

Calculus of variations in the context of Hans Hahn (mathematician)

Hans Hahn (/hɑːn/; German: [haːn]; 27 September 1879 – 24 July 1934) was an Austrian mathematician and philosopher who made contributions to functional analysis, topology, set theory, the calculus of variations, real analysis, and order theory. In philosophy he was among the main logical positivists of the Vienna Circle.

View the full Wikipedia page for Hans Hahn (mathematician)

Calculus of variations in the context of Johann Radon

Johann Karl August Radon ([ˈʁaːdɔn]; 16 December 1887 – 25 May 1956) was an Austrian mathematician. His doctoral dissertation was on the calculus of variations (in 1910, at the University of Vienna).

View the full Wikipedia page for Johann Radon

Calculus of variations in the context of Functional analysis

Functional analysis is a branch of mathematical analysis, the core of which is formed by the study of vector spaces endowed with some kind of limit-related structure (for example, inner product, norm, or topology) and the linear functions defined on these spaces and suitably respecting these structures. The historical roots of functional analysis lie in the study of spaces of functions and the formulation of properties of transformations of functions such as the Fourier transform as transformations defining, for example, continuous or unitary operators between function spaces. This point of view turned out to be particularly useful for the study of differential and integral equations.

The usage of the word functional as a noun goes back to the calculus of variations, implying a function whose argument is a function. The term was first used in Hadamard's 1910 book on that subject. However, the general concept of a functional had previously been introduced in 1887 by the Italian mathematician and physicist Vito Volterra. The theory of nonlinear functionals was continued by students of Hadamard, in particular Fréchet and Lévy. Hadamard also founded the modern school of linear functional analysis further developed by Riesz and the group of Polish mathematicians around Stefan Banach.

View the full Wikipedia page for Functional analysis

Calculus of variations in the context of Jacob Bernoulli

Jacob Bernoulli (also known as James in English or Jacques in French; 6 January 1655 [O.S. 27 December 1654] – 16 August 1705) was a Swiss mathematician. He sided with Gottfried Wilhelm Leibniz during the Leibniz–Newton calculus controversy and was an early proponent of Leibnizian calculus, to which he made numerous contributions. A member of the Bernoulli family, he, along with his brother Johann, was one of the founders of the calculus of variations. He also discovered the fundamental mathematical constant e. However, his most important contribution was in the field of probability, where he derived the first version of the law of large numbers in his work Ars Conjectandi.

View the full Wikipedia page for Jacob Bernoulli

Calculus of variations in the context of Optimal control

Optimal control theory is a branch of control theory that deals with finding a control for a dynamical system over a period of time such that an objective function is optimized. It has numerous applications in science, engineering and operations research. For example, the dynamical system might be a spacecraft with controls corresponding to rocket thrusters, and the objective might be to reach the Moon with minimum fuel expenditure. Or the dynamical system could be a nation's economy, with the objective to minimize unemployment; the controls in this case could be fiscal and monetary policy. A dynamical system may also be introduced to embed operations research problems within the framework of optimal control theory.

Optimal control is an extension of the calculus of variations, and is a mathematical optimization method for deriving control policies. The method is largely due to the work of Lev Pontryagin and Richard Bellman in the 1950s, after contributions to calculus of variations by Edward J. McShane. Optimal control can be seen as a control strategy in control theory.
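Bellman's dynamic-programming side of this story can be illustrated with a finite-horizon linear-quadratic regulator (LQR). The problem below is a hypothetical toy example (the dynamics, cost weights, and horizon are illustrative choices, not from the text); the value function is propagated backward by the discrete Riccati recursion:

```python
import numpy as np

# Hypothetical toy problem: finite-horizon discrete-time LQR for a
# double integrator, solved by backward dynamic programming.
h = 0.1
A = np.array([[1.0, h], [0.0, 1.0]])   # position/velocity dynamics
B = np.array([[0.0], [h]])             # control enters through acceleration
Q = np.eye(2)                          # state cost weight
R = np.array([[0.1]])                  # control cost weight
T = 200                                # horizon length

# Backward pass: cost-to-go is x^T P_k x; gains K_k give u_k = -K_k x_k.
P = Q.copy()
gains = []
for _ in range(T):
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    P = Q + A.T @ P @ (A - B @ K)      # discrete Riccati recursion
    gains.append(K)
gains.reverse()                        # gains in forward-time order

# Forward pass: the closed-loop policy drives the state toward the origin.
x = np.array([[1.0], [0.0]])
for K in gains:
    x = A @ x - B @ (K @ x)
print(round(abs(x[0, 0]), 4), round(abs(x[1, 0]), 4))
```

The backward recursion is exactly Bellman's principle of optimality applied one stage at a time; Pontryagin's maximum principle yields the same controls via necessary conditions on an open-loop trajectory.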

View the full Wikipedia page for Optimal control

Calculus of variations in the context of John Forbes Nash Jr.

John Forbes Nash Jr. (June 13, 1928 – May 23, 2015), known and published as John Nash, was an American mathematician who made fundamental contributions to game theory, real algebraic geometry, differential geometry, and partial differential equations. Nash and fellow game theorists John Harsanyi and Reinhard Selten were awarded the 1994 Nobel Prize in Economics. In 2015, he and Louis Nirenberg were awarded the Abel Prize for their contributions to the field of partial differential equations.

As a graduate student in the Princeton University Department of Mathematics, Nash introduced a number of concepts (including the Nash equilibrium and the Nash bargaining solution), which are now considered central to game theory and its applications in various sciences. In the 1950s, Nash discovered and proved the Nash embedding theorems by solving a system of nonlinear partial differential equations arising in Riemannian geometry. This work, also introducing a preliminary form of the Nash–Moser theorem, was later recognized by the American Mathematical Society with the Leroy P. Steele Prize for Seminal Contribution to Research. Ennio De Giorgi and Nash found, with separate methods, a body of results paving the way for a systematic understanding of elliptic and parabolic partial differential equations. Their De Giorgi–Nash theorem on the smoothness of solutions of such equations resolved Hilbert's nineteenth problem on regularity in the calculus of variations, which had been a well-known open problem for almost 60 years.

View the full Wikipedia page for John Forbes Nash Jr.

Calculus of variations in the context of Finite element analysis

Finite element method (FEM) is a popular method for numerically solving differential equations arising in engineering and mathematical modeling. Typical problem areas of interest include the traditional fields of structural analysis, heat transfer, fluid flow, mass transport, and electromagnetic potential. Computers are usually used to perform the calculations required. With high-speed supercomputers, better solutions can be achieved and are often required to solve the largest and most complex problems.

FEM is a general numerical method for solving partial differential equations in two or three space variables (i.e., some boundary value problems). There are also studies about using FEM to solve high-dimensional problems. To solve a problem, FEM subdivides a large system into smaller, simpler parts called finite elements. This is achieved by a particular space discretization in the space dimensions, which is implemented by the construction of a mesh of the object: the numerical domain for the solution that has a finite number of points. The FEM formulation of a boundary value problem results in a system of algebraic equations. The method approximates the unknown function over the domain. The simple equations that model these finite elements are then assembled into a larger system of equations that models the entire problem. FEM then approximates a solution by minimizing an associated error function via the calculus of variations.
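The variational step described above can be seen in a minimal 1D example (an illustrative problem chosen here, not from the text): solve -u''(x) = 1 on (0, 1) with u(0) = u(1) = 0. Minimizing the energy functional J(u) = ∫(u'²/2 − u)dx over the piecewise-linear finite element space leads to a small linear system:

```python
import numpy as np

# Illustrative 1D FEM problem: -u''(x) = 1 on (0, 1), u(0) = u(1) = 0,
# discretized with piecewise-linear ("hat function") elements on a
# uniform mesh. Minimizing the energy functional over the FE space
# gives the linear system K u = F.
n = 8                        # number of elements (even, so x = 0.5 is a node)
h = 1.0 / n
nodes = n - 1                # interior nodes; boundary values are fixed at 0

# Tridiagonal stiffness matrix K_ij = integral of phi_i' * phi_j'.
K = (np.diag(2.0 * np.ones(nodes))
     - np.diag(np.ones(nodes - 1), 1)
     - np.diag(np.ones(nodes - 1), -1)) / h
F = h * np.ones(nodes)       # load vector: integral of f * phi_i with f = 1

u = np.linalg.solve(K, F)

# The exact solution is u(x) = x(1 - x)/2, which equals 0.125 at x = 0.5.
mid = u[nodes // 2]          # interior node index 3 sits at x = 4h = 0.5
print(round(mid, 6))
```

For this particular 1D problem the linear-element solution happens to be exact at the mesh nodes, which makes the check at x = 0.5 sharp rather than merely approximate.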

View the full Wikipedia page for Finite element analysis

Calculus of variations in the context of Virtual work

In mechanics, virtual work arises in the application of the principle of least action to the study of forces and movement of a mechanical system. The work of a force acting on a particle as it moves along a displacement is different for different displacements. Among all the possible displacements that a particle may follow, called virtual displacements, one will minimize the action. This displacement is therefore the displacement followed by the particle according to the principle of least action.

Historically, virtual work and the associated calculus of variations were formulated to analyze systems of rigid bodies, but they have also been developed for the study of the mechanics of deformable bodies.
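The least-action statement above can be checked numerically. In this assumed toy setup (a free particle moving from x(0) = 0 to x(1) = 1, with trial paths of my own choosing, not from the text), the action S = ∫(1/2)x'(t)²dt is compared along the straight-line path and sinusoidal perturbations of it that keep the endpoints fixed:

```python
import numpy as np

# Assumed toy setup: free particle, x(0) = 0, x(1) = 1. Trial paths
# x_a(t) = t + a*sin(pi*t) all satisfy the endpoint conditions; the
# principle of least action should select a = 0 (the straight line).
t = np.linspace(0.0, 1.0, 2001)

def action(a):
    x = t + a * np.sin(np.pi * t)          # perturbed trial path
    xdot = np.diff(x) / np.diff(t)         # velocity on each subinterval
    return np.sum(0.5 * xdot**2 * np.diff(t))  # discretized action integral

amps = [-0.2, -0.1, 0.0, 0.1, 0.2]
values = [action(a) for a in amps]
best = amps[int(np.argmin(values))]
print(best)
```

Analytically, S(a) = (1 + a²π²/2)/2, so the action grows quadratically in the perturbation amplitude and the unperturbed straight line is the minimizer, matching the numerical result.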

View the full Wikipedia page for Virtual work

Calculus of variations in the context of Functional (mathematics)

In mathematics, the term functional refers to several distinct concepts: a linear map from a vector space into its field of scalars, a mapping from a space of functions into the real numbers, or, in computer science, a higher-order function taking functions as arguments. This article is mainly concerned with the second concept, which arose in the early 18th century as part of the calculus of variations. The first concept, which is more modern and abstract, is discussed in detail in a separate article, under the name linear form. The third concept is detailed in the computer science article on higher-order functions.

View the full Wikipedia page for Functional (mathematics)

Calculus of variations in the context of Euler–Lagrange equation

In the calculus of variations and classical mechanics, the Euler–Lagrange equations are a system of second-order ordinary differential equations whose solutions are stationary points of the given action functional. The equations were discovered in the 1750s by Swiss mathematician Leonhard Euler and Italian mathematician Joseph-Louis Lagrange.

Because a differentiable functional is stationary at its local extrema, the Euler–Lagrange equation is useful for solving optimization problems in which, given some functional, one seeks the function minimizing or maximizing it. This is analogous to Fermat's theorem in calculus, stating that at any point where a differentiable function attains a local extremum its derivative is zero. In Lagrangian mechanics, according to Hamilton's principle of stationary action, the evolution of a physical system is described by the solutions to the Euler equation for the action of the system. In this context Euler equations are usually called Lagrange equations. In classical mechanics, this formulation is equivalent to Newton's laws of motion: the Euler–Lagrange equations produce the same equations of motion. This is particularly useful when analyzing systems whose force vectors are complicated. It has the advantage that it takes the same form in any system of generalized coordinates, and it is better suited to generalizations. In classical field theory there is an analogous equation to calculate the dynamics of a field.
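The equivalence with Newton's laws can be seen in one line for a particle of mass $m$ moving in a potential $V(x)$:

```latex
\[
  L(x, \dot{x}) = \tfrac{1}{2} m \dot{x}^2 - V(x),
  \qquad
  \frac{\mathrm{d}}{\mathrm{d}t}\frac{\partial L}{\partial \dot{x}}
  - \frac{\partial L}{\partial x}
  = m\ddot{x} + V'(x) = 0,
\]
% so the Euler–Lagrange equation reproduces Newton's second law,
% m\ddot{x} = -V'(x) = F.
```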

View the full Wikipedia page for Euler–Lagrange equation