Simulated annealing in the context of Job-shop scheduling


⭐ Core Definition: Simulated annealing

Simulated annealing (SA) is a probabilistic technique for approximating the global optimum of a given function. Specifically, it is a metaheuristic for global optimization over a large search space. It is well suited to problems with many local optima, where greedy descent tends to get trapped, and it is often used when the search space is discrete, as in the traveling salesman problem, boolean satisfiability, protein structure prediction, and job-shop scheduling. For problems where a fixed amount of computing resource is available and an approximate global optimum is more valuable than a precise local optimum, SA may be preferable to exact algorithms such as branch and bound, or to local methods such as gradient descent. Problems solved by SA are typically formulated as the optimization of an objective function of many variables, subject to several constraints; in practice, a constraint violation can be penalized as a term added to the objective function.
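
To make this concrete, here is a minimal SA sketch in Python for a toy job-shop instance. The 3-job, 3-machine data, the operation-based encoding, and all parameter values are invented for illustration, not taken from any reference implementation. A candidate schedule is a sequence of job ids, decoded greedily into start times; a neighbor is produced by swapping two entries, and a worsening move is accepted with probability exp(-Δ/T) under a geometric cooling schedule:

    import math
    import random

    # Toy 3-job, 3-machine instance (hypothetical data): each job is an
    # ordered list of (machine, duration) operations.
    JOBS = [
        [(0, 3), (1, 2), (2, 2)],
        [(0, 2), (2, 1), (1, 4)],
        [(1, 4), (2, 3), (0, 1)],
    ]
    N_MACHINES = 1 + max(m for job in JOBS for m, _ in job)

    def makespan(seq):
        """Greedily decode an operation-based sequence (each job id
        appears once per operation) and return the schedule length."""
        next_op = [0] * len(JOBS)     # next unscheduled operation per job
        job_free = [0] * len(JOBS)    # time each job's previous op finishes
        mach_free = [0] * N_MACHINES  # time each machine becomes idle
        for j in seq:
            machine, dur = JOBS[j][next_op[j]]
            start = max(job_free[j], mach_free[machine])
            job_free[j] = mach_free[machine] = start + dur
            next_op[j] += 1
        return max(job_free)

    def anneal(steps=20_000, t0=10.0, alpha=0.9995):
        seq = [j for j in range(len(JOBS)) for _ in JOBS[j]]
        random.shuffle(seq)           # random initial schedule
        cost, temp = makespan(seq), t0
        best, best_cost = seq[:], cost
        for _ in range(steps):
            i, k = random.sample(range(len(seq)), 2)
            seq[i], seq[k] = seq[k], seq[i]      # neighbor: swap two entries
            new_cost = makespan(seq)
            # Metropolis rule: always accept improvements; accept a
            # worsening move with probability exp(-delta / T).
            if new_cost <= cost or random.random() < math.exp((cost - new_cost) / temp):
                cost = new_cost
                if cost < best_cost:
                    best, best_cost = seq[:], cost
            else:
                seq[i], seq[k] = seq[k], seq[i]  # undo the rejected move
            temp *= alpha                        # geometric cooling
        return best, best_cost

    if __name__ == "__main__":
        _, span = anneal()
        print("best makespan found:", span)

The operation-based encoding keeps every neighbor feasible by construction, so no penalty term is needed here; with a direct start-time encoding one would instead penalize precedence violations in the objective, as noted above.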

Similar techniques have been independently introduced on several occasions, including by Pincus (1970), Khachaturyan et al. (1979, 1981), Kirkpatrick, Gelatt and Vecchi (1983), and Černý (1985). In 1983, Kirkpatrick, Gelatt Jr., and Vecchi applied the approach to the traveling salesman problem; they also proposed its current name, simulated annealing.


In this Dossier

Simulated annealing in the context of Approximation algorithms

In computer science and operations research, approximation algorithms are efficient algorithms that find approximate solutions to optimization problems (in particular, NP-hard problems) with provable guarantees on the distance of the returned solution from the optimal one. Approximation algorithms arise naturally in theoretical computer science as a consequence of the widely believed P ≠ NP conjecture, under which a wide class of optimization problems cannot be solved exactly in polynomial time. The field therefore tries to understand how closely optimal solutions to such problems can be approximated in polynomial time. In the overwhelming majority of cases, the guarantee is a multiplicative one, expressed as an approximation ratio or approximation factor: the returned solution is guaranteed to be within a (predetermined) multiplicative factor of the optimal one. However, many approximation algorithms instead provide an additive guarantee on the quality of the returned solution. A notable algorithm providing both kinds of guarantee is the classic approximation algorithm of Lenstra, Shmoys and Tardos for scheduling on unrelated parallel machines.
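
In symbols, these two guarantee types are (a standard textbook formulation, stated here for a minimization problem):

    \[
    \text{multiplicative: } \mathrm{cost}\bigl(A(I)\bigr) \le \rho \cdot \mathrm{OPT}(I)
    \qquad
    \text{additive: } \mathrm{cost}\bigl(A(I)\bigr) \le \mathrm{OPT}(I) + c
    \]

for every instance \(I\), where \(A(I)\) is the returned solution, \(\rho \ge 1\) is the approximation ratio, and \(c\) is a fixed constant. For example, a 2-approximation for makespan minimization always returns a schedule at most twice as long as an optimal one.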

The design and analysis of approximation algorithms crucially involves a mathematical proof certifying the quality of the returned solutions in the worst case. This distinguishes them from heuristics such as simulated annealing or genetic algorithms, which often find reasonably good solutions in practice but give no clear indication at the outset of when they will succeed or fail.
