Objective function in the context of Optimal job scheduling


⭐ Core Definition: Objective function

In mathematical optimization and decision theory, a loss function or cost function (sometimes also called an error function) is a function that maps an event or values of one or more variables onto a real number intuitively representing some "cost" associated with the event. An optimization problem seeks to minimize a loss function. An objective function is either a loss function or its negative (in specific domains, variously called a reward function, a profit function, a utility function, a fitness function, etc.), in which case it is to be maximized. In hierarchical decision problems, the loss function may combine terms from several levels of the hierarchy.
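
As a minimal illustration (not from the article; the data and candidate values are invented), the following Python sketch treats squared error as a loss function and selects, among a few candidate estimates, the one that minimizes it:

# Minimal sketch: squared-error loss, minimized over invented candidates.
def squared_error_loss(estimate, true_value):
    # Map an (estimate, true value) pair onto a real-valued "cost".
    return (estimate - true_value) ** 2

true_value = 3.0
candidates = [1.0, 2.5, 3.2, 4.0]

# An optimization problem seeks the candidate that minimizes the loss.
best = min(candidates, key=lambda e: squared_error_loss(e, true_value))
print(best)  # 3.2 -- the candidate closest to the true value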

In statistics, typically a loss function is used for parameter estimation, and the event in question is some function of the difference between estimated and true values for an instance of data. The concept, as old as Laplace, was reintroduced in statistics by Abraham Wald in the middle of the 20th century. In the context of economics, for example, this is usually economic cost or regret. In classification, it is the penalty for an incorrect classification of an example. In actuarial science, it is used in an insurance context to model benefits paid over premiums, particularly since the works of Harald Cramér in the 1920s. In optimal control, the loss is the penalty for failing to achieve a desired value. In financial risk management, the function is mapped to a monetary loss.

👉 Objective function in the context of Optimal job scheduling

Optimal job scheduling is a class of optimization problems related to scheduling. The inputs to such problems are a list of jobs (also called processes or tasks) and a list of machines (also called processors or workers). The required output is a schedule – an assignment of jobs to machines. The schedule should optimize a certain objective function. In the literature, problems of optimal job scheduling are often called machine scheduling, processor scheduling, multiprocessor scheduling, load balancing, or just scheduling.

There are many different problems of optimal job scheduling, differing in the nature of the jobs, the nature of the machines, the restrictions on the schedule, and the objective function. A convenient notation for optimal scheduling problems was introduced by Ronald Graham, Eugene Lawler, Jan Karel Lenstra and Alexander Rinnooy Kan. It consists of three fields: α, β and γ. Each field may be a comma-separated list of words. The α field describes the machine environment, β the job characteristics and constraints, and γ the objective function. Since its introduction in the late 1970s, the notation has been continually extended, sometimes inconsistently. As a result, some problems today appear under distinct notations in several papers.
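
In this notation, for instance, P||C_max denotes identical parallel machines (α = P), no additional constraints (empty β), and minimization of the makespan C_max (γ). As a hedged illustration (the job lengths are invented), the Python sketch below applies the classic longest-processing-time list-scheduling heuristic to this objective, assigning each job to the currently least-loaded machine:

# Hedged sketch: greedy LPT list scheduling for P||C_max (identical
# parallel machines, no constraints, minimize the makespan C_max).
import heapq

def greedy_makespan(job_lengths, num_machines):
    loads = [0.0] * num_machines                  # current load per machine
    heap = [(0.0, m) for m in range(num_machines)]
    heapq.heapify(heap)
    for p in sorted(job_lengths, reverse=True):   # LPT order improves the bound
        load, m = heapq.heappop(heap)             # least-loaded machine
        loads[m] = load + p
        heapq.heappush(heap, (loads[m], m))
    return max(loads)                             # the objective value C_max

print(greedy_makespan([3, 5, 2, 7, 4, 6], num_machines=2))  # 14.0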

In this Dossier

Objective function in the context of Utility

In economics, utility is a measure of a person's satisfaction with a given state of the world. Over time, the term has been used with at least two meanings.

The relationship between these two kinds of utility functions has been a source of controversy among both economists and ethicists, with most maintaining that the two are distinct but generally related.

View the full Wikipedia page for Utility

Objective function in the context of Productive efficiency

In microeconomic theory, productive efficiency (or production efficiency) is a situation in which the economy or an economic system (e.g., bank, hospital, industry, country) operating within the constraints of current industrial technology cannot increase production of one good without sacrificing production of another good. In simple terms, the concept is illustrated on a production possibility frontier (PPF), where all points on the curve are points of productive efficiency. An equilibrium may be productively efficient without being allocatively efficient — i.e. it may result in a distribution of goods where social welfare is not maximized (bearing in mind that social welfare is a nebulous objective function subject to political controversy).

Productive efficiency is an aspect of economic efficiency that focuses on maximizing output of a chosen product portfolio, without concern for whether that portfolio contains goods in the right proportions; misapplied, it merely helps manufacture the wrong basket of outputs faster and more cheaply than before.

View the full Wikipedia page for Productive efficiency

Objective function in the context of Integer programming

An integer programming problem is a mathematical optimization or feasibility program in which some or all of the variables are restricted to be integers. In many settings the term refers to integer linear programming (ILP), in which the objective function and the constraints (other than the integer constraints) are linear.

Integer programming is NP-complete; in the proof, the difficult part is showing membership in NP, i.e., that a feasible solution, if one exists, can be certified by one of polynomial size. In particular, the special case of 0–1 integer linear programming, in which the unknowns are binary and only feasibility (not optimization) is required, is one of Karp's 21 NP-complete problems.
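
As a hedged, toy illustration (the coefficients are invented, and exhaustive search is viable only at this tiny scale, consistent with the NP-completeness above), the Python sketch below solves a small 0–1 integer linear program by enumerating all binary assignments:

# Hedged sketch: brute-force solution of a tiny 0-1 integer linear program.
#   maximize   5*x1 + 4*x2 + 3*x3
#   subject to 2*x1 + 3*x2 + x3 <= 4,   x1, x2, x3 in {0, 1}
from itertools import product

c = [5, 4, 3]          # objective coefficients
a = [2, 3, 1]          # constraint coefficients
b = 4                  # right-hand side

best_x, best_val = None, float("-inf")
for x in product([0, 1], repeat=3):
    if sum(ai * xi for ai, xi in zip(a, x)) <= b:      # feasibility check
        val = sum(ci * xi for ci, xi in zip(c, x))     # objective value
        if val > best_val:
            best_x, best_val = x, val

print(best_x, best_val)  # (1, 0, 1) with objective value 8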

View the full Wikipedia page for Integer programming

Objective function in the context of Continuous optimization

Continuous optimization is a branch of optimization in applied mathematics.

As opposed to discrete optimization, the variables used in the objective function are required to be continuous variables—that is, to be chosen from a set of real values between which there are no gaps (values from intervals of the real line). Because of this continuity assumption, continuous optimization allows the use of calculus techniques.
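
As a generic sketch of the calculus techniques this permits (the function, step size, and iteration count are chosen purely for illustration), the following Python code minimizes a smooth one-variable objective by gradient descent:

# Generic sketch: gradient descent on the smooth objective f(x) = (x - 2)^2 + 1.
def f(x):
    return (x - 2.0) ** 2 + 1.0

def df(x):                        # derivative, available thanks to continuity
    return 2.0 * (x - 2.0)

x = 10.0                          # arbitrary starting point
for _ in range(100):
    x -= 0.1 * df(x)              # step against the gradient

print(round(x, 4), round(f(x), 4))  # close to the minimizer x = 2, f = 1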

View the full Wikipedia page for Continuous optimization

Objective function in the context of Feasible region

In mathematical optimization and computer science, a feasible region, feasible set, or solution space is the set of all possible points (sets of values of the choice variables) of an optimization problem that satisfy the problem's constraints, potentially including inequalities, equalities, and integer constraints. This is the initial set of candidate solutions to the problem, before the set of candidates has been narrowed down.

For example, consider the problem of minimizing the function x^2 + y^4 with respect to the variables x and y, subject to 1 ≤ x ≤ 10 and 5 ≤ y ≤ 12. Here the feasible set is the set of pairs (x, y) in which the value of x is at least 1 and at most 10 and the value of y is at least 5 and at most 12. The feasible set of the problem is separate from the objective function, which states the criterion to be optimized and which in the above example is x^2 + y^4.
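
A small Python sketch of this example (the box constraints are taken from the text above) separates the feasibility test from the objective and searches a coarse integer grid over the feasible region only:

# Sketch of the example above: the feasible set is all (x, y) with
# 1 <= x <= 10 and 5 <= y <= 12; the objective x**2 + y**4 is separate.
def is_feasible(x, y):
    return 1 <= x <= 10 and 5 <= y <= 12

def objective(x, y):
    return x ** 2 + y ** 4

# Coarse integer grid search restricted to the feasible region.
candidates = [(x, y) for x in range(1, 11) for y in range(5, 13)
              if is_feasible(x, y)]
best = min(candidates, key=lambda p: objective(*p))
print(best, objective(*best))  # (1, 5) 626 -- a corner of the box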

View the full Wikipedia page for Feasible region

Objective function in the context of Linear programming

Linear programming (LP), also called linear optimization, is a method to achieve the best outcome (such as maximum profit or lowest cost) in a mathematical model whose requirements and objective are represented by linear relationships. Linear programming is a special case of mathematical programming (also known as mathematical optimization).

More formally, linear programming is a technique for the optimization of a linear objective function, subject to linear equality and linear inequality constraints. Its feasible region is a convex polytope, which is a set defined as the intersection of finitely many half spaces, each of which is defined by a linear inequality. Its objective function is a real-valued affine (linear) function defined on this polytope. A linear programming algorithm finds a point in the polytope where this function has the largest (or smallest) value if such a point exists.
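
As a hedged sketch (it assumes SciPy is available, and the instance itself is invented), the following solves a tiny linear program with scipy.optimize.linprog; the optimum is attained at a vertex of the feasible polytope:

# Hedged sketch: a tiny linear program solved with scipy.optimize.linprog.
#   maximize 3x + 2y   subject to  x + y <= 4,  x + 3y <= 6,  x, y >= 0
# linprog minimizes, so the objective coefficients are negated.
from scipy.optimize import linprog

res = linprog(c=[-3, -2],                 # negated for maximization
              A_ub=[[1, 1], [1, 3]],      # left-hand sides of <= constraints
              b_ub=[4, 6],                # right-hand sides
              bounds=[(0, None), (0, None)])

print(res.x, -res.fun)  # optimum at the vertex x = 4, y = 0, with value 12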

View the full Wikipedia page for Linear programming

Objective function in the context of Fitness function

A fitness function is a particular type of objective or cost function that is used to summarize, as a single figure of merit, how close a given candidate solution is to achieving the set aims. It is an important component of evolutionary algorithms (EA), such as genetic programming, evolution strategies or genetic algorithms. An EA is a metaheuristic that reproduces the basic principles of biological evolution as a computer algorithm in order to solve challenging optimization or planning tasks, at least approximately. For this purpose, many candidate solutions are generated, which are evaluated using a fitness function in order to guide the evolutionary development towards the desired goal. Similar quality functions are also used in other metaheuristics, such as ant colony optimization or particle swarm optimization.

In the field of EAs, each candidate solution, also called an individual, is commonly represented as a string of numbers (referred to as a chromosome). After each round of testing or simulation, the idea is to delete the n worst individuals and to breed n new ones from the best solutions. Each individual must therefore be assigned a quality number indicating how close it has come to the overall specification, and this is generated by applying the fitness function to the test or simulation results obtained from that candidate solution.
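
The Python sketch below is a minimal, illustrative genetic algorithm for the classic OneMax problem (maximize the count of 1-bits); all parameters are invented, and real EAs use more elaborate selection and variation operators:

# Hedged sketch: a minimal genetic algorithm maximizing the OneMax fitness.
import random

random.seed(0)
LENGTH, POP, GENERATIONS, N_REPLACE = 20, 30, 40, 10

def fitness(chromosome):              # single figure of merit per individual
    return sum(chromosome)

def mutate(chromosome, rate=0.05):
    return [1 - g if random.random() < rate else g for g in chromosome]

def crossover(a, b):
    cut = random.randrange(1, LENGTH)
    return a[:cut] + b[cut:]

population = [[random.randint(0, 1) for _ in range(LENGTH)] for _ in range(POP)]
for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    # Delete the n worst individuals; breed n new ones from the best.
    parents = population[:POP - N_REPLACE]
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(N_REPLACE)]
    population = parents + children

print(max(fitness(ind) for ind in population))  # approaches LENGTH = 20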

View the full Wikipedia page for Fitness function

Objective function in the context of Optimal control

Optimal control theory is a branch of control theory that deals with finding a control for a dynamical system over a period of time such that an objective function is optimized. It has numerous applications in science, engineering and operations research. For example, the dynamical system might be a spacecraft with controls corresponding to rocket thrusters, and the objective might be to reach the Moon with minimum fuel expenditure. Or the dynamical system could be a nation's economy, with the objective to minimize unemployment; the controls in this case could be fiscal and monetary policy. A dynamical system may also be introduced to embed operations research problems within the framework of optimal control theory.

Optimal control is an extension of the calculus of variations, and is a mathematical optimization method for deriving control policies. The method is largely due to the work of Lev Pontryagin and Richard Bellman in the 1950s, after contributions to the calculus of variations by Edward J. McShane. Optimal control can be seen as a control strategy in control theory.
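
As a hedged toy example (the dynamics, costs, and control grid are invented, and real optimal control relies on Pontryagin's principle or dynamic programming rather than exhaustive search), the Python sketch below picks the control sequence minimizing a quadratic objective for a scalar linear system:

# Hedged toy example: discrete-time optimal control by exhaustive search.
# System x_{t+1} = x_t + u_t over 3 steps; cost sums x_t^2 + u_t^2
# (state error plus a "fuel"-like control penalty) plus a terminal cost.
from itertools import product

controls = [-2.0, -1.0, 0.0, 1.0, 2.0]
x0, horizon = 4.0, 3

def total_cost(u_seq, x=x0):
    cost = 0.0
    for u in u_seq:
        cost += x ** 2 + u ** 2   # running cost
        x += u                    # dynamics
    return cost + x ** 2          # terminal cost

best = min(product(controls, repeat=horizon), key=total_cost)
print(best, total_cost(best))  # (-2.0, -1.0, -1.0) with total cost 27.0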

View the full Wikipedia page for Optimal control

Objective function in the context of Simulated annealing

Simulated annealing (SA) is a probabilistic technique for approximating the global optimum of a given function. Specifically, it is a metaheuristic to approximate global optimization in a large search space for an optimization problem. For problems with many local optima, SA can often find a good approximation to the global optimum. It is often used when the search space is discrete (for example the traveling salesman problem, the boolean satisfiability problem, protein structure prediction, and job-shop scheduling). For problems where a fixed amount of computing resource is available, finding an approximate global optimum may be more relevant than attempting to find a precise local optimum. In such cases, SA may be preferable to exact algorithms such as gradient descent or branch and bound. Problems solved by SA are typically formulated as an objective function of many variables, subject to several mathematical constraints. In practice, a constraint violation can be penalized as part of the objective function.
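
The Python sketch below is a minimal, illustrative SA loop (the objective, neighbor move, and cooling schedule are all invented): improvements are always accepted, while worsening moves are accepted with a probability that shrinks as the temperature cools, which is what lets SA escape local optima:

# Hedged sketch: simulated annealing minimizing a bumpy one-variable objective.
import math, random

random.seed(1)

def objective(x):
    return x ** 2 + 10 * math.sin(3 * x)    # many local minima

x = 8.0                                     # arbitrary start
temperature = 10.0
while temperature > 1e-3:
    candidate = x + random.uniform(-1, 1)   # random neighbor
    delta = objective(candidate) - objective(x)
    # Accept improvements always; accept worsening moves with a
    # probability that shrinks as the temperature cools.
    if delta < 0 or random.random() < math.exp(-delta / temperature):
        x = candidate
    temperature *= 0.99                     # geometric cooling schedule

print(round(x, 3), round(objective(x), 3))  # near the global minimum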

Similar techniques have been independently introduced on several occasions, including Pincus (1970), Khachaturyan et al. (1979, 1981), Kirkpatrick, Gelatt and Vecchi (1983), and Černý (1985). In 1983, this approach was used by Kirkpatrick, Gelatt Jr., and Vecchi for a solution of the traveling salesman problem. They also proposed its current name, simulated annealing.

View the full Wikipedia page for Simulated annealing

Objective function in the context of Parameter space

The parameter space is the space of all possible parameter values that define a particular mathematical model. It is also sometimes called weight space, and is often a subset of finite-dimensional Euclidean space.

In statistics, parameter spaces are particularly useful for describing parametric families of probability distributions. They also form the background for parameter estimation. In the case of extremum estimators for parametric models, a certain objective function is maximized or minimized over the parameter space. Theorems of existence and consistency of such estimators require some assumptions about the topology of the parameter space. For instance, compactness of the parameter space, together with continuity of the objective function, suffices for the existence of an extremum estimator.
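
As a hedged illustration (the data are invented), the Python sketch below computes an extremum estimator directly: a Gaussian log-likelihood in the mean is maximized over the compact parameter space [-5, 5], where compactness plus continuity of the objective guarantees a maximizer exists:

# Hedged sketch: an extremum estimator by direct search over a compact space.
import math

data = [1.2, 0.7, 1.9, 1.4, 0.8]

def log_likelihood(mu, sigma=1.0):
    return sum(-0.5 * math.log(2 * math.pi * sigma ** 2)
               - (x - mu) ** 2 / (2 * sigma ** 2) for x in data)

# Approximate the maximizer on a fine grid over [-5, 5].
grid = [-5 + 0.001 * i for i in range(10001)]
mu_hat = max(grid, key=log_likelihood)
print(round(mu_hat, 3))  # 1.2 -- the sample mean, as expected for this model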

View the full Wikipedia page for Parameter space

Objective function in the context of Vector optimization

Vector optimization is a subarea of mathematical optimization in which optimization problems with vector-valued objective functions are solved with respect to a given partial ordering and subject to certain constraints. A multi-objective optimization problem is a special case of a vector optimization problem: the objective space is a finite-dimensional Euclidean space partially ordered by the component-wise "less than or equal to" ordering.
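
As a small illustration of the component-wise ordering (the objective vectors are invented), the Python sketch below filters a set of points in a two-objective minimization problem down to its non-dominated (Pareto-optimal) elements:

# Hedged sketch: component-wise ordering for multi-objective minimization.
# A point dominates another if it is <= in every objective and strictly <
# in at least one; the non-dominated points form the Pareto front.
def dominates(a, b):
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

points = [(1, 5), (2, 3), (4, 1), (3, 4), (5, 5)]
pareto = [p for p in points if not any(dominates(q, p) for q in points)]
print(pareto)  # [(1, 5), (2, 3), (4, 1)] -- mutually incomparable vectors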

View the full Wikipedia page for Vector optimization