Heuristic (computer science) in the context of Space complexity


⭐ Core Definition: Heuristic (computer science)

In mathematical optimization and computer science, a heuristic (from Greek εὑρίσκω eurísko, "I find, discover") is a technique designed to solve a problem more quickly when classic methods are too slow to find an exact or approximate solution, or when classic methods fail to find any exact solution in a search space. This is achieved by trading optimality, completeness, accuracy, or precision for speed. In a way, a heuristic can be considered a shortcut.
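
As a toy illustration of that trade-off, the sketch below contrasts an exact 0/1 knapsack solver with a greedy value-density heuristic (the tiny instance and all names are illustrative): the heuristic runs in O(n log n) rather than O(2^n), but can return a suboptimal packing.

```python
from itertools import combinations

def knapsack_exact(items, capacity):
    """Exact: enumerate all 2^n subsets. Optimal, but exponential time."""
    best = 0
    for r in range(len(items) + 1):
        for subset in combinations(items, r):
            if sum(w for w, _ in subset) <= capacity:
                best = max(best, sum(v for _, v in subset))
    return best

def knapsack_greedy(items, capacity):
    """Heuristic: take items in order of value density.
    O(n log n), but may return a suboptimal packing."""
    total, remaining = 0, capacity
    for w, v in sorted(items, key=lambda wv: wv[1] / wv[0], reverse=True):
        if w <= remaining:
            remaining -= w
            total += v
    return total

items = [(1, 2), (5, 6)]           # (weight, value) pairs
print(knapsack_exact(items, 5))    # 6 -- optimal: take the heavy item
print(knapsack_greedy(items, 5))   # 2 -- the density-first pick blocks it
```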

A heuristic function, also simply called a heuristic, is a function that ranks alternatives in search algorithms at each branching step based on available information to decide which branch to follow. For example, it may approximate the exact solution.
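
For instance, a common heuristic function in grid pathfinding is Manhattan distance. In the sketch below (the grid, names, and instance are illustrative), it ranks frontier cells in a greedy best-first search, which expands few cells but does not guarantee a shortest path.

```python
import heapq

def manhattan(cell, goal):
    """Heuristic function: estimated remaining distance to the goal."""
    return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

def greedy_best_first(grid, start, goal):
    """At each branching step, expand the frontier cell the heuristic
    ranks best. Fast, but the path found may not be shortest."""
    frontier = [(manhattan(start, goal), start)]
    came_from = {start: None}
    while frontier:
        _, cell = heapq.heappop(frontier)
        if cell == goal:                      # reconstruct path
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nxt[0] < len(grid) and 0 <= nxt[1] < len(grid[0])
                    and grid[nxt[0]][nxt[1]] == 0 and nxt not in came_from):
                came_from[nxt] = cell
                heapq.heappush(frontier, (manhattan(nxt, goal), nxt))
    return None  # no path exists

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]  # 1 = wall
print(greedy_best_first(grid, (0, 0), (2, 0)))
```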

In this Dossier

Heuristic (computer science) in the context of Algorithm

In mathematics and computer science, an algorithm (/ˈælɡərɪðəm/) is a finite sequence of mathematically rigorous instructions, typically used to solve a class of specific problems or to perform a computation. Algorithms are used as specifications for performing calculations and data processing. More advanced algorithms can use conditionals to divert the code execution through various routes (referred to as automated decision-making) and deduce valid inferences (referred to as automated reasoning).

In contrast, a heuristic is an approach to solving problems without well-defined correct or optimal results. For example, although social media recommender systems are commonly called "algorithms", they actually rely on heuristics as there is no truly "correct" recommendation.

View the full Wikipedia page for Algorithm

Heuristic (computer science) in the context of Computational phylogenetics

Computational phylogenetics, phylogeny inference, or phylogenetic inference focuses on computational and optimization algorithms, heuristics, and approaches involved in phylogenetic analyses. The goal is to find a phylogenetic tree representing optimal evolutionary ancestry between a set of genes, species, or taxa. Maximum likelihood, parsimony, Bayesian, and minimum evolution are typical optimality criteria used to assess how well a phylogenetic tree topology describes the sequence data. Nearest Neighbour Interchange (NNI), Subtree Prune and Regraft (SPR), and Tree Bisection and Reconnection (TBR), known as tree rearrangements, are deterministic algorithms used to search for an optimal or near-optimal phylogenetic tree. The space of candidate trees explored during this search, and its landscape, is known as the phylogeny search space.

The maximum likelihood (also simply likelihood) optimality criterion is the process of finding the tree topology, along with its branch lengths, that provides the highest probability of observing the sequence data, while the parsimony optimality criterion seeks the tree that requires the fewest evolutionary state changes to explain the sequence data.
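
As a rough sketch of a single tree-rearrangement move, the snippet below performs an NNI around one internal edge of an unrooted tree, using an illustrative tuple encoding ((A, B), (C, D)) for the four subtrees that edge separates; real phylogenetics software uses much richer tree structures.

```python
def nni_neighbors(tree):
    """Nearest Neighbour Interchange around one internal edge.

    The edge separates four subtrees arranged as ((A, B), (C, D));
    swapping one subtree across the edge yields the only two
    alternative unrooted topologies for that edge."""
    (a, b), (c, d) = tree
    return [((a, c), (b, d)), ((a, d), (b, c))]

# Four taxa around one internal edge (each subtree here is a single leaf).
tree = (("human", "chimp"), ("mouse", "rat"))
for neighbor in nni_neighbors(tree):
    print(neighbor)
```

A hill-climbing search would score each neighbor under a chosen optimality criterion (parsimony, likelihood, etc.) and move to the best-scoring topology, repeating until no rearrangement improves the score.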

View the full Wikipedia page for Computational phylogenetics

Heuristic (computer science) in the context of Approximation algorithms

In computer science and operations research, approximation algorithms are efficient algorithms that find approximate solutions to optimization problems (in particular NP-hard problems) with provable guarantees on the distance of the returned solution to the optimal one. Approximation algorithms naturally arise in the field of theoretical computer science as a consequence of the widely believed P ≠ NP conjecture. Under this conjecture, a wide class of optimization problems cannot be solved exactly in polynomial time. The field of approximation algorithms, therefore, tries to understand how closely it is possible to approximate optimal solutions to such problems in polynomial time. In the overwhelming majority of cases, the guarantee of such algorithms is a multiplicative one expressed as an approximation ratio or approximation factor, i.e., the optimal solution is always guaranteed to be within a (predetermined) multiplicative factor of the returned solution. However, there are also many approximation algorithms that provide an additive guarantee on the quality of the returned solution. A notable example of an approximation algorithm that provides both is the classic approximation algorithm of Lenstra, Shmoys and Tardos for scheduling on unrelated parallel machines.

The design and analysis of approximation algorithms crucially involves a mathematical proof certifying the quality of the returned solutions in the worst case. This distinguishes them from heuristics such as simulated annealing or genetic algorithms, which find reasonably good solutions on some inputs but provide no clear indication at the outset of when they may succeed or fail.
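
To make the distinction concrete, here is a minimal sketch of the classic maximal-matching 2-approximation for minimum vertex cover (the instance and names are illustrative); unlike a heuristic, its worst-case guarantee follows directly from how it is constructed.

```python
def vertex_cover_2approx(edges):
    """Maximal-matching 2-approximation for minimum vertex cover.

    The edges taken are vertex-disjoint, so they form a matching, and
    any cover (including the optimum) must contain at least one endpoint
    of each matched edge; taking both endpoints therefore guarantees
    |cover| <= 2 * |optimal cover| on every input."""
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.update((u, v))  # take both endpoints of an uncovered edge
    return cover

edges = [(1, 2), (2, 3), (3, 4), (4, 5)]       # a path on 5 vertices
print(vertex_cover_2approx(edges))             # {1, 2, 3, 4}; optimum is {2, 4}
```

On this instance the ratio of 2 is actually attained, which is consistent with the guarantee: the bound holds in the worst case, and the proof tells you up front how bad the output can ever be.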

View the full Wikipedia page for Approximation algorithms

Heuristic (computer science) in the context of Metaheuristics

In computer science and mathematical optimization, a metaheuristic is a higher-level procedure or heuristic designed to find, generate, tune, or select a heuristic (partial search algorithm) that may provide a sufficiently good solution to an optimization problem or a machine learning problem, especially with incomplete or imperfect information or limited computation capacity. Metaheuristics sample a subset of a solution space that is otherwise too large to be completely enumerated or otherwise explored. Metaheuristics may make relatively few assumptions about the optimization problem being solved and so may be usable for a variety of problems. They are of particular interest when exact or other (approximate) methods are not available or not expedient, either because the calculation time is too long or because, for example, the solution provided is too imprecise.

Compared to optimization algorithms and iterative methods, metaheuristics do not guarantee that a globally optimal solution can be found on some class of problems. Many metaheuristics implement some form of stochastic optimization, so that the solution found depends on the set of random variables generated. In combinatorial optimization, many problems belong to the class of NP-complete problems and thus can no longer be solved exactly in acceptable time beyond a relatively small problem size. Metaheuristics then often provide good solutions with less computational effort than approximation methods, iterative methods, or simple heuristics. This also applies in the field of continuous or mixed-integer optimization. As such, metaheuristics are useful approaches for optimization problems. Several books and survey papers have been published on the subject. A literature review on metaheuristic optimization suggested that it was Fred Glover who coined the word metaheuristics.
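
As a minimal sketch of a stochastic metaheuristic, the simulated-annealing loop below minimizes a toy multimodal function; the objective, Gaussian neighborhood, and geometric cooling schedule are illustrative choices, not something the method prescribes.

```python
import math
import random

def simulated_annealing(f, x0, steps=10_000, t0=1.0, cooling=0.999):
    """Accept worse moves with probability exp(-delta/t) so the search
    can escape local minima; the temperature t shrinks over time.
    No guarantee of global optimality."""
    x, fx = x0, f(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(steps):
        candidate = x + random.gauss(0, 0.5)   # random neighbor of x
        fc = f(candidate)
        # Always accept improvements; sometimes accept worsenings.
        if fc < fx or random.random() < math.exp((fx - fc) / t):
            x, fx = candidate, fc
            if fx < fbest:
                best, fbest = x, fx
        t *= cooling                           # cool the temperature
    return best, fbest

# A toy objective with many local minima around x = 0.
f = lambda x: x * x + 10 * math.sin(3 * x)
print(simulated_annealing(f, x0=5.0))
```

Rerunning the loop produces different trajectories (and occasionally different answers), which is exactly the "dependent on the set of random variables generated" behavior described above.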

View the full Wikipedia page for Metaheuristics