Computational complexity in the context of Memory hierarchy


⭐ Core Definition: Computational complexity

In computer science, the computational complexity or simply complexity of an algorithm is the amount of resources required to run it. Particular focus is given to computation time (generally measured by the number of needed elementary operations) and memory storage requirements. The complexity of a problem is the complexity of the best algorithm that solves it.

The study of the complexity of explicitly given algorithms is called analysis of algorithms, while the study of the complexity of problems is called computational complexity theory. Both areas are highly related, as the complexity of an algorithm is always an upper bound on the complexity of the problem solved by this algorithm. Moreover, for designing efficient algorithms, it is often fundamental to compare the complexity of a specific algorithm to the complexity of the problem to be solved. Also, in most cases, the only thing that is known about the complexity of a problem is that it is no higher than the complexity of the most efficient known algorithms. Therefore, there is a large overlap between analysis of algorithms and complexity theory.
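
To make the resource counting concrete, the minimal Python sketch below counts comparisons for two algorithms that solve the same problem, membership search in a sorted list; either count is an upper bound on the problem's complexity, and the problem's complexity is at most that of the better algorithm. Function names are our own.

    def linear_search(xs, target):
        """Worst case: n comparisons."""
        comparisons = 0
        for x in xs:
            comparisons += 1
            if x == target:
                return True, comparisons
        return False, comparisons

    def binary_search(xs, target):
        """Worst case: about log2(n) probes of the list."""
        comparisons = 0
        lo, hi = 0, len(xs) - 1
        while lo <= hi:
            mid = (lo + hi) // 2
            comparisons += 1
            if xs[mid] == target:
                return True, comparisons
            elif xs[mid] < target:
                lo = mid + 1
            else:
                hi = mid - 1
        return False, comparisons

    xs = list(range(1_000_000))
    print(linear_search(xs, 999_999))   # (True, 1000000)
    print(binary_search(xs, 999_999))   # (True, ~20)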

Computational complexity in the context of Memory hierarchy

In computer architecture, the memory hierarchy separates computer storage into a hierarchy based on response time. Since response time, complexity, and capacity are related, the levels may also be distinguished by their performance and controlling technologies. Memory hierarchy affects performance in computer architectural design, algorithm predictions, and lower level programming constructs involving locality of reference.

Designing for high performance requires considering the restrictions of the memory hierarchy, i.e., the size and capabilities of each component. Each of the various components can be viewed as part of a hierarchy of memories (m1, m2, ..., mn) in which each member mi is typically smaller and faster than the next highest member mi+1 of the hierarchy. To limit waiting by higher levels, a lower level will respond by filling a buffer and then signaling that the transfer can begin.
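
The performance effect of the hierarchy can be observed directly: traversing the same data in cache-friendly versus cache-hostile order changes the running time even though the number of operations is identical. A rough Python sketch (NumPy is assumed; actual timings depend on the machine's cache sizes):

    import time
    import numpy as np

    n = 4000
    a = np.zeros((n, n))        # C order: each row is contiguous in memory

    def sweep_rows(m):
        # Contiguous accesses: consecutive elements share cache lines.
        return sum(m[i, :].sum() for i in range(m.shape[0]))

    def sweep_cols(m):
        # Strided accesses: each element sits on a different cache line.
        return sum(m[:, j].sum() for j in range(m.shape[1]))

    for sweep in (sweep_rows, sweep_cols):
        t0 = time.perf_counter()
        sweep(a)
        print(sweep.__name__, round(time.perf_counter() - t0, 3), "s")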


Computational complexity in the context of Models of computation

In computer science, and more specifically in computability theory and computational complexity theory, a model of computation is a model that describes how an output of a mathematical function is computed given an input. A model describes how units of computations, memories, and communications are organized. The computational complexity of an algorithm can be measured given a model of computation. Using a model allows studying the performance of algorithms independently of the variations that are specific to particular implementations and specific technology.
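
As an illustration of measuring cost relative to a model, here is a minimal single-tape Turing machine simulator in Python (a toy encoding of our own, not a standard library); the step count it reports is the complexity of the run under that model of computation:

    def run_tm(transitions, tape, start, accept):
        """transitions maps (state, symbol) to (state, symbol, move), move in {-1, +1}."""
        cells = dict(enumerate(tape))          # sparse tape, '_' = blank
        state, head, steps = start, 0, 0
        while state != accept:
            symbol = cells.get(head, '_')
            state, write, move = transitions[(state, symbol)]
            cells[head] = write
            head += move
            steps += 1
        return steps

    # A machine that flips every bit of its input, halting at the first blank.
    flip = {
        ('scan', '0'): ('scan', '1', +1),
        ('scan', '1'): ('scan', '0', +1),
        ('scan', '_'): ('done', '_', +1),
    }
    print(run_tm(flip, "1011", 'scan', 'done'))   # 5 steps: n + 1 for n = 4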

View the full Wikipedia page for Models of computation

Computational complexity in the context of Computational mathematics

Computational mathematics is the study of the interaction between mathematics and calculations done by a computer.

A large part of computational mathematics consists, roughly, of using mathematics to enable and improve computer computation in areas of science and engineering where mathematics is useful. In particular, this involves algorithm design, computational complexity, numerical methods, and computer algebra.
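
As one small instance of the numerical-methods side, the Python sketch below applies Newton's method to approximate a square root; how many iterations a given accuracy requires is itself a computational-complexity question:

    # Newton's method for f(x) = x^2 - a: x_{k+1} = (x_k + a / x_k) / 2.
    # Convergence is quadratic: the number of correct digits roughly
    # doubles at every step.
    def newton_sqrt(a, tol=1e-12):
        x = a
        steps = 0
        while abs(x * x - a) > tol:
            x = (x + a / x) / 2
            steps += 1
        return x, steps

    print(newton_sqrt(2.0))   # (1.4142135623730951, ~6)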

View the full Wikipedia page for Computational mathematics

Computational complexity in the context of Polynomial time

In theoretical computer science, the time complexity is the computational complexity that describes the amount of computer time it takes to run an algorithm. Time complexity is commonly estimated by counting the number of elementary operations performed by the algorithm, supposing that each elementary operation takes a fixed amount of time to perform. Thus, the amount of time taken and the number of elementary operations performed by the algorithm are taken to be related by a constant factor.

Since an algorithm's running time may vary among different inputs of the same size, one commonly considers the worst-case time complexity, which is the maximum amount of time required for inputs of a given size. Less common, and usually specified explicitly, is the average-case complexity, which is the average of the time taken on inputs of a given size (this makes sense because there are only a finite number of possible inputs of a given size). In both cases, the time complexity is generally expressed as a function of the size of the input. Since this function is generally difficult to compute exactly, and the running time for small inputs is usually not consequential, one commonly focuses on the behavior of the complexity when the input size increases—that is, the asymptotic behavior of the complexity. Therefore, the time complexity is commonly expressed using big O notation, typically O(n), O(n log n), O(n^α), O(2^n), etc., where n is the size in units of bits needed to represent the input.
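
The practical gap between these classes is easy to tabulate; the short Python loop below prints how each grows as the input size n increases:

    import math

    # Growth of the complexity classes named above as n grows.
    print(f"{'n':>4} {'n log n':>10} {'n^2':>8} {'2^n':>22}")
    for n in (8, 16, 32, 64):
        print(f"{n:>4} {n * math.log2(n):>10.0f} {n**2:>8} {2**n:>22}")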

View the full Wikipedia page for Polynomial time

Computational complexity in the context of Computational complexity theory

In theoretical computer science and mathematics, computational complexity theory focuses on classifying computational problems according to their resource usage, and explores the relationships between these classifications. A computational problem is a task solved by a computer; it is solvable by the mechanical application of mathematical steps, such as an algorithm.

A problem is regarded as inherently difficult if its solution requires significant resources, whatever the algorithm used. The theory formalizes this intuition by introducing mathematical models of computation to study these problems and quantifying their computational complexity, i.e., the amount of resources needed to solve them, such as time and storage. Other measures of complexity are also used, such as the amount of communication (used in communication complexity), the number of gates in a circuit (used in circuit complexity) and the number of processors (used in parallel computing). One of the roles of computational complexity theory is to determine the practical limits on what computers can and cannot do. The P versus NP problem, one of the seven Millennium Prize Problems, is part of the field of computational complexity.
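
The solve-versus-verify gap at the heart of P versus NP can be made concrete with a toy instance of the NP-complete subset-sum problem; the Python sketch below searches exponentially many subsets to find a solution, yet checks a proposed certificate in polynomial time:

    from itertools import combinations

    def solve(nums, target):
        """Brute force: tries up to 2^n subsets."""
        for r in range(len(nums) + 1):
            for subset in combinations(nums, r):
                if sum(subset) == target:
                    return subset
        return None

    def verify(nums, target, certificate):
        """Polynomial time: one pass over the certificate."""
        pool = list(nums)
        for x in certificate:
            if x not in pool:
                return False
            pool.remove(x)
        return sum(certificate) == target

    nums = [3, 34, 4, 12, 5, 2]
    cert = solve(nums, 9)
    print(cert, verify(nums, 9, cert))   # (4, 5) True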

View the full Wikipedia page for Computational complexity theory

Computational complexity in the context of Analysis of algorithms

In computer science, the analysis of algorithms is the process of finding the computational complexity of algorithms—the amount of time, storage, or other resources needed to execute them. Usually, this involves determining a function that relates the size of an algorithm's input to the number of steps it takes (its time complexity) or the number of storage locations it uses (its space complexity). An algorithm is said to be efficient when this function's values are small, or grow slowly compared to a growth in the size of the input. Different inputs of the same size may cause the algorithm to have different behavior, so best, worst and average case descriptions might all be of practical interest. When not otherwise specified, the function describing the performance of an algorithm is usually an upper bound, determined from the worst case inputs to the algorithm.
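
For instance, counting the comparisons that insertion sort performs on sorted, reversed, and random inputs exhibits all three case descriptions at once (a minimal Python sketch):

    import random

    def insertion_sort_comparisons(xs):
        """Return the number of comparisons insertion sort makes on xs."""
        xs = list(xs)
        comps = 0
        for i in range(1, len(xs)):
            j = i
            while j > 0:
                comps += 1
                if xs[j - 1] <= xs[j]:
                    break
                xs[j - 1], xs[j] = xs[j], xs[j - 1]
                j -= 1
        return comps

    n = 200
    best = insertion_sort_comparisons(range(n))           # sorted: ~n
    worst = insertion_sort_comparisons(range(n, 0, -1))   # reversed: ~n^2/2
    avg = sum(insertion_sort_comparisons(random.sample(range(n), n))
              for _ in range(50)) / 50                    # random: ~n^2/4
    print(best, worst, round(avg))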

The term "analysis of algorithms" was coined by Donald Knuth. Algorithm analysis is an important part of a broader computational complexity theory, which provides theoretical estimates for the resources needed by any algorithm which solves a given computational problem. These estimates provide an insight into reasonable directions of search for efficient algorithms.

View the full Wikipedia page for Analysis of algorithms

Computational complexity in the context of Transitive reduction

In the mathematical field of graph theory, a transitive reduction of a directed graph D is another directed graph with the same vertices and as few edges as possible, such that for all pairs of vertices v, w a (directed) path from v to w in D exists if and only if such a path exists in the reduction. Transitive reductions were introduced by Aho, Garey & Ullman (1972), who provided tight bounds on the computational complexity of constructing them.

More technically, the reduction is a directed graph that has the same reachability relation as D. Equivalently, D and its transitive reduction should have the same transitive closure as each other, and the transitive reduction of D should have as few edges as possible among all graphs with that property.
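
For a DAG, where the transitive reduction is unique and is a subgraph of D, a straightforward (not asymptotically optimal) construction simply deletes every edge that is implied by a longer path; a Python sketch:

    def reachable(adj, src, dst, skip_edge):
        """DFS reachability that ignores the single edge skip_edge."""
        stack, seen = [src], {src}
        while stack:
            u = stack.pop()
            if u == dst:
                return True
            for v in adj.get(u, []):
                if (u, v) != skip_edge and v not in seen:
                    seen.add(v)
                    stack.append(v)
        return False

    def transitive_reduction(adj):
        # Drop (u, v) whenever v is still reachable from u some other way.
        return {u: [v for v in vs if not reachable(adj, u, v, (u, v))]
                for u, vs in adj.items()}

    dag = {'a': ['b', 'c'], 'b': ['c'], 'c': []}
    print(transitive_reduction(dag))   # {'a': ['b'], 'b': ['c'], 'c': []}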

View the full Wikipedia page for Transitive reduction

Computational complexity in the context of Game semantics

Game semantics is an approach to formal semantics that grounds the concepts of truth or validity on game-theoretic concepts, such as the existence of a winning strategy for a player. In this framework, logical formulas are interpreted as defining games between two players. The term encompasses several related but distinct traditions, including dialogical logic (developed by Paul Lorenzen and Kuno Lorenz in Germany starting in the 1950s) and game-theoretical semantics (developed by Jaakko Hintikka in Finland).

Game semantics represents a significant departure from traditional model-theoretic approaches by emphasizing the dynamic, interactive nature of logical reasoning rather than static truth assignments. It provides intuitive interpretations for various logical systems, including classical logic, intuitionistic logic, linear logic, and modal logic. The approach bears conceptual resemblances to ancient Socratic dialogues, medieval theory of Obligationes, and constructive mathematics. Since the 1990s, game semantics has found important applications in theoretical computer science, particularly in the semantics of programming languages, concurrency theory, and the study of computational complexity.
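
The core idea, truth as the existence of a winning strategy, can be sketched for the propositional fragment: the Verifier chooses at disjunctions, the Falsifier chooses at conjunctions, and negation swaps the players' roles. A toy Python evaluator (the encoding of formulas as nested tuples is our own):

    def verifier_wins(formula, env, verifier=True):
        """True iff the original Verifier has a winning strategy."""
        op = formula[0]
        if op == 'var':
            # The current Verifier-role player wins iff the atom is true.
            return env[formula[1]] == verifier
        if op == 'not':
            return verifier_wins(formula[1], env, not verifier)
        left, right = formula[1], formula[2]
        if (op == 'or') == verifier:
            # The player to move picks the branch best for them.
            return (verifier_wins(left, env, verifier)
                    or verifier_wins(right, env, verifier))
        return (verifier_wins(left, env, verifier)
                and verifier_wins(right, env, verifier))

    # (p or q) and not p, with p=False, q=True: the formula is true,
    # so the Verifier has a winning strategy.
    f = ('and', ('or', ('var', 'p'), ('var', 'q')), ('not', ('var', 'p')))
    print(verifier_wins(f, {'p': False, 'q': True}))   # True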

View the full Wikipedia page for Game semantics

Computational complexity in the context of Satisfiability modulo theories

In computer science and mathematical logic, satisfiability modulo theories (SMT) is the problem of determining whether a mathematical formula is satisfiable. It generalizes the Boolean satisfiability problem (SAT) to more complex formulas involving real numbers, integers, and/or various data structures such as lists, arrays, bit vectors, and strings. The name is derived from the fact that these expressions are interpreted within ("modulo") a certain formal theory in first-order logic with equality (often disallowing quantifiers). SMT solvers are tools that aim to solve the SMT problem for a practical subset of inputs. SMT solvers such as Z3 and cvc5 have been used as a building block for a wide range of applications across computer science, including in automated theorem proving, program analysis, program verification, and software testing.

Since Boolean satisfiability is already NP-complete, the SMT problem is typically NP-hard, and for many theories it is undecidable. Researchers study which theories or subsets of theories lead to a decidable SMT problem and the computational complexity of decidable cases. The resulting decision procedures are often implemented directly in SMT solvers; see, for instance, the decidability of Presburger arithmetic. SMT can be thought of as a constraint satisfaction problem and thus a certain formalized approach to constraint programming.
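
As a small taste of what an SMT solver does, the sketch below asks Z3, via its Python bindings (the z3-solver package), for integers satisfying a mix of linear and nonlinear constraints:

    # Requires the z3-solver package (pip install z3-solver).
    from z3 import Ints, Solver, sat

    x, y = Ints('x y')
    s = Solver()
    # Satisfiability modulo the theory of integer arithmetic.
    s.add(x + y == 10, x > 0, y > 0, x * x < 30)
    if s.check() == sat:
        print(s.model())   # a satisfying assignment, e.g. [x = 5, y = 5]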

View the full Wikipedia page for Satisfiability modulo theories

Computational complexity in the context of Lighting simulation

Computer graphics lighting is the collection of techniques used to simulate light in computer graphics scenes. While lighting techniques offer flexibility in the level of detail and functionality available, they also operate at different levels of computational demand and complexity. Graphics artists can choose from a variety of light sources, models, shading techniques, and effects to suit the needs of each application.
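
At the cheap end of that computational spectrum sits Lambertian diffuse shading, which costs one dot product per surface point; a minimal Python sketch:

    import math

    def normalize(v):
        n = math.sqrt(sum(c * c for c in v))
        return tuple(c / n for c in v)

    def lambert(normal, to_light, light_intensity=1.0, albedo=0.8):
        """Diffuse intensity is proportional to cos(theta) between the
        surface normal and the direction to the light."""
        n, l = normalize(normal), normalize(to_light)
        cos_theta = max(0.0, sum(a * b for a, b in zip(n, l)))  # clamp back faces
        return albedo * light_intensity * cos_theta

    print(lambert((0, 1, 0), (0, 1, 0)))   # light overhead: 0.8
    print(lambert((0, 1, 0), (1, 1, 0)))   # 45 degrees: ~0.57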

View the full Wikipedia page for Lighting simulation

Computational complexity in the context of Hidden-line removal

In 3D computer graphics, solid objects are usually modeled by polyhedra. A face of a polyhedron is a planar polygon bounded by straight line segments, called edges. Curved surfaces are usually approximated by a polygon mesh. Computer programs for line drawings of opaque objects must be able to decide which edges or which parts of the edges are hidden by an object itself or by other objects, so that those edges can be clipped during rendering. This problem is known as hidden-line removal.

The first known solution to the hidden-line problem was devised by L. G. Roberts in 1963. However, it severely restricts the model: it requires that all objects be convex. Ruth A. Weiss of Bell Labs documented her 1964 solution to this problem in a 1965 paper. In 1966, Ivan E. Sutherland listed 10 unsolved problems in computer graphics. Problem number seven was "hidden-line removal". In terms of computational complexity, this problem was solved by Frank Devai in 1986.
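
Under Roberts' convexity restriction the problem is especially simple: for a single convex polyhedron, an edge is visible exactly when at least one of its two adjacent faces points toward the viewer, so hidden-line removal reduces to back-face culling. A Python sketch of that test (vectors are plain tuples; the conventions here are our own):

    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    def front_facing(face_normal, view_dir):
        # An outward normal that opposes the viewing direction faces the camera.
        return dot(face_normal, view_dir) < 0

    def edge_visible(normal_a, normal_b, view_dir):
        return front_facing(normal_a, view_dir) or front_facing(normal_b, view_dir)

    view = (0, 0, -1)                      # orthographic viewer looking along -z
    near, far, top = (0, 0, 1), (0, 0, -1), (0, 1, 0)
    print(edge_visible(near, top, view))   # True: shares a front-facing face
    print(edge_visible(far, top, view))    # False: no adjacent face is front-facing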

View the full Wikipedia page for Hidden-line removal

Computational complexity in the context of Discrete system

In theoretical computer science, a discrete system is a system with a countable number of states. Discrete systems may be contrasted with continuous systems, which may also be called analog systems. A finite discrete system is often modeled with a directed graph and is analyzed for correctness and complexity according to computational theory. Because discrete systems have a countable number of states, they may be described in precise mathematical models.

A computer is a finite-state machine that may be viewed as a discrete system. Because computers are often used to model not only other discrete systems but continuous systems as well, methods have been developed to represent real-world continuous systems as discrete systems. One such method involves sampling a continuous signal at discrete time intervals.
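
The sampling method mentioned above is short in practice: the Python sketch below turns the continuous signal sin(2πft) into a discrete sequence of states, one per sampling interval:

    import math

    # Sample sin(2*pi*f*t) at discrete intervals. fs = 50 Hz exceeds the
    # Nyquist rate 2f = 10 Hz, so the signal is recoverable in principle.
    f, fs = 5.0, 50.0
    samples = [math.sin(2 * math.pi * f * n / fs) for n in range(10)]
    print([round(s, 3) for s in samples])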

View the full Wikipedia page for Discrete system

Computational complexity in the context of Communication complexity

In theoretical computer science, communication complexity studies the amount of communication required to solve a problem when the input to the problem is distributed among two or more parties. The study of communication complexity was first introduced by Andrew Yao in 1979, while studying the problem of computation distributed among several machines. The problem is usually stated as follows: two parties (traditionally called Alice and Bob) each receive a (potentially different) n-bit string, x and y respectively. The goal is for Alice to compute the value of a certain function, f(x, y), that depends on both x and y, with the least amount of communication between them.

While Alice and Bob can always succeed by having Bob send his whole n-bit string to Alice (who then computes the function f), the idea here is to find clever ways of calculating f(x, y) with fewer than n bits of communication. Note that, unlike in computational complexity theory, communication complexity is not concerned with the amount of computation performed by Alice or Bob, or the size of the memory used, as we generally assume nothing about the computational power of either Alice or Bob.
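
A classic example is the EQUALITY function: deterministically, Alice and Bob must exchange n bits in the worst case, but a randomized fingerprint needs only O(log n) bits at the price of a small error probability. A Python sketch (shared randomness is assumed, and the fixed prime is our own choice):

    import random

    def naive_protocol(x, y):
        """Bob ships all n bits; Alice compares. Cost: n bits."""
        return x == y, len(x)

    def fingerprint_protocol(x, y, prime=2_147_483_647):
        """Alice sends one residue mod a prime: O(log n) bits. With the
        evaluation point r chosen at random, unequal strings collide with
        probability at most (n - 1) / (prime - 1)."""
        r = random.randrange(1, prime)
        fingerprint = lambda s: sum(int(b) * pow(r, i, prime)
                                    for i, b in enumerate(s)) % prime
        return fingerprint(x) == fingerprint(y), prime.bit_length()

    x = "1" * 1000
    y = "1" * 999 + "0"
    print(naive_protocol(x, y))         # (False, 1000)
    print(fingerprint_protocol(x, y))   # (False, 31) with high probability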

View the full Wikipedia page for Communication complexity

Computational complexity in the context of Donald Knuth

Donald Ervin Knuth (/kəˈnuːθ/ kə-NOOTH; born January 10, 1938) is an American computer scientist and mathematician. He is a professor emeritus at Stanford University. He is the 1974 recipient of the ACM Turing Award, informally considered the Nobel Prize of computer science. Knuth has been called the "father of the analysis of algorithms".

Knuth is the author of the multi-volume work The Art of Computer Programming. He contributed to the development of the rigorous analysis of the computational complexity of algorithms and systematized formal mathematical techniques for it. In the process, he also popularized the asymptotic notation. In addition to fundamental contributions in several branches of theoretical computer science, Knuth is the creator of the TeX computer typesetting system, the related METAFONT font definition language and rendering system, and the Computer Modern family of typefaces.

View the full Wikipedia page for Donald Knuth