Category: Analysis of algorithms

Pseudo-polynomial time
In computational complexity theory, a numeric algorithm runs in pseudo-polynomial time if its running time is a polynomial in the numeric value of the input (the largest integer present in the input), but not necessarily in the length of the input (the number of bits required to represent it), which is the case for polynomial-time algorithms.
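As a minimal illustrative sketch (not taken from the article), the textbook dynamic-programming solution to subset sum is pseudo-polynomial: its running time is O(n * target), polynomial in the numeric value of the target but exponential in the number of bits needed to write it. The helper name subset_sum is just for illustration.

def subset_sum(values, target):
    # reachable[s] is True when some subset of the values seen so far sums to s.
    reachable = [True] + [False] * target
    for v in values:
        # Iterate downward so each value is used at most once.
        for s in range(target, v - 1, -1):
            if reachable[s - v]:
                reachable[s] = True
    return reachable[target]

print(subset_sum([3, 34, 4, 12, 5, 2], 9))  # True (4 + 5)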
Branching factor
In computing, tree data structures, and game theory, the branching factor is the number of children at each node, the outdegree. If this value is not uniform, an average branching factor can be calculated.
Cache-oblivious distribution sort
The cache-oblivious distribution sort is a comparison-based sorting algorithm. It is similar to quicksort, but it is a cache-oblivious algorithm, designed for a setting where the number of elements to sort is too large to fit in a cache.
Dovetailing (computer science)
Dovetailing, in algorithm design, is a technique that interweaves different computations, performing them essentially simultaneously. Algorithms that use dovetailing are sometimes referred to as dovetailers.
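A minimal sketch of the idea (illustrative only; the function name dovetail is hypothetical): model each computation as a Python generator and execute one step of each in round-robin order, so that every computation keeps making progress even if some would individually run forever.

from itertools import count, islice

def dovetail(generators):
    # Interleave several (possibly infinite) computations by running one
    # step of each in turn, dropping a computation once it finishes.
    active = list(generators)
    while active:
        for gen in list(active):
            try:
                yield next(gen)       # perform one step of this computation
            except StopIteration:
                active.remove(gen)    # this computation has finished

squares = (n * n for n in count())    # infinite computation
cubes = (n ** 3 for n in count())     # infinite computation
finite = iter([10, 20, 30])           # finite computation

print(list(islice(dovetail([squares, cubes, finite]), 12)))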
Competitive analysis (online algorithm)
Competitive analysis is a method invented for analyzing online algorithms, in which the performance of an online algorithm (which must satisfy an unpredictable sequence of requests, completing each request without being able to see the future) is compared to the performance of an optimal offline algorithm that can view the sequence of requests in advance.
Randomized algorithm
A randomized algorithm is an algorithm that employs a degree of randomness as part of its logic or procedure. The algorithm typically uses uniformly random bits as an auxiliary input to guide its behavior, in the hope of achieving good performance in the average case over all possible choices of the random bits.
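A standard small example (a sketch, not tied to the article): randomized quickselect is a Las Vegas algorithm whose answer is always correct, while its expected O(n) running time depends on the random pivot choices. The function name quickselect is illustrative.

import random

def quickselect(items, k):
    """Return the k-th smallest element of items (k is 0-based)."""
    pivot = random.choice(items)                  # random choice drives the running time
    smaller = [x for x in items if x < pivot]
    equal = [x for x in items if x == pivot]
    larger = [x for x in items if x > pivot]
    if k < len(smaller):
        return quickselect(smaller, k)
    if k < len(smaller) + len(equal):
        return pivot
    return quickselect(larger, k - len(smaller) - len(equal))

print(quickselect([7, 1, 9, 4, 4, 8, 2], 3))      # 4, the 4th-smallest element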
Galactic algorithm
A galactic algorithm is one that outperforms any other algorithm for problems that are sufficiently large, but where "sufficiently large" is so big that the algorithm is never used in practice. Galactic algorithms were so named by Richard Lipton and Ken Regan, because they will never be used on any data sets on Earth.
Deterministic algorithm
In computer science, a deterministic algorithm is an algorithm that, given a particular input, will always produce the same output, with the underlying machine always passing through the same sequence of states.
Accounting method (computer science)
In the field of analysis of algorithms in computer science, the accounting method is a method of amortized analysis based on accounting. The accounting method often gives a more intuitive account of the amortized cost of an operation than either aggregate analysis or the potential method.
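A standard textbook illustration of the accounting method (a sketch, not drawn from this article's text): incrementing a binary counter n times, with a flat charge per increment.

\begin{aligned}
&\text{Charge } \hat{c}_i = 2 \text{ per increment: one unit pays for the single } 0 \to 1 \text{ flip,}\\
&\text{and one unit is banked on that bit to pay for its later } 1 \to 0 \text{ flip.}\\
&\text{Every actual bit flip is prepaid, so } \sum_{i=1}^{n} c_i \;\le\; \sum_{i=1}^{n} \hat{c}_i \;=\; 2n \;=\; O(n),\\
&\text{even though a single increment can flip } \Theta(\log n) \text{ bits in the worst case.}
\end{aligned}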
Bidimensionality
Bidimensionality theory characterizes a broad range of graph problems (bidimensional) that admit efficient approximate, fixed-parameter or kernel solutions in a broad range of graphs. These graph classes include planar graphs, map graphs, bounded-genus graphs, and graphs excluding any fixed minor.
Master theorem (analysis of algorithms)
In the analysis of algorithms, the master theorem for divide-and-conquer recurrences provides an asymptotic analysis (using Big O notation) for recurrence relations of types that occur in the analysis of many divide-and-conquer algorithms.
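For reference, a sketch of the usual statement and one worked case, following the common textbook formulation:

T(n) = a\,T\!\left(\tfrac{n}{b}\right) + f(n), \qquad a \ge 1,\; b > 1.

\text{Compare } f(n) \text{ with } n^{\log_b a}:
\begin{aligned}
&\text{Case 1: } f(n) = O\!\left(n^{\log_b a - \varepsilon}\right) &&\Rightarrow\; T(n) = \Theta\!\left(n^{\log_b a}\right),\\
&\text{Case 2: } f(n) = \Theta\!\left(n^{\log_b a} \log^{k} n\right) &&\Rightarrow\; T(n) = \Theta\!\left(n^{\log_b a} \log^{k+1} n\right),\\
&\text{Case 3: } f(n) = \Omega\!\left(n^{\log_b a + \varepsilon}\right) \text{ with a regularity condition} &&\Rightarrow\; T(n) = \Theta\!\left(f(n)\right).
\end{aligned}

\text{Example (mergesort): } T(n) = 2\,T\!\left(\tfrac{n}{2}\right) + \Theta(n) \text{ falls in Case 2 with } k = 0, \text{ so } T(n) = \Theta(n \log n).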
Funnelsort
Funnelsort is a comparison-based sorting algorithm. It is similar to mergesort, but it is a cache-oblivious algorithm, designed for a setting where the number of elements to sort is too large to fit in a cache.
Big O notation
Big O notation is a mathematical notation that describes the limiting behavior of a function when the argument tends towards a particular value or infinity. Big O is a member of a family of notations invented by Paul Bachmann, Edmund Landau, and others, collectively called Bachmann–Landau notation or asymptotic notation.
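For concreteness, the standard formal definition for the case x tending to infinity, together with a small worked example:

f(x) = O\bigl(g(x)\bigr) \text{ as } x \to \infty
\quad\Longleftrightarrow\quad
\exists\, M > 0,\; x_0 \text{ such that } |f(x)| \le M\,|g(x)| \text{ for all } x \ge x_0.

\text{Example: } 3x^2 + 5x + 7 = O(x^2), \text{ since } 3x^2 + 5x + 7 \le 15x^2 \text{ for all } x \ge 1.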
The Art of Computer Programming
The Art of Computer Programming (TAOCP) is a comprehensive monograph written by the computer scientist Donald Knuth presenting programming algorithms and their analysis. Volumes 1–5 are intended to represent the central core of computer programming for sequential machines.
Polylogarithmic function
In mathematics, a polylogarithmic function in n is a polynomial in the logarithm of n, such as a_k (log n)^k + ... + a_1 (log n) + a_0. The notation log^k n is often used as a shorthand for (log n)^k, analogous to sin^2 θ for (sin θ)^2. In computer science, polylogarithmic functions occur as the order of time for some algorithms (e.g., "it has polylogarithmic order").
Computational complexity
In computer science, the computational complexity or simply complexity of an algorithm is the amount of resources required to run it. Particular focus is given to computation time (generally measured by the number of needed elementary operations) and memory storage requirements.
Memory-bound function
Memory bound refers to a situation in which the time to complete a given computational problem is decided primarily by the amount of free memory required to hold the working data. This is in contrast to algorithms that are compute-bound, where the number of elementary computation steps is the deciding factor.
Combinatorial search
In computer science and artificial intelligence, combinatorial search studies search algorithms for solving instances of problems that are believed to be hard in general, by efficiently exploring the usually large solution space of these instances.
Entropy compression
In mathematics and theoretical computer science, entropy compression is an information theoretic method for proving that a random process terminates, originally used by Robin Moser to prove an algorithmic version of the Lovász local lemma.
Polynomial delay
In the analysis of algorithms, an enumeration algorithm (i.e., an algorithm for listing a large or infinite collection of structures) is said to have polynomial delay if the time between the output of any one solution and the next (and the time before the first output) is bounded by a polynomial function of the input size.
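A small sketch of the idea (illustrative; the helper name subsets is hypothetical): a backtracking generator that lists all subsets of an n-element set produces 2^n outputs, yet between any two consecutive outputs the recursion unwinds and rewinds at most O(n) stack frames, so the delay is polynomial in the input size.

def subsets(items):
    def backtrack(i, chosen):
        if i == len(items):
            yield list(chosen)            # output one solution
            return
        yield from backtrack(i + 1, chosen)   # exclude items[i]
        chosen.append(items[i])               # include items[i]
        yield from backtrack(i + 1, chosen)
        chosen.pop()
    yield from backtrack(0, [])

for s in subsets(['a', 'b', 'c']):
    print(s)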
Oblivious RAM
An oblivious RAM (ORAM) simulator is a compiler that transforms algorithms in such a way that the resulting algorithms preserve the input-output behavior of the original algorithm, but the distribution of the memory access patterns of the transformed algorithm is independent of the memory access pattern of the original algorithm.
Reservoir sampling
Reservoir sampling is a family of randomized algorithms for choosing a simple random sample, without replacement, of k items from a population of unknown size n in a single pass over the items. The size of the population n is not known to the algorithm and is typically too large for all n items to fit into main memory.
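A sketch of the simplest member of the family, often called Algorithm R (the function name reservoir_sample is illustrative): keep the first k items, then replace a uniformly random reservoir slot with the i-th item with probability k/i, so that after one pass every item is retained with probability k/n.

import random

def reservoir_sample(stream, k):
    reservoir = []
    for i, item in enumerate(stream, start=1):
        if i <= k:
            reservoir.append(item)          # fill the reservoir with the first k items
        else:
            j = random.randrange(i)         # uniform index in [0, i)
            if j < k:
                reservoir[j] = item         # keep the new item with probability k/i
    return reservoir

print(reservoir_sample(range(1_000_000), 5))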
Probabilistic analysis of algorithms
In analysis of algorithms, probabilistic analysis of algorithms is an approach to estimate the computational complexity of an algorithm or a computational problem. It starts from an assumption about a probabilistic distribution of the set of all possible inputs.
Half-exponential function
In mathematics, a half-exponential function is a functional square root of an exponential function. That is, a function f such that f composed with itself results in an exponential function: f(f(x)) = ab^x for some constants a and b.
Cache-oblivious algorithm
In computing, a cache-oblivious algorithm (or cache-transcendent algorithm) is an algorithm designed to take advantage of a processor cache without having the size of the cache (or the length of the cache lines, etc.) as an explicit parameter.
List update problem
The list update problem (or list access problem) is a simple model used in the study of competitive analysis of online algorithms. Given a set of items in a list where the cost of accessing an item is proportional to its distance from the head of the list (e.g. a linked list), and a sequence of access requests, the problem is to devise a strategy for reordering the list so that the total cost of the accesses is minimized.
Instruction path length
In computer performance, the instruction path length is the number of machine code instructions required to execute a section of a computer program. The total path length for the entire program could be deemed a measure of the algorithm's performance on particular computer hardware.
Mem (computing)
In computational complexity theory, computing efficiency, combinatorial optimization, supercomputing, computational cost (algorithmic efficiency) and other computational metrics, the mem is a measurement unit for the number of memory accesses used or needed by a process, function, instruction set, algorithm, or data structure.
External memory algorithm
In computing, external memory algorithms or out-of-core algorithms are algorithms that are designed to process data that are too large to fit into a computer's main memory at once. Such algorithms must be optimized to efficiently fetch and access data stored in slow bulk memory (auxiliary memory) such as hard drives or tape drives.
Potential method
In computational complexity theory, the potential method is a method used to analyze the amortized time and space complexity of a data structure, a measure of its performance over sequences of operations that smooths out the cost of infrequent but expensive operations.
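For reference, the defining relation and one standard example (a sketch in the usual textbook style):

\hat{c}_i \;=\; c_i + \Phi(D_i) - \Phi(D_{i-1}),
\qquad
\sum_{i=1}^{n} c_i \;=\; \sum_{i=1}^{n} \hat{c}_i \;-\; \bigl(\Phi(D_n) - \Phi(D_0)\bigr) \;\le\; \sum_{i=1}^{n} \hat{c}_i
\quad\text{whenever } \Phi(D_n) \ge \Phi(D_0).

\text{Example (stack with multipop): take } \Phi = \text{number of elements on the stack.}
\text{Then PUSH has } \hat{c} = 1 + 1 = 2,\; \text{POP has } \hat{c} = 1 - 1 = 0,\; \text{and a MULTIPOP of } m \text{ elements has } \hat{c} = m - m = 0,
\text{so any sequence of } n \text{ operations costs } O(n).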
Asymptotically optimal algorithm
In computer science, an algorithm is said to be asymptotically optimal if, roughly speaking, for large inputs it performs at worst a constant factor (independent of the input size) worse than the best possible algorithm.
Empirical algorithmics
In computer science, empirical algorithmics (or experimental algorithmics) is the practice of using empirical methods to study the behavior of algorithms. The practice combines algorithm development and experimentation: algorithms are not just designed, but also implemented and tested in a variety of situations.
Iterative compression
In computer science, iterative compression is an algorithmic technique for the design of fixed-parameter tractable algorithms, in which one element (such as a vertex of a graph) is added to the problem in each step, and a small solution for the problem prior to the addition is used to help find a small solution to the problem after the step.
Kernelization
In computer science, a kernelization is a technique for designing efficient algorithms that achieve their efficiency by a preprocessing stage in which inputs to the algorithm are replaced by a smaller input, called a kernel.
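A simplified, illustrative sketch of a classic example, Buss's kernelization for Vertex Cover parameterized by k (the helper name vertex_cover_kernel is hypothetical). It works on the edge set only, so isolated vertices never appear; a vertex of degree greater than k must be in every cover of size at most k, and after all such vertices are taken, more than k^2 remaining edges rule out a solution.

def vertex_cover_kernel(edge_list, k):
    forced = set()
    edges = {frozenset(e) for e in edge_list}
    changed = True
    while changed:
        changed = False
        degree = {}
        for e in edges:
            for v in e:
                degree[v] = degree.get(v, 0) + 1
        for v, d in degree.items():
            if d > k:
                forced.add(v)                            # v is forced into the cover
                edges = {e for e in edges if v not in e}
                k -= 1
                changed = True
                break
    if k < 0 or len(edges) > k * k:
        return None                                      # no cover of size <= original k
    return forced, edges, k                              # kernel: at most k^2 edges remain

# Usage: a star with 5 leaves and k = 2 forces the center vertex into the cover.
print(vertex_cover_kernel([(0, i) for i in range(1, 6)], 2))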
Time complexity
In computer science, the time complexity is the computational complexity that describes the amount of computer time it takes to run an algorithm. Time complexity is commonly estimated by counting the number of elementary operations performed by the algorithm, supposing that each elementary operation takes a fixed amount of time to perform.
Charging argument
In computer science, a charging argument is used to compare the output of an optimization algorithm to an optimal solution. It is typically used to show that an algorithm produces optimal results by proving the existence of a particular injective function.
Output-sensitive algorithm
In computer science, an output-sensitive algorithm is an algorithm whose running time depends on the size of the output, instead of, or in addition to, the size of the input. For certain problems where the output size varies widely, analyses that take the output size explicitly into account can produce better runtime bounds that differentiate algorithms that would otherwise have identical asymptotic complexity.
Amortized analysis
In computer science, amortized analysis is a method for analyzing a given algorithm's complexity, or how much of a resource, especially time or memory, it takes to execute. The motivation for amortized analysis is that looking at the worst-case run time of each operation separately can be too pessimistic; instead, the cost of a whole sequence of operations is averaged over that sequence.
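A standard example, sketched here for illustration (the helper name total_copies is hypothetical): appending n items to a dynamic array that doubles its capacity copies fewer than 2n elements in total, so the amortized cost per append is O(1), even though a single append can trigger Θ(n) copying.

def total_copies(n):
    # Count element copies caused by n appends to a doubling dynamic array.
    capacity, size, copies = 1, 0, 0
    for _ in range(n):
        if size == capacity:
            copies += size        # grow: copy all current elements to the new array
            capacity *= 2
        size += 1
    return copies

for n in (10, 1000, 1_000_000):
    print(n, total_copies(n), total_copies(n) / n)   # the ratio stays below 2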
Worst-case complexity
In computer science (specifically computational complexity theory), the worst-case complexity measures the resources (e.g. running time, memory) that an algorithm requires given an input of arbitrary size (commonly denoted as n in asymptotic notation). It gives an upper bound on the resources required by the algorithm.
Best, worst and average case
In computer science, best, worst, and average cases of a given algorithm express what the resource usage is at least, at most and on average, respectively. Usually the resource being considered is running time, i.e. time complexity, but could also be memory or some other resource.
Adversary model
In computer science, an online algorithm measures its competitiveness against different adversary models. For deterministic algorithms, the adversary is the same as the adaptive offline adversary. For randomized online algorithms, competitiveness can depend upon the adversary model used.
Analysis of algorithms
In computer science, the analysis of algorithms is the process of finding the computational complexity of algorithms—the amount of time, storage, or other resources needed to execute them. Usually, this involves determining a function that relates the size of an algorithm's input to the number of steps it takes (its time complexity) or the number of storage locations it uses (its space complexity).
Algorithmic efficiency
In computer science, algorithmic efficiency is a property of an algorithm which relates to the amount of computational resources used by the algorithm. An algorithm must be analyzed to determine its resource usage, and the efficiency of an algorithm can be measured based on the usage of different resources.
Klee–Minty cube
The Klee–Minty cube or Klee–Minty polytope (named after Victor Klee and George J. Minty) is a unit hypercube of variable dimension whose corners have been perturbed. Klee and Minty demonstrated that George Dantzig's simplex algorithm has poor worst-case performance when initialized at one corner of their "squashed cube".
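One common formulation of the Klee–Minty linear program in D variables, sketched here for reference (presentations of the cube vary in their exact constants):

\begin{aligned}
\max \quad & 2^{D-1} x_1 + 2^{D-2} x_2 + \dots + 2 x_{D-1} + x_D\\
\text{s.t.} \quad & x_1 \le 5,\\
& 4 x_1 + x_2 \le 25,\\
& 8 x_1 + 4 x_2 + x_3 \le 125,\\
& \;\;\vdots\\
& 2^{D} x_1 + 2^{D-1} x_2 + \dots + 4 x_{D-1} + x_D \le 5^{D},\\
& x_1, \dots, x_D \ge 0.
\end{aligned}

Starting from x = 0 with a suitable pivot rule, the simplex method can be made to visit all 2^D vertices of this deformed cube.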