Category: Algorithmic information theory

Linear partial information
Linear partial information (LPI) is a method of making decisions based on insufficient or fuzzy information. LPI was introduced in 1970 by the Polish–Swiss mathematician Edward Kofler (1911–2007) to simplify decision processes.
Queap
In computer science, a queap is a priority queue data structure. The data structure allows insertions and deletions of arbitrary elements, as well as retrieval of the highest-priority element. Each deletion takes amortized time logarithmic in the number of elements that have been in the queue longer than the removed element; insertions take constant amortized time. A sketch of the interface appears below.
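As a rough sketch of the interface only (the published queap achieves its amortized bounds with more elaborate machinery; this stand-in uses Python's heapq with lazy deletion, and every name is of my choosing):

    import heapq

    class SimplePriorityQueue:
        """Illustrates the queap operations: insert, delete, find-min."""
        def __init__(self):
            self._heap = []          # (priority, serial) entries
            self._removed = set()    # lazily deleted entries
            self._serial = 0

        def insert(self, priority):
            self._serial += 1
            entry = (priority, self._serial)
            heapq.heappush(self._heap, entry)
            return entry             # handle for a later delete

        def delete(self, entry):
            self._removed.add(entry)  # arbitrary-element deletion, done lazily

        def find_min(self):
            while self._heap and self._heap[0] in self._removed:
                self._removed.discard(heapq.heappop(self._heap))
            return self._heap[0][0] if self._heap else None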
Algorithm aversion
Algorithm aversion is "biased assessment of an algorithm which manifests in negative behaviours and attitudes towards the algorithm compared to a human agent." It describes a phenomenon where humans reject advice or judgments from an algorithm in cases where they would accept the same advice from a human.
Chaitin's constant
In the computer science subfield of algorithmic information theory, a Chaitin constant (Chaitin omega number) or halting probability is a real number that, informally speaking, represents the probability that a randomly constructed program will halt.
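For a prefix-free universal Turing machine F, the halting probability has the standard form

\[
\Omega_F = \sum_{p \,\in\, \mathrm{dom}(F)} 2^{-|p|},
\]

where the sum runs over all programs p on which F halts and |p| is the bit length of p; the value depends on the choice of machine.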
Pseudorandom ensemble
In cryptography, a pseudorandom ensemble is a family of distributions meeting the following criterion: let $U = \{U_n\}_{n \in \mathbb{N}}$ be a uniform ensemble and $X = \{X_n\}_{n \in \mathbb{N}}$ be an ensemble. The ensemble $X$ is called pseudorandom if $X$ and $U$ are indistinguishable in polynomial time.
Algorithmic probability
In algorithmic information theory, algorithmic probability, also known as Solomonoff probability, is a mathematical method of assigning a prior probability to a given observation. It was invented by Ray Solomonoff in the 1960s.
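With respect to a universal prefix machine U, the algorithmic (Solomonoff) prior of a string x is usually written

\[
m(x) = \sum_{p \,:\, U(p) = x} 2^{-|p|},
\]

so shorter programs contribute exponentially more weight: this is the formal sense in which simpler explanations receive higher prior probability.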
Solomonoff's theory of inductive inference
Solomonoff's theory of inductive inference is a mathematical proof that if a universe is generated by an algorithm, then observations of that universe, encoded as a dataset, are best predicted by the smallest executable archive of that dataset.
Iota and Jot
In formal language theory and computer science, Iota and Jot (from Greek iota ι and Hebrew yodh י, the smallest letters in those two alphabets) are extremely minimalist formal languages, designed to be even simpler than other popular alternatives such as the lambda calculus and SKI combinator calculus.
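Iota builds everything from a single combinator ι satisfying ιx = xSK. A minimal runnable sketch in Python (variable names are mine) recovers I, K, and S by iterating ι:

    # Iota's lone combinator: iota(f) = f S K, where S and K are the
    # usual combinators; iterating iota recovers I, K, and S.
    S = lambda x: lambda y: lambda z: x(z)(y(z))
    K = lambda x: lambda y: x
    iota = lambda f: f(S)(K)

    I_ = iota(iota)                       # behaves as the identity
    K_ = iota(iota(iota(iota)))           # recovers K
    S_ = iota(iota(iota(iota(iota))))     # recovers S

    assert I_(42) == 42
    assert K_("a")("b") == "a"
    assert S_(lambda a: lambda b: (a, b))(lambda a: a + 1)(3) == (3, 4)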
Berry paradox
The Berry paradox is a self-referential paradox arising from an expression like "The smallest positive integer not definable in under sixty letters" (a phrase with fifty-seven letters). Bertrand Russell, the first to discuss the paradox in print, attributed it to G. G. Berry (1867–1928), a junior librarian at Oxford's Bodleian Library.
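The parenthetical letter count can be checked mechanically; a plain one-off illustration in Python:

    # Letters only (spaces ignored): the phrase defines the integer in
    # 57 letters, fewer than the sixty it claims are insufficient.
    phrase = "The smallest positive integer not definable in under sixty letters"
    assert sum(c.isalpha() for c in phrase) == 57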
Algorithmic information theory
Algorithmic information theory (AIT) is a branch of theoretical computer science that concerns itself with the relationship between computation and information of computably generated objects (as opposed to stochastically generated ones), such as strings or any other data structure.
Randomness test
A randomness test (or test for randomness), in data evaluation, is a test used to analyze the distribution of a set of data to see if it can be described as random (patternless). In stochastic modeling, as in some computer simulations, the hoped-for randomness of potential input data can be verified by a formal test for randomness, showing that the data are valid for use in simulation runs.
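As one concrete example of such a test, here is a sketch of the classic frequency (monobit) test; the function name is illustrative, not any standard's reference code:

    # Frequency (monobit) test: under the fair-coin null hypothesis the
    # +/-1 step sum is approximately N(0, n), so the two-sided p-value
    # is erfc(|s| / sqrt(2n)).
    import math

    def monobit_p_value(bits):
        n = len(bits)
        s = sum(1 if b else -1 for b in bits)
        return math.erfc(abs(s) / math.sqrt(2 * n))

    print(monobit_p_value([0, 1] * 500))   # balanced bits: p = 1.0
    print(monobit_p_value([1] * 1000))     # all ones: p near 0, reject randomness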
Pseudorandom generator
In theoretical computer science and cryptography, a pseudorandom generator (PRG) for a class of statistical tests is a deterministic procedure that maps a random seed to a longer pseudorandom string such that no statistical test in the class can distinguish the output of the generator from the uniform distribution.
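In symbols: a deterministic function G mapping {0,1}^ℓ to {0,1}^n with n > ℓ is an ε-PRG for a class of tests if, for every test T in the class,

\[
\bigl|\Pr[T(G(U_\ell)) = 1] - \Pr[T(U_n) = 1]\bigr| \le \varepsilon,
\]

where U_k denotes the uniform distribution on {0,1}^k.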
Kolmogorov structure function
In 1973, Andrey Kolmogorov proposed a non-probabilistic approach to statistics and model selection. Let each datum be a finite binary string and a model be a finite set of binary strings. Consider model classes consisting of models of given maximal Kolmogorov complexity.
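Kolmogorov's structure function of a string x can be stated as

\[
h_x(\alpha) = \min_{S} \{\, \log_2 |S| : x \in S,\; K(S) \le \alpha \,\},
\]

the log-size of the smallest model set S containing x among sets of Kolmogorov complexity at most α.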
Algorithmically random sequence
Intuitively, an algorithmically random sequence (or random sequence) is a sequence of binary digits that appears random to any algorithm running on a (prefix-free or not) universal Turing machine. The notion can be applied analogously to sequences over any finite alphabet (e.g. decimal digits).
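One standard characterization (the Levin–Schnorr theorem) states that an infinite binary sequence ω is Martin-Löf random if and only if its prefixes are incompressible with respect to prefix-free Kolmogorov complexity K:

\[
\exists c \;\forall n : K(\omega_1 \cdots \omega_n) \ge n - c.
\]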
Kolmogorov complexity
In algorithmic information theory (a subfield of computer science and mathematics), the Kolmogorov complexity of an object, such as a piece of text, is the length of a shortest computer program (in a predetermined programming language) that produces the object as output.
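Kolmogorov complexity itself is uncomputable, but any lossless compressor yields a computable upper bound, which is how the idea is often approximated in practice; a quick sketch of that idea (zlib is just a convenient stand-in):

    # Compressed size as a crude upper bound on Kolmogorov complexity:
    # regular data admits a short description, random data does not.
    import os, zlib

    structured = b"ab" * 5000            # highly regular 10000-byte string
    random_ish = os.urandom(10000)       # incompressible with high probability

    print(len(zlib.compress(structured)))   # much less than 10000
    print(len(zlib.compress(random_ish)))   # about 10000 or slightly more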
Minimum message length
Minimum message length (MML) is a Bayesian information-theoretic method for statistical model comparison and selection. It provides a formal information-theoretic restatement of Occam's razor: even when models are equal in their fit to the observed data, the one generating the most concise explanation of the data is more likely to be correct.
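In its simplest two-part form, the message length of a hypothesis H and data D is

\[
-\log_2 P(H) - \log_2 P(D \mid H),
\]

the bits needed to state the model followed by the bits needed to encode the data using it; MML favors the hypothesis minimizing this total.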
Minimum description length
Minimum Description Length (MDL) is a model selection principle holding that the shortest description of the data is the best model. MDL methods learn through a data compression perspective and are sometimes described as mathematical applications of Occam's razor.
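A toy sketch of the two-part-code idea behind MDL; the flat 8-bit model cost is an assumption made for illustration only:

    # Two-part code length: L(model) + L(data | model), for a Bernoulli
    # model with the bias p quantized to 8 bits.
    import math

    def description_length(bits):
        n, ones = len(bits), sum(bits)
        p = min(max(ones / n, 1e-9), 1 - 1e-9)    # clamp away from 0 and 1
        model_cost = 8                             # bits to state quantized p
        data_cost = -sum(math.log2(p if b else 1 - p) for b in bits)
        return model_cost + data_cost

    print(description_length([1] * 990 + [0] * 10))  # ~89 bits: bias exploited
    print(description_length([0, 1] * 500))          # ~1008 bits: no gain here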
Universality probability
Universality probability is an abstruse probability measure in computational complexity theory that concerns universal Turing machines.
Binary combinatory logic
Binary combinatory logic (BCL) is a computer programming language that provides a complete formulation of combinatory logic using only the symbols 0 and 1. Using the S and K combinators, complex Boolean algebra functions can be built up.
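BCL's grammar is tiny: 00 encodes K, 01 encodes S, and 1 followed by two terms encodes their application. A small parser sketch (the tuple representation is my choice):

    # Parse a BCL string: "00" -> 'K', "01" -> 'S',
    # "1" term term -> (term, term), i.e. application.
    def parse(s, i=0):
        if s[i] == '0':
            return ('K' if s[i + 1] == '0' else 'S'), i + 2
        left, i = parse(s, i + 1)    # '1' introduces an application
        right, i = parse(s, i)
        return (left, right), i

    term, _ = parse("10001")   # the application (K S)
    print(term)                # ('K', 'S')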
Computational indistinguishability
In computational complexity and cryptography, two families of distributions are computationally indistinguishable if no efficient algorithm can tell the difference between them except with negligible probability.
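In its common simplified form: ensembles {X_n} and {Y_n} are computationally indistinguishable if for every probabilistic polynomial-time distinguisher D there is a negligible function negl with

\[
\bigl|\Pr[D(X_n) = 1] - \Pr[D(Y_n) = 1]\bigr| \le \mathrm{negl}(n).
\]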
K-trivial set
In mathematics, a set of natural numbers is called a K-trivial set if its initial segments, viewed as binary strings, are easy to describe: their prefix-free Kolmogorov complexity is as low as possible, close to that of a computable set.
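Concretely, a set A is K-trivial if there is a constant c such that, for every n,

\[
K(A \upharpoonright n) \le K(n) + c,
\]

where A ↾ n is the first n bits of A's characteristic sequence and K is prefix-free Kolmogorov complexity.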
Chain rule for Kolmogorov complexity
The chain rule for Kolmogorov complexity is an analogue of the chain rule for information entropy. It states that, up to logarithmic terms,

\[
K(X, Y) = K(X) + K(Y \mid X) + O(\log K(X, Y)).
\]

That is, the combined randomness of two sequences X and Y is the sum of the randomness of X plus whatever randomness is left in Y once X is known.