Category: Coding theory

Polar code (coding theory)
In information theory, a polar code is a linear block error-correcting code. The code construction is based on a multiple recursive concatenation of a short kernel code which transforms the physical channel into virtual outer channels. When the number of recursions becomes large, the virtual channels tend to either have high reliability or low reliability (in other words, they polarize), and the data bits are allocated to the most reliable channels.
Cyclic code
In coding theory, a cyclic code is a block code, where the circular shifts of each codeword give another word that belongs to the code. They are error-correcting codes that have algebraic properties that are convenient for efficient error detection and correction.
Hamming distance
In information theory, the Hamming distance between two strings of equal length is the number of positions at which the corresponding symbols are different. In other words, it measures the minimum number of substitutions required to change one string into the other, or the minimum number of errors that could have transformed one string into the other.
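A minimal illustration in Python (the helper name is ours, not a standard library function): the distance is simply a count of mismatched positions between two equal-length strings.

```python
def hamming_distance(a: str, b: str) -> int:
    """Number of positions where the two equal-length strings differ."""
    if len(a) != len(b):
        raise ValueError("strings must have equal length")
    return sum(x != y for x, y in zip(a, b))

assert hamming_distance("karolin", "kathrin") == 3
assert hamming_distance("1011101", "1001001") == 2
```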
Gibbs' inequality
In information theory, Gibbs' inequality is a statement about the information entropy of a discrete probability distribution. Several other bounds on the entropy of probability distributions are derived from Gibbs' inequality, including Fano's inequality.
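A quick numerical check of the inequality in Python (the distributions p and q below are arbitrary examples): the entropy of p never exceeds the cross-entropy of p relative to q, with equality only when p = q.

```python
import math

p = [0.5, 0.25, 0.25]   # arbitrary "true" distribution
q = [0.4, 0.4, 0.2]     # arbitrary other distribution on the same support

entropy_p = -sum(pi * math.log2(pi) for pi in p)
cross_entropy_pq = -sum(pi * math.log2(qi) for pi, qi in zip(p, q))

# Gibbs' inequality: -sum p_i log p_i <= -sum p_i log q_i
assert entropy_p <= cross_entropy_pq
print(entropy_p, cross_entropy_pq)   # 1.5 vs roughly 1.57
```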
Punctured code
In coding theory, puncturing is the process of removing some of the parity bits after encoding with an error-correction code. This has the same effect as encoding with an error-correction code with a higher rate, or less redundancy.
Arbitrarily varying channel
An arbitrarily varying channel (AVC) is a communication channel model used in coding theory, and was first introduced by Blackwell, Breiman, and Thomasian. This particular channel has unknown parameters that can change over time, and these changes may not have a uniform pattern during the transmission of a codeword.
Package-merge algorithm
The package-merge algorithm is an O(nL)-time algorithm for finding an optimal length-limited Huffman code for a given distribution on a given alphabet of size n, where no code word is longer than L. It is a greedy algorithm, and a generalization of Huffman's original algorithm.
Multiple description coding
Multiple description coding (MDC) in computing is a coding technique that fragments a single media stream into n substreams (n ≥ 2) referred to as descriptions. The packets of each description are routed over multiple, (partially) disjoint paths.
Forney algorithm
In coding theory, the Forney algorithm (or Forney's algorithm) calculates the error values at known error locations. It is used as one of the steps in decoding BCH codes and Reed–Solomon codes (a subclass of BCH codes).
Kraft–McMillan inequality
In coding theory, the Kraft–McMillan inequality gives a necessary and sufficient condition for the existence of a prefix code (in Leon G. Kraft's version) or a uniquely decodable code (in Brockway McMillan's version) for a given set of codeword lengths.
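A small sketch of the condition for a binary alphabet (the helper name is illustrative): a prefix or uniquely decodable code with codeword lengths l_i exists if and only if the sum of 2^(-l_i) is at most 1.

```python
def kraft_sum(lengths, r=2):
    """Kraft sum of the given codeword lengths over an alphabet of size r."""
    return sum(r ** -l for l in lengths)

assert kraft_sum([1, 2, 3, 3]) == 1.0   # realizable, e.g. 0, 10, 110, 111
assert kraft_sum([1, 1, 2]) > 1         # no prefix code has these lengths
```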
Plotkin bound
In the mathematics of coding theory, the Plotkin bound, named after Morris Plotkin, is a limit (or bound) on the maximum possible number of codewords in binary codes of given length n and given minimum distance d.
BCH code
In coding theory, the Bose–Chaudhuri–Hocquenghem codes (BCH codes) form a class of cyclic error-correcting codes that are constructed using polynomials over a finite field (also called Galois field).
Online codes
In computer science, online codes are an example of rateless erasure codes. These codes can encode a message into a number of symbols such that knowledge of any fraction of them allows one to recover the original message (with high probability).
Line code
In telecommunication, a line code is a pattern of voltage, current, or photons used to represent digital data transmitted down a communication channel or written to a storage medium. This repertoire of signals is usually called a constrained code in data storage systems.
Linear code
In coding theory, a linear code is an error-correcting code for which any linear combination of codewords is also a codeword. Linear codes are traditionally partitioned into block codes and convolutional codes.
Rank error-correcting code
In coding theory, rank codes (also called Gabidulin codes) are non-binary linear error-correcting codes defined over the rank metric rather than the Hamming metric. They provide a systematic way of building codes that can detect and correct multiple random rank errors.
Sardinas–Patterson algorithm
In coding theory, the Sardinas–Patterson algorithm is a classical algorithm for determining in polynomial time whether a given variable-length code is uniquely decodable, named after August Albert Sardinas and George W. Patterson, who published it in 1953.
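A compact Python sketch of the test, written from the textbook description rather than any reference implementation: repeatedly form sets of "dangling suffixes" and declare the code not uniquely decodable if one of these sets ever contains a full codeword.

```python
def dangling_suffixes(A, B):
    """Nonempty suffixes w such that b + w = a for some a in A, b in B."""
    return {a[len(b):] for a in A for b in B if a != b and a.startswith(b)}

def is_uniquely_decodable(code):
    code = set(code)
    s = dangling_suffixes(code, code)    # the set S1 of the algorithm
    seen = set()
    while s and not (s & code):
        if frozenset(s) in seen:         # suffix sets repeat: uniquely decodable
            return True
        seen.add(frozenset(s))
        s = dangling_suffixes(code, s) | dangling_suffixes(s, code)
    return not s                         # empty set -> decodable; codeword found -> not

assert is_uniquely_decodable({"0", "10", "110"})        # a prefix code
assert not is_uniquely_decodable({"0", "01", "10"})     # "010" has two parsings
```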
Hamming code
In computer science and telecommunication, Hamming codes are a family of linear error-correcting codes. Hamming codes can detect one-bit and two-bit errors, or correct one-bit errors without detection of uncorrected errors.
Signed-digit representation
In mathematical notation for numbers, a signed-digit representation is a positional numeral system with a set of signed digits used to encode the integers. Signed-digit representation can be used to accomplish fast addition of integers because it can eliminate chains of dependent carries.
Berlekamp–Welch algorithm
The Berlekamp–Welch algorithm, also known as the Welch–Berlekamp algorithm, is named for Elwyn R. Berlekamp and Lloyd R. Welch. This is a decoder algorithm that efficiently corrects errors in Reed–Solomon codes.
Singleton bound
In coding theory, the Singleton bound, named after Richard Collom Singleton, is a relatively crude upper bound on the size of an arbitrary block code with block length n, size M and minimum distance d.
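A one-line check of the bound in Python (the helper name is illustrative): a q-ary code of length n and minimum distance d has at most q^(n - d + 1) codewords; codes meeting the bound with equality are called MDS codes.

```python
def singleton_bound(q, n, d):
    """Maximum number of codewords allowed by the Singleton bound."""
    return q ** (n - d + 1)

# The Hamming(7,4) code (q=2, n=7, d=3) has 2**4 = 16 codewords, within the bound of 32.
assert 2 ** 4 <= singleton_bound(2, 7, 3)
```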
Burst error-correcting code
In coding theory, burst error-correcting codes employ methods of correcting burst errors, which are errors that occur in many consecutive bits rather than occurring in bits independently of each other.
Preparata code
In coding theory, the Preparata codes form a class of non-linear double-error-correcting codes. They are named after Franco P. Preparata who first described them in 1968. Although non-linear over GF(2), the Preparata codes are linear over Z4 with the Lee distance.
Canonical Huffman code
In computer science and information theory, a canonical Huffman code is a particular type of Huffman code with unique properties which allow it to be described in a very compact manner. Data compressors benefit from this because the code can be communicated by transmitting only the list of codeword lengths.
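A small sketch, assuming valid Huffman code lengths are already known (the symbols and lengths below are made up): canonical codewords are assigned by sorting on (length, symbol) and counting upward, shifting the counter left whenever the length increases.

```python
def canonical_codes(lengths):
    """lengths: dict symbol -> code length; returns dict symbol -> bit string."""
    code, prev_len, result = 0, 0, {}
    for sym, length in sorted(lengths.items(), key=lambda kv: (kv[1], kv[0])):
        code <<= (length - prev_len)          # make room when codes get longer
        result[sym] = format(code, f"0{length}b")
        code += 1
        prev_len = length
    return result

print(canonical_codes({"a": 1, "b": 2, "c": 3, "d": 3}))
# {'a': '0', 'b': '10', 'c': '110', 'd': '111'}
```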
Berger code
In telecommunication, a Berger code is a unidirectional error detecting code, named after its inventor, J. M. Berger. Berger codes can detect all unidirectional errors. Unidirectional errors are errors that only change ones to zeros or only change zeros to ones within a code word, such as occur in some asymmetric channels.
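A minimal sketch of one common Berger-code variant, in which the check symbol is the number of zero bits in the information word written in binary (the function name is ours).

```python
import math

def berger_encode(info_bits: str) -> str:
    """Append a Berger check symbol: the count of zero bits, in binary."""
    check_len = math.ceil(math.log2(len(info_bits) + 1))
    zeros = info_bits.count("0")
    return info_bits + format(zeros, f"0{check_len}b")

print(berger_encode("1101"))   # one zero -> check '001', codeword '1101001'
```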
Recursive indexing
Recursive indexing is an algorithm used to represent large numeric values using members of a relatively small set. Recursive indexing writes the successive differences of the number after extracting the maximum value of the set from the number.
Reed–Muller code
Reed–Muller codes are error-correcting codes that are used in wireless communications applications, particularly in deep-space communication. Moreover, the proposed 5G standard relies on the closely related polar codes for error correction in the control channel.
Shannon's source coding theorem
In information theory, Shannon's source coding theorem (or noiseless coding theorem) establishes the limits to possible data compression, and the operational meaning of the Shannon entropy. Named after Claude Shannon, it shows that, in the limit as the length of a stream of i.i.d. data tends to infinity, it is impossible to compress the data so that the code rate (average number of bits per symbol) is less than the Shannon entropy of the source without virtually certain information loss.
Standard array
In coding theory, a standard array (or Slepian array) is a q^(n−k)-by-q^k array that lists all elements of a particular vector space. Standard arrays are used to decode linear codes; i.e. to find the corresponding codeword for any received vector.
Justesen code
In coding theory, Justesen codes form a class of error-correcting codes that have a constant rate, constant relative distance, and a constant alphabet size. Before the Justesen error correction codes were discovered, no error correction code was known that had all three of these parameters constant.
Blackwell channel
The Blackwell channel is a deterministic broadcast channel model used in coding theory and information theory. It was first proposed by mathematician David Blackwell. In this model, a transmitter transmits one of three symbols to two receivers.
Generalized minimum-distance decoding
In coding theory, generalized minimum-distance (GMD) decoding provides an efficient algorithm for decoding concatenated codes, which is based on using an errors-and-erasures decoder for the outer code.
Luby transform code
In computer science, Luby transform codes (LT codes) are the first class of practical fountain codes that are near-optimal erasure correcting codes. They were invented by Michael Luby in 1998 and published in 2002.
Hamming scheme
The Hamming scheme, named after Richard Hamming, is also known as the hyper-cubic association scheme, and it is the most important example for coding theory. In this scheme the set is that of binary vectors of length n, and two vectors are i-th associates if their Hamming distance is exactly i.
Zigzag code
In coding theory, a zigzag code is a type of linear error-correcting code introduced by Ping, Huang, and Phamdo (2001). They are defined by partitioning the input data into segments of fixed size and adding a sequence of check bits, where each check bit is the exclusive or of the bits in a single segment and of the previous check bit in the sequence.
Gray isometry
No description available.
Raptor code
In computer science, Raptor codes (rapid tornado; see Tornado codes) are the first known class of fountain codes with linear time encoding and decoding. They were invented by Amin Shokrollahi in 2000/2001 and were first published in 2004 as an extension of LT codes.
Z-channel (information theory)
In coding theory and information theory, a Z-channel (binary asymmetric channel) is a communications channel used to model the behaviour of some data storage systems.
Convolutional sparse coding
The convolutional sparse coding paradigm is an extension of the global sparse coding model, in which a redundant dictionary is modeled as a concatenation of circulant matrices. While the global sparsity prior describes a signal as a linear combination of a few dictionary atoms, the convolutional model applies the sparsity prior locally and in a shift-invariant manner.
Coding gain
In coding theory and related engineering problems, coding gain is the measure of the difference between the signal-to-noise ratio (SNR) levels of the uncoded system and the coded system required to reach the same bit error rate (BER) when used with the error-correcting code.
Coding theory
Coding theory is the study of the properties of codes and their respective fitness for specific applications. Codes are used for data compression, cryptography, error detection and correction, data transmission and data storage.
Noisy-channel coding theorem
In information theory, the noisy-channel coding theorem (sometimes Shannon's theorem or Shannon's limit) establishes that for any given degree of noise contamination of a communication channel, it is possible to communicate discrete data (digital information) nearly error-free up to a computable maximum rate through the channel.
Parvaresh–Vardy code
Parvaresh–Vardy codes are a family of error-correcting codes first described in 2005 by Farzad Parvaresh and Alexander Vardy. They can be used for efficient list-decoding.
Hexacode
In coding theory, the hexacode is a length 6 linear code of dimension 3 over the Galois field GF(4) of 4 elements. It is a 3-dimensional subspace of the vector space of dimension 6 over GF(4), and it contains 45 codewords of weight 4, 18 codewords of weight 6 and the zero word.
Even code
A binary code is called an even code if the Hamming weight of each of its codewords is even. A cyclic even code has a generator polynomial that includes (1 + x) as a factor. Furthermore, a binary code is called doubly even if the Hamming weight of each of its codewords is divisible by 4.
Comma code
A comma code is a type of prefix-free code in which a comma, a particular symbol or sequence of symbols, occurs at the end of a code word and never occurs otherwise. This is an intuitive way to mark where one codeword ends and the next begins.
Hamming(7,4)
In coding theory, Hamming(7,4) is a linear error-correcting code that encodes four bits of data into seven bits by adding three parity bits. It is a member of a larger family of Hamming codes, but the term Hamming code often refers to this specific code that Richard W. Hamming introduced in 1950.
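A small worked example with one systematic choice of generator and parity-check matrices (other equivalent matrices are in common use): encode four data bits, inject a single-bit error, and correct it by matching the syndrome against a column of H.

```python
import numpy as np

G = np.array([[1,0,0,0, 1,1,0],
              [0,1,0,0, 1,0,1],
              [0,0,1,0, 0,1,1],
              [0,0,0,1, 1,1,1]])          # 4x7 generator matrix (systematic form)
H = np.array([[1,1,0,1, 1,0,0],
              [1,0,1,1, 0,1,0],
              [0,1,1,1, 0,0,1]])          # 3x7 parity-check matrix; H @ G.T = 0 (mod 2)

def encode(data4):
    return (np.array(data4) @ G) % 2

def decode(received7):
    r = np.array(received7).copy()
    syndrome = (H @ r) % 2
    if syndrome.any():                     # nonzero syndrome equals a column of H
        err_pos = next(i for i in range(7) if np.array_equal(H[:, i], syndrome))
        r[err_pos] ^= 1                    # flip the single erroneous bit
    return r[:4]                           # data bits occupy the first 4 positions

word = encode([1, 0, 1, 1])
word[2] ^= 1                               # inject a single-bit error
assert list(decode(word)) == [1, 0, 1, 1]
```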
Parity-check matrix
In coding theory, a parity-check matrix of a linear block code C is a matrix which describes the linear relations that the components of a codeword must satisfy. It can be used to decide whether a particular vector is a codeword, and is also used in decoding algorithms.
Coset leader
In coding theory, a coset leader is a word of minimum weight in any particular coset - that is, a word with the lowest number of non-zero entries. Sometimes there are several words of equal minimum weight in a coset, and in that case any one of them may be chosen to be the coset leader.
Generator matrix
In coding theory, a generator matrix is a matrix whose rows form a basis for a linear code. The codewords are all of the linear combinations of the rows of this matrix, that is, the linear code is the row space of its generator matrix.
Lee distance
In coding theory, the Lee distance is a distance between two strings x_1x_2…x_n and y_1y_2…y_n of equal length n over the q-ary alphabet {0, 1, …, q − 1} of size q ≥ 2. It is a metric defined as the sum over i of min(|x_i − y_i|, q − |x_i − y_i|). If q = 2 or q = 3 the Lee distance coincides with the Hamming distance.
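A direct translation of the definition into Python (the helper name is illustrative): each coordinate contributes the shorter of the two ways around the cyclic alphabet.

```python
def lee_distance(x, y, q):
    """Lee distance between sequences over the alphabet {0, ..., q-1}."""
    return sum(min((a - b) % q, (b - a) % q) for a, b in zip(x, y))

# Over q = 6: 3 -> 1 costs 2 steps, 0 -> 5 costs 1 step (wrap-around), 2 -> 2 costs 0.
assert lee_distance([3, 0, 2], [1, 5, 2], q=6) == 2 + 1 + 0
```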
Tanner graph
In coding theory, a Tanner graph, named after Michael Tanner, is a bipartite graph used to state constraints or equations which specify error correcting codes. In coding theory, Tanner graphs are used to construct longer codes from smaller ones.
Variable-length code
In coding theory a variable-length code is a code which maps source symbols to a variable number of bits. Variable-length codes can allow sources to be compressed and decompressed with zero error (lossless data compression) and still be read back symbol by symbol.
Berlekamp switching game
The Berlekamp switching game is a mathematical game proposed by American mathematician Elwyn Berlekamp. It has also been called the Gale–Berlekamp switching game, after David Gale, who also discovered it independently of Berlekamp.
Repetition code
In coding theory, the repetition code is one of the most basic error-correcting codes. In order to transmit a message over a noisy channel that may corrupt the transmission in a few places, the idea of the repetition code is to just repeat the message several times.
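A sketch of a rate-1/3 binary repetition code with majority-vote decoding (function names are ours): each bit is repeated three times, and one flipped bit per block is still corrected.

```python
def encode(bits, r=3):
    return [b for b in bits for _ in range(r)]

def decode(received, r=3):
    chunks = [received[i:i + r] for i in range(0, len(received), r)]
    return [1 if sum(c) > r // 2 else 0 for c in chunks]   # majority vote per block

noisy = encode([1, 0, 1])
noisy[1] ^= 1                       # corrupt one bit of the first block
assert decode(noisy) == [1, 0, 1]
```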
Goppa code
In mathematics, an algebraic geometric code (AG-code), otherwise known as a Goppa code, is a general type of linear code constructed by using an algebraic curve over a finite field. Such codes were introduced by Valerii Denisovich Goppa.
Factorization of polynomials over finite fields
In mathematics and computer algebra the factorization of a polynomial consists of decomposing it into a product of irreducible factors. This decomposition is theoretically possible and is unique for polynomials with coefficients in any field.
Polynomial code
In coding theory, a polynomial code is a type of linear code whose set of valid code words consists of those polynomials (usually of some fixed length) that are divisible by a given fixed polynomial (of shorter length, called the generator polynomial).
Systematic code
In coding theory, a systematic code is any error-correcting code in which the input data is embedded in the encoded output. Conversely, in a non-systematic code the output does not contain the input symbols.
Blahut–Arimoto algorithm
The term Blahut–Arimoto algorithm is often used to refer to a class of algorithms for computing numerically either the information theoretic capacity of a channel, the rate-distortion function of a source, or a source encoding (i.e. compression).
Bar product
In information theory, the bar product of two linear codes C2 ⊆ C1 is defined as C1 | C2 = {(c1 | c1 + c2) : c1 ∈ C1, c2 ∈ C2}, where (a | b) denotes the concatenation of a and b. If the code words in C1 are of length n, then the code words in C1 | C2 are of length 2n.
Griesmer bound
In the mathematics of coding theory, the Griesmer bound, named after James Hugo Griesmer, is a bound on the length of linear binary codes of dimension k and minimum distance d. There is also a very similar version for non-binary codes.
Soliton distribution
A soliton distribution is a type of discrete probability distribution that arises in the theory of erasure correcting codes, which use information redundancy to compensate for transmission errors manifesting as erasures.
Distributed source coding
Distributed source coding (DSC) is an important problem in information theory and communication. DSC problems regard the compression of multiple correlated information sources that do not communicate with each other.
Reed–Solomon error correction
Reed–Solomon codes are a group of error-correcting codes that were introduced by Irving S. Reed and Gustave Solomon in 1960. They have many applications, the most prominent of which include consumer technologies such as CDs, DVDs, Blu-ray discs and QR codes, as well as data transmission and storage systems.
List of algebraic coding theory topics
This is a list of algebraic coding theory topics.
Quadratic residue code
A quadratic residue code is a type of cyclic code.
Delsarte–Goethals code
The Delsarte–Goethals code is a type of error-correcting code.
Hamming bound
In mathematics and computer science, in the field of coding theory, the Hamming bound is a limit on the parameters of an arbitrary block code: it is also known as the sphere-packing bound or the volume bound from an interpretation in terms of packing balls in the Hamming metric into the space of all possible words.
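A short computation of the sphere-packing bound (the helper name is illustrative): a q-ary code of length n that corrects t errors has at most q^n divided by the volume of a Hamming ball of radius t codewords.

```python
from math import comb

def hamming_bound(n, t, q=2):
    """Upper bound on the number of codewords of a length-n, t-error-correcting code."""
    ball_volume = sum(comb(n, k) * (q - 1) ** k for k in range(t + 1))
    return q ** n // ball_volume

# The Hamming(7,4) code (t = 1) meets the bound exactly: 2**7 / (1 + 7) = 16 codewords.
assert hamming_bound(7, 1) == 16
```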
Packet erasure channel
The packet erasure channel is a communication channel model where sequential packets are either received or lost (at a known location). This channel model is closely related to the binary erasure channel.
Alternant code
In coding theory, alternant codes form a class of parameterised error-correcting codes which generalise the BCH codes.
Non-integer base of numeration
A non-integer representation uses non-integer numbers as the radix, or base, of a positional numeral system. For a non-integer radix β > 1, the value of x = d_n d_{n−1} … d_1 d_0 . d_{−1} d_{−2} … is x = d_n β^n + … + d_1 β + d_0 + d_{−1} β^(−1) + d_{−2} β^(−2) + …. The numbers d_i are non-negative integers less than β.
Tornado code
In coding theory, Tornado codes are a class of erasure codes that support error correction. Tornado codes require a constant C more redundant blocks than the more data-efficient Reed–Solomon erasure codes, but are much faster to generate and can fix erasures faster.
Enumerator polynomial
In coding theory, the weight enumerator polynomial of a binary linear code specifies the number of words of each possible Hamming weight. Let C be a binary linear code of length n. The weight distribution is the sequence of numbers A_w giving the number of codewords of weight w in C, for w = 0, 1, …, n.
Unary numeral system
The unary numeral system is the simplest numeral system to represent natural numbers: to represent a number N, a symbol representing 1 is repeated N times. In the unary system, the number 0 (zero) is represented by the empty string, that is, the absence of a symbol.
Ternary Golay code
In coding theory, the ternary Golay codes are two closely related error-correcting codes. The code generally known simply as the ternary Golay code is an [11, 6, 5]_3 code, that is, it is a linear code over a ternary alphabet; its relative distance is as large as it possibly can be for a ternary code, and hence it is a perfect code.
List decoding
In coding theory, list decoding is an alternative to unique decoding of error-correcting codes for large error rates. The notion was proposed by Elias in the 1950s. The main idea behind list decoding is that the decoding algorithm, instead of outputting a single possible message, outputs a list of possibilities one of which is correct.
Comma-free code
A comma-free code is a block code in which no concatenation of two code words contains a valid code word that overlaps both. Comma-free codes are also known as self-synchronizing block codes because no separate synchronization symbols are needed to locate the boundaries between code words.
Hadamard code
The Hadamard code is an error-correcting code named after Jacques Hadamard that is used for error detection and correction when transmitting messages over very noisy or unreliable channels. In 1971, the code was used to transmit photos of Mars back to Earth from the NASA space probe Mariner 9.
Gilbert–Varshamov bound for linear codes
The Gilbert–Varshamov bound for linear codes is related to the general Gilbert–Varshamov bound, which gives a lower bound on the maximal number of elements in an error-correcting code of a given block length and minimum Hamming distance.
Low-density parity-check code
In information theory, a low-density parity-check (LDPC) code is a linear error correcting code, a method of transmitting a message over a noisy transmission channel. An LDPC code is constructed using a sparse Tanner graph (a subclass of bipartite graphs).
Folded Reed–Solomon code
In coding theory, folded Reed–Solomon codes are like Reed–Solomon codes, which are obtained by mapping Reed–Solomon codewords over a larger alphabet by careful bundling of codeword symbols. Folded Reed–Solomon codes are also a special case of Parvaresh–Vardy codes.
Block code
In coding theory, block codes are a large and important family of error-correcting codes that encode data in blocks. There is a vast number of examples for block codes, many of which have a wide range of practical applications.
Belief propagation
Belief propagation, also known as sum–product message passing, is a message-passing algorithm for performing inference on graphical models, such as Bayesian networks and Markov random fields. It calculates the marginal distribution for each unobserved node, conditional on any observed nodes.
Binary symmetric channel
A binary symmetric channel (or BSCp) is a common communications channel model used in coding theory and information theory. In this model, a transmitter wishes to send a bit (a zero or a one), and the receiver will receive a bit. The bit will be flipped with a "crossover probability" of p, and otherwise is received correctly.
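A tiny simulation sketch (the function name and seed are arbitrary): each transmitted bit is flipped independently with probability p, so the empirical error rate approaches p for long sequences.

```python
import random

def bsc(bits, p, seed=0):
    """Pass a bit sequence through a BSC with crossover probability p."""
    rng = random.Random(seed)
    return [b ^ (rng.random() < p) for b in bits]

sent = [0, 1, 1, 0, 1] * 1000
received = bsc(sent, p=0.1)
error_rate = sum(s != r for s, r in zip(sent, received)) / len(sent)
print(error_rate)   # close to 0.1
```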
Long code (mathematics)
In theoretical computer science and coding theory, the long code is an error-correcting code that is locally decodable. Long codes have an extremely poor rate, but play a fundamental role in the theory of hardness of approximation.
Binary Goppa code
In mathematics and computer science, the binary Goppa code is an error-correcting code that belongs to the class of general Goppa codes originally described by Valerii Denisovich Goppa, but the binary structure gives it several mathematical advantages over non-binary variants.
Hamming space
In statistics and coding theory, a Hamming space (named after American mathematician Richard Hamming) is usually the set of all binary strings of length N. It is used in the theory of coding signals and transmission.
Zemor's decoding algorithm
In coding theory, Zemor's algorithm, designed and developed by Gilles Zemor, is a recursive low-complexity approach to code construction. It is an improvement over the algorithm of Sipser and Spielman.
Computationally bounded adversary
In information theory, the computationally bounded adversary problem is a different way of looking at the problem of sending data over a noisy channel. In previous models, the best that could be done was ensuring correct decoding for up to d/2 errors, where d is the Hamming distance of the code.
Noisy text
Noisy text is text with differences between the surface form of a coded representation of the text and the intended, correct, or original text. The noise may be due to typographic errors or colloquial usage, among other causes.
Prefix code
A prefix code is a type of code system distinguished by its possession of the "prefix property", which requires that there is no whole code word in the system that is a prefix (initial segment) of any other code word in the system.
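A brute-force check of the prefix property in Python (the helper name is ours): no codeword may begin another codeword.

```python
def is_prefix_code(codewords):
    """True if no codeword is a prefix of another codeword."""
    words = list(codewords)
    return not any(a != b and b.startswith(a) for a in words for b in words)

assert is_prefix_code({"0", "10", "110", "111"})
assert not is_prefix_code({"0", "01", "11"})    # "0" is a prefix of "01"
```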
Covering code
In coding theory, a covering code is a set of elements (called codewords) in a space, with the property that every element of the space is within a fixed distance of some codeword.
Hamming weight
The Hamming weight of a string is the number of symbols that are different from the zero-symbol of the alphabet used. It is thus equivalent to the Hamming distance from the all-zero string of the same length.
Zyablov bound
In coding theory, the Zyablov bound is a lower bound on the rate and relative distance that are achievable by concatenated codes.
Spherical code
In geometry and coding theory, a spherical code with parameters (n,N,t) is a set of N points on the unit hypersphere in n dimensions for which the dot product of unit vectors from the origin to any two points is less than or equal to t.
Johnson bound
In applied mathematics, the Johnson bound (named after Selmer Martin Johnson) is a limit on the size of error-correcting codes, as used in coding theory for data transmission or communications.
Deletion channel
A deletion channel is a communications channel model used in coding theory and information theory. In this model, a transmitter sends a bit (a zero or a one), and the receiver either receives the bit or does not receive anything, without being notified that the bit was dropped.
Elias Bassalygo bound
The Elias Bassalygo bound is a mathematical limit used in coding theory for error correction during data transmission or communications.
Linear network coding
In computer networking, linear network coding is a technique in which intermediate nodes transmit data from source nodes to sink nodes by means of linear combinations of the packets they receive. Linear network coding may be used to improve a network's throughput, efficiency, and scalability.
Decoding methods
In coding theory, decoding is the process of translating received messages into codewords of a given code. There have been many common methods of mapping messages to codewords. These are often used to recover messages sent over a noisy channel, such as a binary symmetric channel.
Erasure code
In coding theory, an erasure code is a forward error correction (FEC) code under the assumption of bit erasures (rather than bit errors), which transforms a message of k symbols into a longer message (code word) with n symbols such that the original message can be recovered from a subset of the n symbols.
Homomorphic signatures for network coding
Network coding has been shown to optimally use bandwidth in a network, maximizing information flow, but the scheme is inherently vulnerable to pollution attacks by malicious nodes in the network.
Concatenated error correction code
In coding theory, concatenated codes form a class of error-correcting codes that are derived by combining an inner code and an outer code. They were conceived in 1966 by Dave Forney as a solution to the problem of finding a code that has both exponentially decreasing error probability with increasing block length and polynomial-time decoding complexity.
Group code
In coding theory, group codes are a type of code. Group codes consist of linear block codes which are subgroups of G^n, where G is a finite Abelian group. A systematic group code is a code over G^n of order |G|^k defined by n − k homomorphisms which determine the parity-check symbols.
Binary erasure channel
In coding theory and information theory, a binary erasure channel (BEC) is a communications channel model. A transmitter sends a bit (a zero or a one), and the receiver either receives the bit correctly, or with some probability receives a message that the bit was not received ("erased").
Guruswami–Sudan list decoding algorithm
In coding theory, list decoding is an alternative to unique decoding of error-correcting codes in the presence of many errors. If a code has relative distance δ, then it is possible in principle to recover an encoded message when up to a δ/2 fraction of the codeword symbols are corrupted.
Srivastava code
In coding theory, Srivastava codes, formulated by Professor J. N. Srivastava, form a class of parameterised error-correcting codes which are a special case of alternant codes.
Fuzzy extractor
Fuzzy extractors are a method that allows biometric data to be used as inputs to standard cryptographic techniques, to enhance computer security. "Fuzzy", in this context, refers to the fact that the fixed values required for cryptography are extracted from values close to, but not identical to, the original key, without compromising the security required.
First Johnson bound
No description available.
Gilbert–Varshamov bound
In coding theory, the Gilbert–Varshamov bound (due to Edgar Gilbert and independently Rom Varshamov) is a limit on the parameters of a (not necessarily linear) code. It is occasionally known as the Gilbert–Shannon–Varshamov bound, but the name Gilbert–Varshamov bound is by far the most common.
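A small computation of the bound (the helper name is illustrative): A_q(n, d), the maximum number of codewords in a q-ary code of length n and minimum distance d, is at least q^n divided by the volume of a Hamming ball of radius d − 1.

```python
from math import comb

def gv_lower_bound(n, d, q=2):
    """Gilbert-Varshamov lower bound on A_q(n, d), rounded up to an integer."""
    ball_volume = sum(comb(n, j) * (q - 1) ** j for j in range(d))
    return -(-q ** n // ball_volume)        # ceiling division

# At least ceil(2**7 / 29) = 5 binary codewords of length 7 and distance 3 exist
# (the Hamming(7,4) code in fact achieves 16).
assert gv_lower_bound(7, 3) == 5
```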
Barker code
In telecommunication technology, a Barker code, or Barker sequence, is a finite sequence of digital values with the ideal autocorrelation property. It is used as a synchronising pattern between the sender and receiver of a stream of data.
Second Johnson bound
No description available.
Fountain code
In coding theory, fountain codes (also known as rateless erasure codes) are a class of erasure codes with the property that a potentially limitless sequence of encoding symbols can be generated from a given set of source symbols such that the original source symbols can ideally be recovered from any subset of the encoding symbols of size equal to or only slightly larger than the number of source symbols.
Dual code
In coding theory, the dual code of a linear code C ⊆ F_q^n is the linear code defined by C⊥ = {x ∈ F_q^n : ⟨x, c⟩ = 0 for all c ∈ C}, where ⟨·, ·⟩ is a scalar product. In linear algebra terms, the dual code is the annihilator of C with respect to the bilinear form ⟨·, ·⟩.
Unary coding
Unary coding, or the unary numeral system, also sometimes called thermometer code, is an entropy encoding that represents a natural number, n, with a code of length n + 1 (or n), usually n ones followed by a zero (if the natural number is understood as a non-negative integer) or n − 1 ones followed by a zero (if the natural number is understood as strictly positive).
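A minimal encode/decode sketch using the "n ones then a terminating zero" convention (function names are ours).

```python
def unary_encode(n: int) -> str:
    """Encode a non-negative integer as n ones followed by a zero."""
    return "1" * n + "0"

def unary_decode(bits: str) -> int:
    """The value is the number of ones before the first zero."""
    return bits.index("0")

assert unary_encode(4) == "11110"
assert unary_decode("11110") == 4
```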
Grammar-based code
Grammar-based codes or Grammar-based compression are compression algorithms based on the idea of constructing a context-free grammar (CFG) for the string to be compressed. Examples include universal lossless data compression algorithms.
Triangular network coding
In coding theory, triangular network coding (TNC) is a network coding based packet coding scheme. Previously, packet coding for network coding was done using linear network coding (LNC).
Expander code
In coding theory, expander codes form a class of error-correcting codes that are constructed from bipartite expander graphs. Along with Justesen codes, expander codes are of particular interest since they have a constant positive rate, a constant positive relative distance, and a constant alphabet size.