Category: Computational learning theory

Unique negative dimension
Unique negative dimension (UND) is a complexity measure for the model of learning from positive examples. The unique negative dimension of a class C of concepts is the size of a maximum subclass D ⊆ C such that for every concept c ∈ D, we have ⋂(D ∖ {c}) ∖ c ≠ ∅; that is, each concept c in D has a "unique negative" point lying in every other concept of D but not in c.
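Assuming the definition in which every concept of the chosen subclass must have a unique negative point (one contained in all the other concepts of the subclass but not in it), the dimension of a tiny class can be found by exhaustive search. A minimal sketch, exponential and intended only for hand-checkable examples:

```python
from itertools import combinations

def has_unique_negatives(subclass, domain):
    # Every concept c must have a "unique negative": a point lying in
    # all other concepts of the subclass but outside c itself.
    for c in subclass:
        others = [set(d) for d in subclass if d is not c]
        common = set.intersection(*others) if others else set(domain)
        if not (common - set(c)):
            return False
    return True

def unique_negative_dimension(concept_class, domain):
    # Brute force over all subclasses; exponential, tiny inputs only.
    concepts = list(concept_class)
    best = 0
    for k in range(1, len(concepts) + 1):
        for subclass in combinations(concepts, k):
            if has_unique_negatives(subclass, domain):
                best = k
    return best

# Concepts over {1, 2, 3}: each omits exactly one point, which serves as
# its unique negative example within the full class.
C = [frozenset({2, 3}), frozenset({1, 3}), frozenset({1, 2})]
print(unique_negative_dimension(C, {1, 2, 3}))  # 3
```

For the class above the whole class qualifies: the point missing from each concept lies in both of the others, so the UND equals 3.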
Bondy's theorem
In mathematics, Bondy's theorem is a bound on the number of elements needed to distinguish the sets in a family of sets from each other. It belongs to the field of combinatorics, and is named after John Adrian Bondy, who published it in 1972.
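Bondy's theorem guarantees that n distinct sets can already be told apart by restricting attention to at most n − 1 elements of the universe. A brute-force sketch with illustrative helper names (exponential, small families only):

```python
from itertools import combinations

def smallest_distinguishing_set(family, universe):
    # Smallest S ⊆ universe whose traces A ∩ S are pairwise distinct
    # across the (distinct) sets A of the family.
    universe = sorted(universe)
    for k in range(len(universe) + 1):
        for S in combinations(universe, k):
            S = set(S)
            traces = {frozenset(a & S) for a in family}
            if len(traces) == len(family):
                return S

# Four distinct sets over {1, 2, 3}; two elements already distinguish them.
F = [frozenset({1}), frozenset({2}), frozenset({3}), frozenset({1, 2, 3})]
print(smallest_distinguishing_set(F, {1, 2, 3}))  # {1, 2}
```

Here the traces on {1, 2} are {1}, {2}, ∅, and {1, 2}, all distinct, comfortably within the theorem's bound of n − 1 = 3 elements.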
Vapnik–Chervonenkis theory
Vapnik–Chervonenkis theory (also known as VC theory) was developed during 1960–1990 by Vladimir Vapnik and Alexey Chervonenkis. The theory is a form of computational learning theory, which attempts to explain the learning process from a statistical point of view.
Witness set
In computational learning theory, let C be a concept class over a domain X and c be a concept in C. A subset S of X is a witness set for c in C if the labels c assigns to S distinguish c from every other concept in the class (i.e., c is the only concept in C consistent with c's labels on S).
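The definition can be checked directly, representing each concept as its set of positive points (the function and variable names here are illustrative, not from any library):

```python
def is_witness_set(S, c, concept_class):
    # S witnesses c if c is the only concept in the class agreeing with
    # c's labels (membership / non-membership) on every point of S.
    consistent = [d for d in concept_class
                  if all((x in d) == (x in c) for x in S)]
    return consistent == [c]

# Singleton concepts over the domain {1, 2, 3}.
C = [frozenset({1}), frozenset({2}), frozenset({3})]
c = frozenset({1})
print(is_witness_set({2, 3}, c, C))  # True: only c labels both 2 and 3 negative
print(is_witness_set({2}, c, C))     # False: {3} also labels 2 negative
```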
Cover's theorem
Cover's theorem is a statement in computational learning theory and is one of the primary theoretical motivations for the use of non-linear kernel methods in machine learning applications. It is named after the information theorist Thomas M. Cover, who stated it in 1965. Roughly, it says that a training set that is not linearly separable can, with high probability, be made linearly separable by mapping it into a higher-dimensional space via a non-linear transformation.
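The classic illustration is XOR: no line separates the four XOR-labeled points in the plane, but after a quadratic lift they become linearly separable. A sketch in which both the lift and the separating weights are hand-picked for this example:

```python
def phi(x1, x2):
    # Non-linear lift to 3 dimensions; the product term makes XOR separable.
    return (x1, x2, x1 * x2)

# XOR labeling of the unit square's corners: no line in the plane separates it.
points = [((0, 0), -1), ((1, 1), -1), ((0, 1), +1), ((1, 0), +1)]

# A separating hyperplane in the lifted space, found by hand.
w, b = (2.0, 2.0, -6.0), -1.0

def predict(x):
    z = phi(*x)
    return 1 if sum(wi * zi for wi, zi in zip(w, z)) + b > 0 else -1

print(all(predict(x) == y for x, y in points))  # True
```

Checking the four corners: (0,0) scores −1, (1,1) scores −3, and (0,1), (1,0) each score +1, so the hyperplane classifies all points correctly in the lifted space.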
Concept class
In computational learning theory in mathematics, a concept over a domain X is a total Boolean function over X. A concept class is a class of concepts. Concept classes are a subject of study in computational learning theory.
Teaching dimension
In computational learning theory, the teaching dimension of a concept class C is defined to be the maximum over all concepts c in C of the minimum size of a witness set for c in C. The teaching dimension of a finite concept class thus measures the worst-case number of labeled examples a teacher must present to uniquely identify a concept in the class.
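Assuming the definition above, the teaching dimension of a small finite class can be computed by brute force over candidate witness sets (exponential, illustrative only):

```python
from itertools import combinations

def min_witness_size(c, concept_class, domain):
    # Smallest S ⊆ domain on which c's labels rule out every other concept.
    domain = sorted(domain)
    for k in range(len(domain) + 1):
        for S in combinations(domain, k):
            if all(any((x in d) != (x in c) for x in S)
                   for d in concept_class if d != c):
                return k

def teaching_dimension(concept_class, domain):
    return max(min_witness_size(c, concept_class, domain)
               for c in concept_class)

# The empty set plus all singletons over {1, 2, 3}.
C = [frozenset(), frozenset({1}), frozenset({2}), frozenset({3})]
print(teaching_dimension(C, {1, 2, 3}))  # 3
```

Each singleton is taught by a single positive example, but the empty concept needs all three points labeled negative to rule out every singleton, so the teaching dimension is 3.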
Sample exclusion dimension
In computational learning theory, sample exclusion dimensions arise in the study of exact concept learning with queries. In algorithmic learning theory, a concept over a domain X is a Boolean function over X.
Algorithmic learning theory
Algorithmic learning theory is a mathematical framework for analyzing machine learning problems and algorithms. Synonyms include formal learning theory and algorithmic inductive inference. Algorithmic learning theory differs from statistical learning theory in that it does not make use of statistical assumptions and analysis.
Error tolerance (PAC learning)
In PAC learning, error tolerance refers to the ability of an algorithm to learn when the examples it receives have been corrupted in some way. This is a common and important issue, since in many applications it is not possible to access noise-free data.
Distribution learning theory
The distributional learning theory, or learning of probability distributions, is a framework in computational learning theory. It was proposed by Michael Kearns, Yishay Mansour, Dana Ron, Ronitt Rubinfeld, Robert Schapire, and Linda Sellie in 1994.
Representer theorem
In statistical learning theory, a representer theorem is any of several related results stating that a minimizer of a regularized empirical risk functional defined over a reproducing kernel Hilbert space can be represented as a finite linear combination of kernel products evaluated on the input points of the training set.
Win–stay, lose–switch
In psychology, game theory, statistics, and machine learning, win–stay, lose–switch (also win–stay, lose–shift) is a heuristic learning strategy used to model learning in decision situations. It was first proposed, as an improvement over random choice in bandit problems, by Herbert Robbins in 1952.
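A minimal simulation on a two-armed Bernoulli bandit (the success probabilities and round count are illustrative): stay on the current arm after a success, switch arms after a failure.

```python
import random

def win_stay_lose_switch(success_probs, rounds, seed=0):
    # Two-armed Bernoulli bandit: keep the current arm after a win,
    # switch to the other arm after a loss.
    rng = random.Random(seed)
    arm, wins = 0, 0
    for _ in range(rounds):
        if rng.random() < success_probs[arm]:
            wins += 1          # win: stay on the same arm
        else:
            arm = 1 - arm      # loss: switch arms
    return wins / rounds

rate = win_stay_lose_switch([0.8, 0.2], rounds=10000)
print(rate > 0.5)  # True: beats picking an arm uniformly at random
```

In the long run the strategy spends most rounds on the better arm (the arm sequence is a Markov chain whose stationary distribution favors the arm with the higher stay probability), giving a win rate of roughly 0.68 here, versus 0.5 for uniformly random arm choice.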
Vapnik–Chervonenkis dimension
In Vapnik–Chervonenkis theory, the Vapnik–Chervonenkis (VC) dimension is a measure of the capacity (complexity, expressive power, richness, or flexibility) of a set of functions that can be learned by a statistical binary classification algorithm. It is defined as the cardinality of the largest set of points that the algorithm can shatter.
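For a finite class over a finite domain, the VC dimension can be computed directly from the definition: find the largest subset of the domain on which the class realizes every possible labeling. A brute-force sketch (exponential, illustrative only):

```python
from itertools import combinations

def shatters(concept_class, points):
    # The class shatters `points` if it realizes all 2^|points| labelings.
    labelings = {tuple(x in c for x in points) for c in concept_class}
    return len(labelings) == 2 ** len(points)

def vc_dimension(concept_class, domain):
    # Largest cardinality of a shattered subset of the domain.
    best = 0
    for k in range(1, len(domain) + 1):
        if any(shatters(concept_class, pts)
               for pts in combinations(sorted(domain), k)):
            best = k
    return best

# All subsets of {1, 2, 3} with at most two elements.
C = [frozenset(s) for s in [(), (1,), (2,), (3,), (1, 2), (1, 3), (2, 3)]]
print(vc_dimension(C, {1, 2, 3}))  # 2
```

Every pair of points is shattered (all four labelings occur), but the all-positive labeling of {1, 2, 3} would require the missing set {1, 2, 3}, so the VC dimension is 2.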
Probably approximately correct learning
In computational learning theory, probably approximately correct (PAC) learning is a framework for mathematical analysis of machine learning. It was proposed in 1984 by Leslie Valiant. In this framework, the learner receives samples and must select a generalization function (called the hypothesis) from a certain class of possible functions. The goal is that, with high probability (the "probably" part), the selected function will have low generalization error (the "approximately correct" part).
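A standard toy instance is learning a threshold concept on [0, 1]: output the tightest threshold consistent with the sample. The names and parameters below are illustrative:

```python
import random

def pac_learn_threshold(theta, n_samples, seed=1):
    # Unknown target: c(x) = 1 iff x >= theta; examples drawn uniformly.
    rng = random.Random(seed)
    xs = [rng.random() for _ in range(n_samples)]
    positives = [x for x in xs if x >= theta]
    # Tightest consistent hypothesis: threshold at the smallest positive
    # example seen (or 1.0 if there were none).
    return min(positives) if positives else 1.0

theta = 0.3
h = pac_learn_threshold(theta, n_samples=1000)
# Since h >= theta, the generalization error is exactly the probability
# mass of the misclassified region [theta, h).
error = h - theta
```

The PAC reasoning: the error exceeds ε only if no sample lands in the mass-ε region just above θ, which happens with probability (1 − ε)^m; so m on the order of (1/ε)·log(1/δ) samples suffice for error at most ε with probability at least 1 − δ.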
Language identification in the limit
Language identification in the limit is a formal model for inductive inference of formal languages, mainly by computers (see machine learning and induction of regular languages). It was introduced by E. Mark Gold in a 1967 paper with the same title.
Occam learning
In computational learning theory, Occam learning is a model of algorithmic learning where the objective of the learner is to output a succinct representation of received training data. This is closely related to probably approximately correct (PAC) learning, where the learner is evaluated on its predictive power on a test set.
Induction of regular languages
In computational learning theory, induction of regular languages refers to the task of learning a formal description (e.g. grammar) of a regular language from a given set of example strings. Although E. Mark Gold showed that not every regular language can be learned this way (see language identification in the limit), approaches have been investigated for a variety of modified learning models.
Growth function
The growth function, also called the shatter coefficient or the shattering number, measures the richness of a set family. It is especially used in the context of statistical learning theory, where it measures the complexity of a hypothesis class.
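For a finite family the growth function can be computed directly from its definition: for each m, take the maximum, over all m-point subsets of the domain, of the number of distinct labelings the family induces (exponential brute force, illustrative only):

```python
from itertools import combinations

def growth_function(concept_class, domain, m):
    # Π(m): the maximum number of distinct labelings the class induces
    # on any m points of the domain.
    best = 0
    for pts in combinations(sorted(domain), m):
        labelings = {tuple(x in c for x in pts) for c in concept_class}
        best = max(best, len(labelings))
    return best

# The empty set plus all singletons over {1, 2, 3, 4}.
C = [frozenset(), frozenset({1}), frozenset({2}),
     frozenset({3}), frozenset({4})]
D = {1, 2, 3, 4}
print([growth_function(C, D, m) for m in (1, 2, 3)])  # [2, 3, 4]
```

This class has VC dimension 1 (no pair can receive the all-positive labeling), and consistent with the Sauer–Shelah lemma its growth function is bounded by m + 1, which it attains here.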
Computational learning theory
In computer science, computational learning theory (or just learning theory) is a subfield of artificial intelligence devoted to studying the design and analysis of machine learning algorithms.
Natarajan dimension
In the theory of probably approximately correct (PAC) learning, the Natarajan dimension characterizes the complexity of learning a set of functions, generalizing the Vapnik–Chervonenkis dimension from Boolean-valued to multi-class functions.
Shattered set
The concept of shattered sets plays an important role in Vapnik–Chervonenkis theory, also known as VC-theory. Shattering and VC-theory are used in the study of empirical processes as well as in statistical computational learning theory.
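A concrete instance: closed intervals on the real line shatter any two points but no three, since the labeling "in, out, in" is impossible for an interval; hence intervals have VC dimension 2. A sketch whose candidate-endpoint grid assumes the test points are at least 1 apart:

```python
def interval_labelings(points):
    # All labelings of `points` realizable by a closed interval [a, b].
    # Candidate endpoints at +/- 0.5 around each point suffice when the
    # points are spaced at least 1 apart.
    candidates = [p + eps for p in points for eps in (-0.5, 0.5)]
    realizable = {tuple(False for _ in points)}  # the empty interval
    for a in candidates:
        for b in candidates:
            realizable.add(tuple(a <= x <= b for x in points))
    return realizable

def shattered_by_intervals(points):
    return len(interval_labelings(points)) == 2 ** len(points)

print(shattered_by_intervals([1.0, 2.0]))       # True
print(shattered_by_intervals([1.0, 2.0, 3.0]))  # False
```

On three points, exactly one of the eight labelings (positive, negative, positive) is missing, which is what caps the VC dimension of intervals at 2.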