Generative topographic map

Generative topographic map (GTM) is a machine learning method that is a probabilistic counterpart of the self-organizing map (SOM). It is provably convergent and does not require a shrinking neighborhood or a decreasing step size.

Random neural network

The random neural network (RNN) is a mathematical representation of an interconnected network of neurons or cells which exchange spiking signals. It was invented by Erol Gelenbe and is linked to the G-network model of queueing networks.

Oja's rule

Oja's learning rule, or simply Oja's rule, named after Finnish computer scientist Erkki Oja, is a model of how neurons in the brain or in artificial neural networks change connection strength, or learn, over time.
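As a sketch of the idea, Oja's rule updates a weight vector as w ← w + η·y·(x − y·w), where y = w·x; the extra −y²·w term normalizes the weights so they converge to a unit vector along the first principal component of the inputs. The function name, learning rate, and toy data below are illustrative choices, not part of the original formulation.

```python
import random

def oja_step(w, x, lr=0.01):
    """One Oja update: y = w.x ; w += lr * y * (x - y * w)."""
    y = sum(wi * xi for wi, xi in zip(w, x))
    return [wi + lr * y * (xi - y * wi) for wi, xi in zip(w, x)]

random.seed(0)
w = [0.5, 0.5]
for _ in range(2000):
    s = random.gauss(0, 1)
    x = [2.0 * s, 1.0 * s]  # inputs vary along the direction (2, 1)
    w = oja_step(w, x)

# w approaches a unit vector along the principal direction (2, 1) / sqrt(5)
norm = sum(wi * wi for wi in w) ** 0.5
```

Unlike the plain Hebbian rule (w += η·y·x), the weights here stay bounded without any explicit renormalization step.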

Deep learning

Deep learning (also known as deep structured learning) is part of a broader family of machine learning methods based on artificial neural networks with representation learning. Learning can be supervised, semi-supervised or unsupervised.

Oscillatory neural network

An oscillatory neural network (ONN) is an artificial neural network that uses coupled oscillators as neurons. Oscillatory neural networks are closely linked to the Kuramoto model, and are inspired by the phenomenon of neural oscillations in the brain.

Teacher forcing

Teacher forcing is an algorithm for training the weights of recurrent neural networks (RNNs). It involves feeding observed sequence values (i.e. ground-truth samples) back into the RNN after each step, rather than feeding back the network's own outputs.
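A minimal way to see the mechanism: under teacher forcing, the input the RNN sees at step t is the ground-truth target from step t−1, not the model's previous prediction. The helper name and start-token convention below are illustrative.

```python
def teacher_forced_inputs(targets, start_token=0.0):
    """Inputs fed to the RNN at each step under teacher forcing:
    the ground-truth value from the previous step, shifted right by one."""
    return [start_token] + list(targets[:-1])

seq = [1.0, 2.0, 3.0, 4.0]
inputs = teacher_forced_inputs(seq)
```

In free-running (non-teacher-forced) mode, each element of `inputs` would instead be the model's own output from the previous step.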

AlterEgo

AlterEgo is a wearable silent-speech input-output device developed by the MIT Media Lab. The device is attached around the head, neck, and jawline and translates the neuromuscular signals produced during internal speech into words.

Types of artificial neural networks

There are many types of artificial neural networks (ANN). Artificial neural networks are computational models inspired by biological neural networks, and are used to approximate functions that are generally unknown.

Large memory storage and retrieval neural network

A large memory storage and retrieval neural network (LAMSTAR) is a fast deep learning neural network of many layers that can use many filters simultaneously. These filters may be nonlinear, stochastic, logic, non-stationary, or even non-analytical.

CoDi

CoDi is a cellular automaton (CA) model for spiking neural networks (SNNs). CoDi is an acronym for Collect and Distribute, referring to the signals and spikes in a neural network. CoDi uses a von Neumann neighborhood modified for three-dimensional space.

Evolutionary acquisition of neural topologies

Evolutionary acquisition of neural topologies (EANT/EANT2) is an evolutionary reinforcement learning method that evolves both the topology and weights of artificial neural networks.

Neocognitron

The neocognitron is a hierarchical, multilayered artificial neural network proposed by Kunihiko Fukushima in 1979. It has been used for Japanese handwritten character recognition and other pattern recognition tasks.

Radial basis function

A radial basis function (RBF) is a real-valued function whose value depends only on the distance between the input and some fixed point: either the origin, so that φ(x) = φ(‖x‖), or some other fixed point c, called a center, so that φ(x) = φ(‖x − c‖).
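The commonly used Gaussian RBF makes the definition concrete: its value is determined entirely by the distance between the input and the center. The function name and the width parameter `sigma` below are illustrative choices.

```python
import math

def gaussian_rbf(x, center, sigma=1.0):
    """Gaussian RBF: phi(x) = exp(-||x - c||^2 / (2 * sigma^2)).
    Depends only on the distance from x to the center c."""
    r2 = sum((xi - ci) ** 2 for xi, ci in zip(x, center))
    return math.exp(-r2 / (2.0 * sigma ** 2))
```

Radial symmetry means any two inputs at equal distance from the center get the same value, which is what makes RBFs useful as basis functions in RBF networks.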

Neural network Gaussian process

Bayesian networks are a modeling tool for assigning probabilities to events, and thereby characterizing the uncertainty in a model's predictions. Deep learning and artificial neural networks are approaches used in machine learning to build computational models which learn from training examples.

Vanishing gradient problem

In machine learning, the vanishing gradient problem is encountered when training artificial neural networks with gradient-based learning methods and backpropagation. In such methods, during each training iteration, each of the network's weights receives an update proportional to the partial derivative of the error function with respect to that weight.

Spiking neural network

Spiking neural networks (SNNs) are artificial neural networks that more closely mimic natural neural networks. In addition to neuronal and synaptic state, SNNs incorporate the concept of time into their operating model.

LipNet

LipNet is a deep neural network for visual speech recognition created by Yannis Assael, Nando de Freitas, and colleagues at the University of Oxford. The technique was outlined in a 2016 paper.

Hierarchical temporal memory

Hierarchical temporal memory (HTM) is a biologically constrained machine intelligence technology developed by Numenta. It was originally described in the 2004 book On Intelligence by Jeff Hawkins with Sandra Blakeslee.

Self-organizing map

A self-organizing map (SOM) or self-organizing feature map (SOFM) is an unsupervised machine learning technique used to produce a low-dimensional (typically two-dimensional) representation of a higher-dimensional data set while preserving the topological structure of the data.

Autoassociative memory

Autoassociative memory, also known as auto-association memory or an autoassociation network, is any type of memory that can retrieve a piece of data from only a tiny sample of itself.

Neural Networks (journal)

Neural Networks is a monthly, peer-reviewed, scientific journal and an official journal of the International Neural Network Society, the European Neural Network Society, and the Japanese Neural Network Society.

Adaptive resonance theory

Adaptive resonance theory (ART) is a theory developed by Stephen Grossberg and Gail Carpenter on aspects of how the brain processes information. It describes a number of neural network models which use supervised and unsupervised learning methods.

LeNet

LeNet is a convolutional neural network structure proposed by LeCun et al. in 1998. In general, LeNet refers to LeNet-5, a simple convolutional neural network.

Neural cryptography

Neural cryptography is a branch of cryptography dedicated to analyzing the application of stochastic algorithms, especially artificial neural network algorithms, for use in encryption and cryptanalysis.

Convolutional deep belief network

In computer science, a convolutional deep belief network (CDBN) is a type of deep artificial neural network composed of multiple layers of convolutional restricted Boltzmann machines stacked together.

Swish function

The swish function is a mathematical function defined as swish(x) = x · sigmoid(βx) = x / (1 + e^(−βx)), where β is either a constant or a trainable parameter depending on the model. For β = 1, the function becomes equivalent to the Sigmoid Linear Unit (SiLU).
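The definition above translates directly into code; the function name below is an illustrative choice, and β = 1 gives the SiLU.

```python
import math

def swish(x, beta=1.0):
    """swish(x) = x * sigmoid(beta * x) = x / (1 + exp(-beta * x))."""
    return x / (1.0 + math.exp(-beta * x))
```

Like ReLU, swish is near-identity for large positive inputs and near-zero for large negative ones, but it is smooth and non-monotonic near the origin.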

Helmholtz machine

The Helmholtz machine (named after Hermann von Helmholtz and his concept of Helmholtz free energy) is a type of artificial neural network that can account for the hidden structure of a set of data by being trained to create a generative model of the original data.

DexNet

Dex-net is a robotic manipulator. It uses a Grasp Quality Convolutional Neural Network to learn how to grasp unusually shaped objects.

Echo state network

An echo state network (ESN) is a type of reservoir computer that uses a recurrent neural network with a sparsely connected hidden layer (with typically 1% connectivity). The connectivity and weights of hidden neurons are fixed and randomly assigned.

Neural network synchronization protocol

The Neural network synchronization protocol, abbreviated as NNSP, runs at the application layer of the OSI model on top of TCP/IP. Aiming at secure communication, the protocol's design makes use of key exchange based on neural network synchronization.

Feed forward (control)

A feed forward (sometimes written feedforward) is an element or pathway within a control system that passes a controlling signal from a source in its external environment to a load elsewhere in its external environment.

Multimodal learning

Multimodal learning attempts to model the combination of different modalities of data, often arising in real-world applications. An example of multimodal data is data that combines text with images or audio.

HyperNEAT

Hypercube-based NEAT, or HyperNEAT, is a generative encoding that evolves artificial neural networks (ANNs) with the principles of the widely used NeuroEvolution of Augmenting Topologies (NEAT) algorithm.

Instantaneously trained neural networks

Instantaneously trained neural networks are feedforward artificial neural networks that create a new hidden neuron node for each novel training sample. The weights to this hidden neuron separate out not only this training sample but also samples that are near it, providing generalization.

Recursive neural network

A recursive neural network is a kind of deep neural network created by applying the same set of weights recursively over a structured input, to produce a structured prediction over variable-size input structures by traversing a given structure in topological order.

Cerebellar model articulation controller

The cerebellar model arithmetic computer (CMAC) is a type of neural network based on a model of the mammalian cerebellum. It is also known as the cerebellar model articulation controller. It is a type of associative memory.

Softmax function

The softmax function, also known as softargmax or normalized exponential function, converts a vector of K real numbers into a probability distribution of K possible outcomes. It is a generalization of the logistic function to multiple dimensions.
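A standard implementation subtracts the maximum before exponentiating, a numerical-stability trick that leaves the result unchanged because softmax is invariant to adding a constant to every input. The function name below is an illustrative choice.

```python
import math

def softmax(z):
    """softmax(z)_i = exp(z_i) / sum_j exp(z_j), shifted by max(z) for stability."""
    m = max(z)
    exps = [math.exp(v - m) for v in z]
    total = sum(exps)
    return [e / total for e in exps]

p = softmax([1.0, 2.0, 3.0])
```

The outputs are positive, sum to 1, and preserve the ordering of the inputs, which is why softmax is the usual final layer for multi-class classification.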

Optical neural network

An optical neural network is a physical implementation of an artificial neural network with optical components. Early optical neural networks used a photorefractive volume hologram to interconnect arrays of input neurons to arrays of output neurons.

Promoter based genetic algorithm

The promoter based genetic algorithm (PBGA) is a genetic algorithm for neuroevolution developed by F. Bellas and R.J. Duro at the GII research group of the University of Coruña, in Spain. It evolves variable-size neural networks.

Quantum neural network

Quantum neural networks are computational neural network models which are based on the principles of quantum mechanics. The first ideas on quantum neural computation were published independently in 1995 by Subhash Kak and Ron Chrisley.

Synaptic weight

In neuroscience and computer science, synaptic weight refers to the strength or amplitude of a connection between two nodes, corresponding in biology to the amount of influence the firing of one neuron has on another.

Physical neural network

A physical neural network is a type of artificial neural network in which an electrically adjustable material is used to emulate the function of a neural synapse or a higher-order (dendritic) neuron model.

Word embedding

In natural language processing (NLP), word embedding is a term used for the representation of words for text analysis, typically in the form of a real-valued vector that encodes the meaning of the word.

Rprop

Rprop, short for resilient backpropagation, is a learning heuristic for supervised learning in feedforward artificial neural networks. It is a first-order optimization algorithm, created by Martin Riedmiller and Heinrich Braun in 1992.

Infomax

Infomax is an optimization principle for artificial neural networks and other information processing systems. It prescribes that a function that maps a set of input values I to a set of output values O should be chosen or learned so as to maximize the average Shannon mutual information between I and O.

Seq2seq

Seq2seq is a family of machine learning approaches used for natural language processing. Applications include language translation, image captioning, conversational models and text summarization.

Electricity price forecasting

Electricity price forecasting (EPF) is a branch of energy forecasting which focuses on predicting the spot and forward prices in wholesale electricity markets. Over the last 15 years, electricity price forecasts have become a fundamental input to energy companies' decision-making.

Rectifier (neural networks)

In the context of artificial neural networks, the rectifier or ReLU (rectified linear unit) activation function is defined as the positive part of its argument: f(x) = max(0, x), where x is the input to a neuron.
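The definition is a one-liner; the function name below is an illustrative choice.

```python
def relu(x):
    """Positive part of the argument: max(0, x)."""
    return x if x > 0 else 0.0
```

The piecewise-linear form makes its derivative trivial (1 for positive inputs, 0 for negative ones), one reason ReLU is cheap to train with backpropagation.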

Neuroevolution

Neuroevolution, or neuro-evolution, is a form of artificial intelligence that uses evolutionary algorithms to generate artificial neural networks (ANN), parameters, and rules. It is most commonly applied in artificial life, general game playing, and evolutionary robotics.

Pulse-coupled networks

Pulse-coupled networks or pulse-coupled neural networks (PCNNs) are neural models proposed by modeling a cat's visual cortex, and developed for high-performance biomimetic image processing. In 1989, Eckhorn introduced a neural model to emulate the mechanism of the cat's visual cortex.

Interactive activation and competition networks

Interactive activation and competition (IAC) networks are artificial neural networks used to model memory and intuitive generalizations. They are made up of nodes, or artificial neurons.

IPO underpricing algorithm

IPO underpricing is the increase in stock value from the initial offering price to the first-day closing price. Many believe that underpriced IPOs leave money on the table for corporations, but some believe that underpricing is inevitable.

Multi-surface method

The multi-surface method (MSM) is a form of decision making using the concept of piecewise-linear separability of datasets to categorize data.

Dilution (neural networks)

Dilution and dropout (also called DropConnect) are regularization techniques for reducing overfitting in artificial neural networks by preventing complex co-adaptations on training data. They are an efficient way of performing model averaging with neural networks.
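A minimal sketch of inverted dropout, the common training-time variant: each unit is zeroed with probability `rate`, and survivors are scaled by 1/(1−rate) so the expected activation is unchanged and no rescaling is needed at inference. The function name and seeded default RNG are illustrative choices.

```python
import random

def dropout(values, rate, rng=None):
    """Inverted dropout: zero each unit with probability `rate`,
    scale survivors by 1 / (1 - rate) to preserve the expected activation."""
    rng = rng if rng is not None else random.Random(0)
    keep = 1.0 - rate
    return [v / keep if rng.random() < keep else 0.0 for v in values]

out = dropout([1.0] * 1000, rate=0.5)
```

At test time dropout is simply disabled, which is where the model-averaging interpretation comes from: the full network approximates an ensemble of the thinned subnetworks seen during training.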

Grossberg network

The Grossberg network is an artificial neural network introduced by Stephen Grossberg. It is a self-organizing, competitive network based on continuous time. Grossberg is a neuroscientist and biomedical engineer.

Liquid state machine

A liquid state machine (LSM) is a type of reservoir computer that uses a spiking neural network. An LSM consists of a large collection of units (called nodes, or neurons). Each node receives time-varying input from external sources (the inputs) as well as from other nodes.

Group method of data handling

Group method of data handling (GMDH) is a family of inductive algorithms for computer-based mathematical modeling of multi-parametric datasets that features fully automatic structural and parametric optimization of models.

Hyper basis function network

In machine learning, a Hyper basis function network, or HyperBF network, is a generalization of the radial basis function (RBF) network concept, where a Mahalanobis-like distance is used instead of the Euclidean distance.

ALOPEX

ALOPEX (an acronym from "ALgorithms Of Pattern EXtraction") is a correlation based machine learning algorithm first proposed by Tzanakou and Harth in 1974.

NETtalk (artificial neural network)

NETtalk is an artificial neural network. It is the result of research carried out in the mid-1980s by Terrence Sejnowski and Charles Rosenberg. The intent behind NETtalk was to construct simplified models that might shed light on the complexity of learning human-level cognitive tasks.

NVDLA

The NVIDIA Deep Learning Accelerator (NVDLA) is an open-source hardware neural network AI accelerator created by Nvidia. The accelerator is written in Verilog and is configurable and scalable to meet many different architecture needs.

Word2vec

Word2vec is a technique for natural language processing (NLP) published in 2013. The word2vec algorithm uses a neural network model to learn word associations from a large corpus of text. Once trained, such a model can detect synonymous words or suggest additional words for a partial sentence.

WaveNet

WaveNet is a deep neural network for generating raw audio. It was created by researchers at London-based AI firm DeepMind. The technique, outlined in a paper in September 2016, is able to generate relatively realistic-sounding human-like voices by directly modelling waveforms.

Sigmoid function

A sigmoid function is a mathematical function having a characteristic "S"-shaped curve or sigmoid curve. A common example is the logistic function, defined by the formula σ(x) = 1 / (1 + e^(−x)).

Committee machine

A committee machine is a type of artificial neural network using a divide and conquer strategy in which the responses of multiple neural networks (experts) are combined into a single response.

Bidirectional associative memory

Bidirectional associative memory (BAM) is a type of recurrent neural network. BAM was introduced by Bart Kosko in 1988. There are two types of associative memory, auto-associative and hetero-associative.

Deep lambertian networks

Deep Lambertian Networks (DLN) combine deep belief networks with the Lambertian reflectance assumption to deal with the challenges posed by illumination variation in visual perception.

Linde–Buzo–Gray algorithm

The Linde–Buzo–Gray algorithm (introduced by Yoseph Linde, Andrés Buzo and Robert M. Gray in 1980) is a vector quantization algorithm to derive a good codebook. It is similar to the k-means method in data clustering.

Computational neurogenetic modeling

Computational neurogenetic modeling (CNGM) is concerned with the study and development of dynamic neuronal models for modeling brain functions with respect to genes and dynamic interactions between genes.

Ni1000

The Ni1000 is an artificial neural network chip developed by Nestor and Intel. It is Intel's second-generation neural network chip and its first all-digital one. The chip is aimed at image analysis applications.

Cellular neural network

In computer science and machine learning, cellular neural networks (CNN) or cellular nonlinear networks (CNN) are a parallel computing paradigm similar to neural networks, with the difference that communication is allowed between neighbouring units only.

Artisto

Artisto is a video processing application with art and movie-effect filters based on neural network algorithms, created in 2016 by Mail.ru Group machine learning specialists.

Backpropagation

In machine learning, backpropagation (backprop, BP) is a widely used algorithm for training feedforward artificial neural networks. Generalizations of backpropagation exist for other artificial neural networks, and for functions generally.

Quickprop

Quickprop is an iterative method for determining the minimum of the loss function of an artificial neural network, following an algorithm inspired by Newton's method.

Neural network software

Neural network software is used to simulate, research, develop, and apply artificial neural networks, software concepts adapted from biological neural networks, and in some cases a wider array of adaptive systems.

Universal approximation theorem

In the mathematical theory of artificial neural networks, universal approximation theorems are results that establish the density of an algorithmically generated class of functions within a given function space of interest.

Generalized Hebbian algorithm

The generalized Hebbian algorithm (GHA), also known in the literature as Sanger's rule, is a linear feedforward neural network model for unsupervised learning with applications primarily in principal components analysis.

Artificial neuron

An artificial neuron is a mathematical function conceived as a model of biological neurons in a neural network. Artificial neurons are the elementary units of an artificial neural network. An artificial neuron receives one or more inputs and sums them to produce an output, or activation.

Leabra

Leabra stands for local, error-driven and associative, biologically realistic algorithm. It is a model of learning which balances Hebbian and error-driven learning with other network-derived characteristics.

Outstar

Outstar is an output from a neurode of the hidden layer of a neural network architecture that serves as an input to the output layer: each hidden-layer neurode provides input to the neurodes of the output layer.

BCPNN

A Bayesian Confidence Propagation Neural Network (BCPNN) is an artificial neural network inspired by Bayes' theorem, which regards neural computation and processing as probabilistic inference.

Large width limits of neural networks

Artificial neural networks are a class of models used in machine learning, inspired by biological neural networks. They are the core component of modern deep learning algorithms. Computation in artificial neural networks is usually organized into sequential layers of artificial neurons.

Backpropagation through time

Backpropagation through time (BPTT) is a gradient-based technique for training certain types of recurrent neural networks. It can be used to train Elman networks. The algorithm was independently derived by numerous researchers.

Growing self-organizing map

A growing self-organizing map (GSOM) is a growing variant of a self-organizing map (SOM). The GSOM was developed to address the issue of identifying a suitable map size in the SOM. It starts with a minimal number of nodes and grows new nodes on the boundary based on a heuristic.

Tensor product network

A tensor product network, in artificial neural networks, is a network that exploits the properties of tensors to model associative concepts such as variable assignment.

Winner-take-all (computing)

Winner-take-all is a computational principle applied in computational models of neural networks by which neurons compete with each other for activation. In the classical form, only the neuron with the highest activation stays active while all other neurons shut down.
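The classical form reduces to keeping the most active neuron and silencing the rest; the function name below is an illustrative choice.

```python
def winner_take_all(activations):
    """Keep only the most active neuron; set all others to zero."""
    winner = max(range(len(activations)), key=activations.__getitem__)
    return [a if i == winner else 0.0 for i, a in enumerate(activations)]
```

Softer variants (k-winners-take-all, or competition via lateral inhibition) keep several winners or let the suppression emerge from network dynamics rather than a hard argmax.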

Ablation (artificial intelligence)

In artificial intelligence (AI), particularly machine learning (ML), ablation is the removal of a component of an AI system. An ablation study investigates the performance of an AI system by removing certain components to understand their contribution to the overall system.

The Emotion Machine

The Emotion Machine: Commonsense Thinking, Artificial Intelligence, and the Future of the Human Mind is a 2006 book by cognitive scientist Marvin Minsky that elaborates and expands on Minsky's ideas from his earlier book The Society of Mind.

Synaptic transistor

A synaptic transistor is an electrical device that can learn in ways similar to a neural synapse. It optimizes its own properties for the functions it has carried out in the past. The device mimics the behavior of a biological synapse.

Semantic neural network

Semantic neural networks (SNN) are based on John von Neumann's neural network [von Neumann, 1966] and Nikolai Amosov's M-Network. There are limitations to the link topology of von Neumann's network.

European Neural Network Society

The European Neural Network Society (ENNS) is an association of scientists, engineers, students, and others seeking to learn about and advance understanding of artificial neural networks.

Delta rule

In machine learning, the delta rule is a gradient descent learning rule for updating the weights of the inputs to artificial neurons in a single-layer neural network. It is a special case of the more general backpropagation algorithm.
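For a linear neuron the delta rule reduces to w_i ← w_i + η·(t − y)·x_i, i.e. gradient descent on the squared error. The sketch below (function name, learning rate, and toy target t = 2x₁ − x₂ are illustrative choices) runs repeated passes over three training pairs.

```python
def delta_rule_epoch(w, data, lr=0.1):
    """One pass of the delta rule for a linear neuron: w_i += lr * (t - y) * x_i."""
    for x, t in data:
        y = sum(wi * xi for wi, xi in zip(w, x))
        err = t - y
        w = [wi + lr * err * xi for wi, xi in zip(w, x)]
    return w

# learn the target t = 2*x1 - x2 (a bias could be folded in as a constant input)
data = [([1.0, 0.0], 2.0), ([0.0, 1.0], -1.0), ([1.0, 1.0], 1.0)]
w = [0.0, 0.0]
for _ in range(200):
    w = delta_rule_epoch(w, data)
```

Because the training set is consistent with a linear model, the weights converge to the exact solution [2, −1].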

Connectionist temporal classification

Connectionist temporal classification (CTC) is a type of neural network output and associated scoring function, for training recurrent neural networks (RNNs) such as LSTM networks to tackle sequence problems where the timing is variable.

Catastrophic interference

Catastrophic interference, also known as catastrophic forgetting, is the tendency of an artificial neural network to abruptly and drastically forget previously learned information upon learning new information.

Google Neural Machine Translation

Google Neural Machine Translation (GNMT) is a neural machine translation (NMT) system developed by Google and introduced in November 2016, that uses an artificial neural network to increase fluency and accuracy in Google Translate.

Jpred

Jpred v.4 is the latest version of the JPred Protein Secondary Structure Prediction Server, which provides predictions by the JNet algorithm, one of the most accurate methods for secondary structure prediction.

Extension neural network

Extension neural network is a pattern recognition method developed by M. H. Wang and C. P. Hung in 2003 to classify instances of data sets. It combines artificial neural networks with concepts from extension theory.

Pruning (artificial neural network)

In the context of artificial neural networks, pruning is the practice of removing parameters (individually or in groups, such as by neuron) from an existing network.

Contrastive Hebbian learning

Contrastive Hebbian learning is a biologically plausible form of Hebbian learning. It is based on the contrastive divergence algorithm, which has been used to train a variety of energy-based latent variable models.

Artificial neural network

Artificial neural networks (ANNs), usually simply called neural networks (NNs) or neural nets, are computing systems inspired by the biological neural networks that constitute animal brains. An ANN is based on a collection of connected units or nodes called artificial neurons.

SqueezeNet

SqueezeNet is the name of a deep neural network for computer vision that was released in 2016. SqueezeNet was developed by researchers at DeepScale, the University of California, Berkeley, and Stanford University.

Residual neural network

A residual neural network (ResNet) is an artificial neural network (ANN). It is a gateless or open-gated variant of the HighwayNet, the first working very deep feedforward neural network with hundreds of layers.

Hybrid neural network

The term hybrid neural network can have two meanings: 1. biological neural networks interacting with artificial neuronal models, and 2. artificial neural networks with a symbolic part (or, conversely, symbolic computations with a connectionist part).

Reservoir computing

Reservoir computing is a framework for computation derived from recurrent neural network theory that maps input signals into higher-dimensional computational spaces through the dynamics of a fixed, non-linear system called a reservoir.

Computational cybernetics

Computational cybernetics is the integration of cybernetics and computational intelligence techniques. The term cybernetics entered the technical lexicon in the 1940s and 1950s.

Extreme learning machine

Extreme learning machines are feedforward neural networks for classification, regression, clustering, sparse approximation, compression and feature learning with a single layer or multiple layers of hidden nodes, where the parameters of the hidden nodes need not be tuned.

Compositional pattern-producing network

Compositional pattern-producing networks (CPPNs) are a variation of artificial neural networks (ANNs) whose architecture's evolution is guided by genetic algorithms. While ANNs often contain only sigmoid functions (and sometimes Gaussian functions), CPPNs can include both types of functions and many others.

Relation network

A relation network (RN) is an artificial neural network component with a structure that can reason about relations among objects. An example category of such relations is spatial relations (above, below, left of, right of).

U-matrix

The U-matrix (unified distance matrix) is a representation of a self-organizing map (SOM) where the Euclidean distance between the codebook vectors of neighboring neurons is depicted in a grayscale image.

DeepNude

No description available.

Learning rule

An artificial neural network's learning rule or learning process is a method, mathematical logic, or algorithm which improves the network's performance and/or training time. Usually, this rule is applied repeatedly over the network.

Neural gas

Neural gas is an artificial neural network, inspired by the self-organizing map and introduced in 1991 by Thomas Martinetz and Klaus Schulten. The neural gas is a simple algorithm for finding optimal data representations based on feature vectors.

Time aware long short-term memory

Time Aware LSTM (T-LSTM) is a long short-term memory (LSTM) unit capable of handling irregular time intervals in longitudinal patient records. T-LSTM was developed by researchers from Michigan State University.

Triplet loss

Triplet loss is a loss function for machine learning algorithms where a reference input (called anchor) is compared to a matching input (called positive) and a non-matching input (called negative). The distance from the anchor to the positive is minimized, while the distance from the anchor to the negative is maximized.
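A common form uses a margin: the loss is zero once the negative is farther from the anchor than the positive by at least the margin. The function name, squared-Euclidean distance, and margin value below are illustrative choices.

```python
def triplet_loss(anchor, positive, negative, margin=1.0):
    """max(d(a, p) - d(a, n) + margin, 0), with squared Euclidean distance."""
    d_pos = sum((a - p) ** 2 for a, p in zip(anchor, positive))
    d_neg = sum((a - n) ** 2 for a, n in zip(anchor, negative))
    return max(d_pos - d_neg + margin, 0.0)
```

Minimizing this loss pulls embeddings of matching pairs together and pushes non-matching pairs apart, the basic mechanism behind metric-learning systems such as face recognizers.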

Cover's theorem

Cover's theorem is a statement in computational learning theory and is one of the primary theoretical motivations for the use of non-linear kernel methods in machine learning applications. It is so termed after the information theorist Thomas M. Cover, who stated it in 1965.

Confabulation (neural networks)

A confabulation, also known as a false, degraded, or corrupted memory, is a stable pattern of activation in an artificial neural network or neural assembly that does not correspond to any previously learned pattern.

Artificial Intelligence System

Artificial Intelligence System (AIS) was a volunteer computing project undertaken by Intelligence Realm, Inc. with the long-term goal of simulating the human brain in real time, complete with artificial consciousness and artificial general intelligence.

Layer (deep learning)

A layer in a deep learning model is a structure or network topology in the model's architecture, which takes information from the previous layers and passes it to the next layer. There are several types of layers, such as convolutional, pooling, and fully connected layers.

Sentence embedding

Sentence embedding is the collective name for a set of techniques in natural language processing (NLP) where sentences are mapped to vectors of real numbers.

CLEVER score

The CLEVER (Cross Lipschitz Extreme Value for nEtwork Robustness) score is a way of measuring the robustness of an artificial neural network towards adversarial attacks.

Competitive learning

Competitive learning is a form of unsupervised learning in artificial neural networks, in which nodes compete for the right to respond to a subset of the input data. A variant of Hebbian learning, competitive learning works by increasing the specialization of each node in the network.

Lernmatrix

The Lernmatrix, an associative-memory-like architecture of an artificial neural network, was invented around 1960 by Karl Steinbuch.

Differentiable neural computer

In artificial intelligence, a differentiable neural computer (DNC) is a memory augmented neural network architecture (MANN), which is typically (but not by definition) recurrent in its implementation.

Learning vector quantization

In computer science, learning vector quantization (LVQ) is a prototype-based supervised classification algorithm. LVQ is the supervised counterpart of vector quantization systems.

Graph neural network

A graph neural network (GNN) is a class of artificial neural networks for processing data that can be represented as graphs. In the more general subject of "geometric deep learning", certain existing neural network architectures can be interpreted as GNNs operating on suitably defined graphs.

Soboleva modified hyperbolic tangent

The Soboleva modified hyperbolic tangent, also known as the (parametric) Soboleva modified hyperbolic tangent activation function ([P]SMHTAF), is a special S-shaped function based on the hyperbolic tangent.

Activation function

In artificial neural networks, the activation function of a node defines the output of that node given an input or set of inputs. A standard integrated circuit can be seen as a digital network of activation functions that can be "ON" (1) or "OFF" (0), depending on the input.

Memtransistor

The memtransistor is an experimental multi-terminal passive electronic component that might be used in the construction of artificial neural networks. It is a combination of the memristor and the transistor.

Hybrid Kohonen self-organizing map

In artificial neural networks, a hybrid Kohonen self-organizing map is a type of self-organizing map (SOM) named for the Finnish professor Teuvo Kohonen.

Gated recurrent unit

Gated recurrent units (GRUs) are a gating mechanism in recurrent neural networks, introduced in 2014 by Kyunghyun Cho et al. The GRU is like a long short-term memory (LSTM) with a forget gate, but has fewer parameters than an LSTM, as it lacks an output gate.

ADALINE

ADALINE (Adaptive Linear Neuron, later Adaptive Linear Element) is an early single-layer artificial neural network and the name of the physical device that implemented it. The network uses memistors.

Capsule neural network

A Capsule Neural Network (CapsNet) is a machine learning system that is a type of artificial neural network (ANN) that can be used to better model hierarchical relationships. The approach is an attempt to more closely mimic biological neural organization.

Backpropagation through structure

Backpropagation through structure (BPTS) is a gradient-based technique for training recursive neural networks (a superset of recurrent neural networks), described in a 1996 paper by Christoph Goller and Andreas Küchler.

Early stopping

In machine learning, early stopping is a form of regularization used to avoid overfitting when training a learner with an iterative method, such as gradient descent. Such methods update the learner so as to make it better fit the training data with each iteration.
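A common patience-based variant monitors validation loss and halts once it has failed to improve for a fixed number of consecutive epochs. The function name and patience value below are illustrative choices, not a standard API.

```python
def early_stop_epoch(val_losses, patience=2):
    """Return the epoch at which training halts: stop once validation loss
    has not improved for `patience` consecutive epochs."""
    best = float("inf")
    bad = 0
    for epoch, loss in enumerate(val_losses):
        if loss < best:
            best, bad = loss, 0
        else:
            bad += 1
            if bad >= patience:
                return epoch
    return len(val_losses) - 1  # never triggered: train to the last epoch
```

In practice one also keeps a checkpoint of the weights from the best epoch and restores them when stopping.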

Neuroevolution of augmenting topologies

NeuroEvolution of Augmenting Topologies (NEAT) is a genetic algorithm (GA) for the generation of evolving artificial neural networks (a neuroevolution technique) developed by Kenneth Stanley and Risto Miikkulainen.

Stochastic neural analog reinforcement calculator

SNARC (Stochastic Neural Analog Reinforcement Calculator) is a neural-net machine designed by Marvin Lee Minsky. Prompted by a letter from Minsky, George Armitage Miller gathered the funding for the project.

Mathematics of artificial neural networks

An artificial neural network (ANN) combines biological principles with advanced statistics to solve problems in domains such as pattern recognition and game-play. ANNs adopt the basic model of neurons connected to each other in a variety of ways.

History of artificial neural networks

The history of artificial neural networks (ANN) began with Warren McCulloch and Walter Pitts (1943), who created a computational model for neural networks based on algorithms called threshold logic.

Dehaene–Changeux model

The Dehaene–Changeux model (DCM), also known as the global neuronal workspace or global cognitive workspace model, is a part of Bernard Baars's "global workspace model" for consciousness.

Adaptive neuro fuzzy inference system

An adaptive neuro-fuzzy inference system or adaptive network-based fuzzy inference system (ANFIS) is a kind of artificial neural network based on the Takagi–Sugeno fuzzy inference system. The technique was developed in the early 1990s.

Perceptron

In machine learning, the perceptron (or McCulloch-Pitts neuron) is an algorithm for supervised learning of binary classifiers. A binary classifier is a function which can decide whether or not an input, represented by a vector of numbers, belongs to some specific class.
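The classic learning rule updates the weights only on mistakes: with targets t ∈ {−1, +1}, a misclassified example triggers w ← w + η·t·x. The sketch below (function names, epoch count, and the OR-gate data are illustrative choices) folds the bias in as a constant input; on linearly separable data such as OR, the rule is guaranteed to converge.

```python
def train_perceptron(data, epochs=20, lr=1.0):
    """Perceptron rule: on each mistake, w += lr * t * x (targets t in {-1, +1}).
    The bias is folded in as a constant 1.0 input."""
    n = len(data[0][0])
    w = [0.0] * (n + 1)
    for _ in range(epochs):
        for x, t in data:
            xb = list(x) + [1.0]
            y = 1 if sum(wi * xi for wi, xi in zip(w, xb)) > 0 else -1
            if y != t:
                w = [wi + lr * t * xi for wi, xi in zip(w, xb)]
    return w

def predict(w, x):
    xb = list(x) + [1.0]
    return 1 if sum(wi * xi for wi, xi in zip(w, xb)) > 0 else -1

# OR gate: linearly separable, so the perceptron converges
or_data = [([0.0, 0.0], -1), ([0.0, 1.0], 1), ([1.0, 0.0], 1), ([1.0, 1.0], 1)]
w = train_perceptron(or_data)
```

On non-separable data (the XOR gate, famously) a single perceptron cannot converge, which motivated multilayer networks.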

Hard sigmoid

In artificial intelligence, especially computer vision and artificial neural networks, a hard sigmoid is a non-smooth function used in place of a sigmoid function. Hard sigmoids retain the basic shape of a sigmoid while being cheaper to compute.
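Definitions vary between frameworks; one common piecewise-linear variant clips a line of slope 0.2 through 0.5, shown below as an illustrative sketch (the slope and intercept are one convention among several).

```python
def hard_sigmoid(x):
    """One common piecewise-linear approximation: clip(0.2 * x + 0.5, 0, 1)."""
    return max(0.0, min(1.0, 0.2 * x + 0.5))
```

The result matches the logistic sigmoid's saturation at 0 and 1 and its value of 0.5 at the origin, but uses only a multiply, an add, and two comparisons.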
