Computational Neuroscience
1. Introduction to Computational Neuroscience
2. Foundations in Neuroscience
3. Mathematical and Physical Foundations
4. Modeling Single Neurons
5. Synaptic Plasticity and Learning
6. Neural Coding
7. Modeling Neural Networks
8. Models of Learning and Memory
9. Models of Sensory and Motor Systems
10. Models of Higher Cognitive Functions
11. Tools and Techniques
6. Neural Coding
  6.1. What is a Neural Code
    6.1.1. Definition and significance
      6.1.1.1. Information representation
      6.1.1.2. Neural computation
    6.1.2. Types of neural codes
      6.1.2.1. Rate codes
      6.1.2.2. Temporal codes
      6.1.2.3. Population codes
      6.1.2.4. Sparse codes
    6.1.3. Coding principles
      6.1.3.1. Efficiency
      6.1.3.2. Robustness
      6.1.3.3. Flexibility
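The distinction between the code types listed above can be made concrete with a small sketch (not part of the original outline; spike times and window lengths are invented for illustration): the same spike train carries different information depending on whether it is read as a rate code (spike count per window) or as a temporal code (first-spike latency).

```python
# Illustrative sketch: one spike train, two readouts.
# All numbers are hypothetical and chosen only for illustration.

def rate_code(spike_times, window):
    """Rate code: information is carried by the spike count per time window."""
    return len(spike_times) / window  # spikes per second

def latency_code(spike_times, stimulus_onset=0.0):
    """Temporal code: information is carried by the first-spike latency."""
    return min(t for t in spike_times if t >= stimulus_onset) - stimulus_onset

spikes = [0.012, 0.045, 0.081, 0.130, 0.190]  # seconds after stimulus onset
print(rate_code(spikes, window=0.2))   # 25.0 spikes/s
print(latency_code(spikes))            # 0.012 s latency
```

A downstream neuron that integrates over 200 ms sees only the rate; one with fast, coincidence-sensitive dynamics can exploit the latency.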
  6.2. Rate Coding
    6.2.1. Firing Rate as Information Carrier
      6.2.1.1. Spike count codes
      6.2.1.2. Time window selection
      6.2.1.3. Rate estimation methods
    6.2.2. Tuning Curves
      6.2.2.1. Sensory tuning
        6.2.2.1.1. Orientation tuning
        6.2.2.1.2. Direction tuning
        6.2.2.1.3. Frequency tuning
      6.2.2.2. Motor tuning
        6.2.2.2.1. Movement direction
        6.2.2.2.2. Force coding
      6.2.2.3. Tuning curve shapes
        6.2.2.3.1. Gaussian tuning
        6.2.2.3.2. Cosine tuning
        6.2.2.3.3. Sigmoidal tuning
    6.2.3. Limitations of rate coding
      6.2.3.1. Temporal resolution
      6.2.3.2. Information loss
      6.2.3.3. Metabolic costs
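Two of the tuning curve shapes named in 6.2.2.3 can be sketched directly (an illustration, not from the outline; peak rates, preferred values, and widths are hypothetical): a Gaussian curve of the kind used for orientation tuning in visual cortex, and a cosine curve of the kind used for movement-direction tuning in motor cortex.

```python
import math

def gaussian_tuning(s, r_max=30.0, s_pref=90.0, sigma=20.0):
    """Gaussian tuning: firing rate peaks at the preferred stimulus value
    s_pref and falls off with width sigma (e.g. orientation in degrees)."""
    return r_max * math.exp(-0.5 * ((s - s_pref) / sigma) ** 2)

def cosine_tuning(theta, r0=20.0, r_max=30.0, theta_pref=0.0):
    """Cosine tuning: rate is modulated around a baseline r0 by the cosine
    of the angle between movement direction and the preferred direction."""
    return r0 + (r_max - r0) * math.cos(math.radians(theta - theta_pref))

print(gaussian_tuning(90.0))   # 30.0: peak rate at the preferred orientation
print(cosine_tuning(180.0))    # 10.0: minimum rate at the anti-preferred direction
```

Fitting such parametric curves to measured spike counts is one standard rate-estimation workflow; the chosen shape encodes an assumption about how the stimulus variable is represented.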
  6.3. Temporal Coding
    6.3.1. Spike Timing
      6.3.1.1. Precise timing
      6.3.1.2. Timing reliability
      6.3.1.3. Temporal precision limits
    6.3.2. Synchrony
      6.3.2.1. Spike synchronization
      6.3.2.2. Correlation strength
      6.3.2.3. Synchrony detection
    6.3.3. Phase-of-Firing Codes
      6.3.3.1. Oscillatory phase
      6.3.3.2. Phase precession
      6.3.3.3. Phase coding capacity
    6.3.4. Temporal precision
      6.3.4.1. Millisecond precision
      6.3.4.2. Jitter analysis
      6.3.4.3. Information content
    6.3.5. Burst Coding
      6.3.5.1. Burst detection
      6.3.5.2. Burst information
      6.3.5.3. Tonic vs. burst modes
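Synchrony detection (6.3.2.3) in its simplest form is coincidence counting: two spikes from different neurons are "synchronous" if they fall within a few milliseconds of each other. A minimal sketch, with invented spike times and a ±5 ms coincidence window chosen only for illustration (real analyses also correct for chance coincidences):

```python
def coincident_spikes(train_a, train_b, window=0.005):
    """Count spikes in train_a that have a partner in train_b within
    +/- window seconds. A crude synchrony measure for illustration."""
    return sum(1 for ta in train_a
               if any(abs(ta - tb) <= window for tb in train_b))

a = [0.010, 0.052, 0.100, 0.153]  # spike times (s), neuron A
b = [0.012, 0.080, 0.101, 0.200]  # spike times (s), neuron B
print(coincident_spikes(a, b))    # 2: the spikes near 10 ms and 100 ms coincide
```

Shrinking the window probes temporal precision directly: a code relying on millisecond synchrony survives `window=0.001`, while rate covariation on slow timescales does not.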
  6.4. Population Coding
    6.4.1. Encoding by Ensembles
      6.4.1.1. Distributed representation
      6.4.1.2. Population responses
      6.4.1.3. Ensemble dynamics
    6.4.2. Population Vectors
      6.4.2.1. Vector summation
      6.4.2.2. Preferred directions
      6.4.2.3. Decoding accuracy
    6.4.3. Sparse Coding
      6.4.3.1. Sparsity measures
      6.4.3.2. Lifetime sparsity
      6.4.3.3. Population sparsity
    6.4.4. Dense Coding
      6.4.4.1. Distributed activity
      6.4.4.2. Redundant coding
      6.4.4.3. Noise tolerance
    6.4.5. Redundancy and decorrelation
      6.4.5.1. Information redundancy
      6.4.5.2. Noise correlations
      6.4.5.3. Efficient coding
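The population vector scheme of 6.4.2 combines vector summation with preferred directions: each neuron contributes a unit vector along its preferred direction, weighted by its firing rate, and the sum points at the encoded direction. A sketch with a hypothetical four-neuron population (preferred directions and rates are invented for illustration):

```python
import math

def population_vector(rates, preferred_dirs_deg):
    """Population vector decoding: sum each neuron's preferred-direction
    unit vector weighted by its firing rate, then read off the angle."""
    x = sum(r * math.cos(math.radians(d))
            for r, d in zip(rates, preferred_dirs_deg))
    y = sum(r * math.sin(math.radians(d))
            for r, d in zip(rates, preferred_dirs_deg))
    return math.degrees(math.atan2(y, x)) % 360.0

# Four neurons with preferred directions at 0, 90, 180, 270 degrees.
prefs = [0.0, 90.0, 180.0, 270.0]
rates = [10.0, 10.0, 0.0, 0.0]  # equal drive toward 0 and 90 degrees
print(population_vector(rates, prefs))  # ~45.0: the vector sum splits the difference
```

Decoding accuracy (6.4.2.3) grows with population size because independent noise in individual rates averages out in the vector sum, while shared noise correlations (6.4.5.2) limit that benefit.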
  6.5. Decoding Neural Activity
    6.5.1. Spike-Triggered Average
      6.5.1.1. Reverse correlation
      6.5.1.2. Receptive field mapping
      6.5.1.3. Linear filters
    6.5.2. Bayesian Decoding
      6.5.2.1. Posterior probability
      6.5.2.2. Prior knowledge
      6.5.2.3. Likelihood functions
    6.5.3. Linear Classifiers
      6.5.3.1. Support vector machines
      6.5.3.2. Linear discriminant analysis
      6.5.3.3. Perceptron algorithm
    6.5.4. Maximum likelihood decoding
      6.5.4.1. ML estimation
      6.5.4.2. Decoding accuracy
      6.5.4.3. Confidence intervals
    6.5.5. Neural decoding applications
      6.5.5.1. Brain-machine interfaces
      6.5.5.2. Prosthetic control
      6.5.5.3. Cognitive state decoding
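The spike-triggered average of 6.5.1 is simple enough to sketch end to end (stimulus values and spike bins below are invented for illustration): collect the stimulus segment preceding each spike and average the segments. Under white-noise stimulation this reverse-correlation estimate recovers the neuron's linear filter, i.e. a map of its receptive field.

```python
def spike_triggered_average(stimulus, spike_indices, n_lags):
    """Reverse correlation: average the n_lags stimulus samples that
    preceded each spike. Spikes too early for a full segment are skipped."""
    segments = [stimulus[i - n_lags:i] for i in spike_indices if i >= n_lags]
    n = len(segments)
    return [sum(seg[k] for seg in segments) / n for k in range(n_lags)]

stim = [0.0, 1.0, -1.0, 2.0, 0.0, 2.0, -1.0, 2.0, 0.0]  # stimulus per time bin
spikes = [3, 5, 7]                                      # bins in which spikes fired
print(spike_triggered_average(stim, spikes, n_lags=2))  # mean pre-spike stimulus
```

The resulting filter is the building block of linear decoding; the Bayesian and maximum likelihood methods of 6.5.2 and 6.5.4 go further by modeling the full spike-count distribution rather than only its stimulus-triggered mean.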