Useful Links
Computer Science
Artificial Intelligence
Natural Language Processing (NLP)
1. Introduction to Natural Language Processing
2. Linguistic Foundations
3. Text Processing and Preprocessing
4. Language Modeling
5. Feature Representation
6. Word Embeddings and Distributed Representations
7. Classical Machine Learning for NLP
8. Deep Learning Foundations
9. Recurrent Neural Networks
10. Attention Mechanisms and Transformers
11. Pre-trained Language Models
12. Core NLP Applications
13. Advanced Topics
14. Evaluation and Benchmarking
15. Ethics and Responsible AI
Deep Learning Foundations
Neural Network Basics
Perceptrons and Multi-layer Networks
Activation Functions
Sigmoid and Tanh
ReLU and Variants
Softmax
Loss Functions
Cross-Entropy Loss
Mean Squared Error
Hinge Loss
Optimization
Gradient Descent Variants
Adam and AdaGrad
Learning Rate Scheduling
Regularization Techniques
Dropout
Batch Normalization
Layer Normalization
Weight Decay
Training Strategies
Backpropagation
Mini-batch Training
Early Stopping
Curriculum Learning
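Several of the topics above (ReLU activations, softmax outputs, cross-entropy loss, backpropagation, mini-batch training, and early stopping) can be seen working together in one small example. The sketch below is illustrative only: the network size, learning rate, batch size, and toy dataset are all arbitrary choices, implemented from scratch in NumPy rather than with a deep learning framework.

```python
import numpy as np

# A two-layer perceptron with ReLU hidden units and a softmax output,
# trained with mini-batch gradient descent on cross-entropy loss,
# with a crude early-stopping check. All hyperparameters are
# illustrative assumptions, not recommendations.

rng = np.random.default_rng(0)

# Toy 2-class data: points labeled by which side of a line they fall on.
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

# Parameters: 2 inputs -> 8 hidden units -> 2 classes.
W1 = rng.normal(scale=0.5, size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 2)); b2 = np.zeros(2)

def forward(X):
    h = np.maximum(0.0, X @ W1 + b1)              # ReLU activation
    logits = h @ W2 + b2
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return h, e / e.sum(axis=1, keepdims=True)    # softmax probabilities

def loss(p, y):
    # Cross-entropy: negative log-probability of the true class.
    return -np.log(p[np.arange(len(y)), y]).mean()

lr, best = 0.1, np.inf
for epoch in range(200):
    idx = rng.permutation(len(X))
    for start in range(0, len(X), 32):            # mini-batches of 32
        b = idx[start:start + 32]
        h, p = forward(X[b])
        # Backpropagation: gradient of cross-entropy w.r.t. the logits
        # is (softmax - one_hot), averaged over the batch.
        g = p.copy(); g[np.arange(len(b)), y[b]] -= 1; g /= len(b)
        gW2 = h.T @ g; gb2 = g.sum(0)
        gh = (g @ W2.T) * (h > 0)                 # ReLU gradient gate
        gW1 = X[b].T @ gh; gb1 = gh.sum(0)
        W1 -= lr * gW1; b1 -= lr * gb1
        W2 -= lr * gW2; b2 -= lr * gb2
    cur = loss(forward(X)[1], y)
    if cur > best - 1e-5:                         # early stopping: halt
        break                                     # when loss plateaus
    best = cur

acc = (forward(X)[1].argmax(1) == y).mean()
print(f"training accuracy: {acc:.2f}")
```

Swapping the plain gradient-descent update for Adam, or adding dropout to the hidden layer, would exercise the remaining topics in this section; frameworks such as PyTorch provide all of these as ready-made components.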