Natural Language Processing (NLP)
1. Introduction to Natural Language Processing
2. Linguistic Foundations
3. Text Processing and Preprocessing
4. Language Modeling
5. Feature Representation
6. Word Embeddings and Distributed Representations
7. Classical Machine Learning for NLP
8. Deep Learning Foundations
9. Recurrent Neural Networks
10. Attention Mechanisms and Transformers
11. Pre-trained Language Models
12. Core NLP Applications
13. Advanced Topics
14. Evaluation and Benchmarking
15. Ethics and Responsible AI
Pre-trained Language Models
Transfer Learning Paradigm
Pre-training Objectives
Fine-tuning Strategies
Domain Adaptation
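The recipe the rest of this section builds on is the same everywhere: pre-train once on unlabeled text, then adapt the weights to each downstream task. Below is a minimal full fine-tuning sketch using the Hugging Face transformers Trainer; the bert-base-uncased checkpoint, the imdb dataset, and the hyperparameters are illustrative assumptions, not prescribed choices.

```python
# Full fine-tuning sketch: load a pre-trained encoder, attach a fresh task
# head, and update ALL weights on labeled data.
# Assumes: pip install transformers datasets
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)  # classification head is new, random

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=256)

data = load_dataset("imdb").map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", num_train_epochs=1,
                           per_device_train_batch_size=16),
    train_dataset=data["train"].shuffle(seed=0).select(range(2000)),
    tokenizer=tokenizer,  # enables dynamic padding per batch
)
trainer.train()  # every parameter is updated: this is full fine-tuning
```

Parameter-efficient alternatives that update only a small fraction of these weights are covered under Model Adaptation Techniques below.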
Encoder-Only Models
BERT
Masked Language Modeling
Next Sentence Prediction
Bidirectional Context
RoBERTa
Training Optimizations
Dynamic Masking
ALBERT
Parameter Sharing
Factorized Embeddings
DeBERTa
Disentangled Attention
Enhanced Mask Decoder
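What unites the encoder-only family is the masked-language-modeling objective: corrupt some input tokens and predict them from context on both sides. A quick demonstration via the transformers fill-mask pipeline (the checkpoint is an illustrative assumption):

```python
# Masked language modeling at inference time: BERT predicts a [MASK] token
# using bidirectional context (words on BOTH sides of the gap).
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")
for pred in fill("The capital of France is [MASK]."):
    print(f"{pred['token_str']:>10}  {pred['score']:.3f}")
# The top prediction is typically "paris". An autoregressive model could
# only use the left context "The capital of France is".
```

RoBERTa's dynamic masking changes only the data pipeline, not this objective: which positions are masked is re-sampled on every pass over the data instead of being fixed once during preprocessing.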
Decoder-Only Models
GPT Family
Autoregressive Training
Scaling Laws
In-Context Learning
LLaMA
Architectural Improvements
Efficient Training
PaLM
Pathways System
Emergent Abilities
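Decoder-only models share a single pre-training objective: predict the next token given everything to its left. Scaling laws show that this objective's test loss falls predictably as a power law in model size, data, and compute, which motivated training ever-larger members of this family. The same interface yields in-context learning, where task examples are supplied in the prompt rather than through weight updates. A sketch with GPT-2 as an openly available stand-in for the GPT family; reliable few-shot behavior only emerges at much larger scale.

```python
# Autoregressive (left-to-right) generation with a decoder-only model,
# driven by a few-shot, in-context prompt: the "training examples" live
# entirely in the input text and no weights change.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = (
    "English: cheese -> French: fromage\n"
    "English: house -> French: maison\n"
    "English: water -> French:"
)
inputs = tok(prompt, return_tensors="pt")
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=5, do_sample=False,
                         pad_token_id=tok.eos_token_id)
print(tok.decode(out[0][inputs["input_ids"].shape[1]:]))  # new tokens only
```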
Encoder-Decoder Models
T5
Text-to-Text Framework
Unified Pre-training
BART
Denoising Autoencoder
Text Infilling
UL2
Unifying Language Learning Paradigms
Mixture-of-Denoisers
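T5's text-to-text framing reduces every task to string in, string out, with the task selected by a prefix; BART's denoising pre-training (text infilling) targets the same encoder-decoder shape. A minimal sketch of the T5 interface, using the prefix convention from the original T5 setup (requires the sentencepiece package):

```python
# Text-to-text: one seq2seq model, the task named by a natural-language
# prefix, with both input and output as plain strings.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

text = "translate English to German: The house is wonderful."
out = model.generate(**tok(text, return_tensors="pt"), max_new_tokens=20)
print(tok.decode(out[0], skip_special_tokens=True))
# -> "Das Haus ist wunderbar."
```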
Model Adaptation Techniques
Full Fine-tuning
Parameter-Efficient Methods
LoRA
Adapters
Prompt Tuning
Prefix Tuning
In-Context Learning
Few-Shot Learning
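LoRA makes the parameter-efficiency trade-off concrete: freeze the pre-trained weight matrix W and learn only a low-rank update ΔW = BA. Below is a from-scratch PyTorch sketch of that idea, not the peft library's implementation.

```python
# LoRA sketch: the pretrained weight W is frozen; only the low-rank factors
# A (r x in) and B (out x r) train, so trainable parameters drop from
# in*out to r*(in + out). Real use would typically go through a library
# such as peft; this only illustrates the mechanism.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        self.base.weight.requires_grad_(False)  # freeze pretrained W
        if self.base.bias is not None:
            self.base.bias.requires_grad_(False)
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))  # ΔW = 0 at init
        self.scale = alpha / r

    def forward(self, x):
        # y = x W^T + b  +  scale * x A^T B^T   (low-rank update)
        return self.base(x) + self.scale * (x @ self.A.T) @ self.B.T

layer = LoRALinear(nn.Linear(768, 768))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(trainable)  # 12288 trainable vs. 590592 frozen
```

Adapters, prompt tuning, and prefix tuning follow the same pattern: the backbone stays frozen while a small set of new parameters is trained per task, bottleneck layers inside each block for adapters, continuous vectors prepended to the input embeddings for prompt tuning, or to every layer's attention for prefix tuning. In-context learning and few-shot prompting go further and adapt the model with no weight updates at all.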