GNN Training and Optimization
Training Paradigms
  Supervised Learning
    Labeled Data Requirements
    Task-specific Objectives
  Semi-supervised Learning
    Limited Label Scenarios
    Label Propagation
  Unsupervised Learning
    Reconstruction Objectives
    Contrastive Learning
  Self-supervised Learning
    Pretext Tasks
    Downstream Fine-tuning
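To make the contrastive objectives under unsupervised and self-supervised learning concrete, here is a minimal pure-Python sketch of an InfoNCE-style loss over node embeddings. The function names (`cosine`, `info_nce`), the temperature value, and the toy embeddings are illustrative assumptions, not from any particular library:

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def info_nce(anchor, positive, negatives, tau=0.5):
    """InfoNCE-style contrastive loss: pull a positive view of a node
    toward its anchor while pushing negative samples away."""
    pos = math.exp(cosine(anchor, positive) / tau)
    neg = sum(math.exp(cosine(anchor, n) / tau) for n in negatives)
    return -math.log(pos / (pos + neg))

# Toy data: the positive is a slightly perturbed view of the anchor.
anchor = [1.0, 0.0]
positive = [0.9, 0.1]
negatives = [[0.0, 1.0], [-1.0, 0.2]]
loss_good = info_nce(anchor, positive, negatives)
# Swapping in a negative as the "positive" should raise the loss.
loss_bad = info_nce(anchor, negatives[0], [positive, negatives[1]])
```

In a GNN pretext task, the two views would typically come from graph augmentations (edge dropping, feature masking) of the same node.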
Loss Functions
  Node-level Losses
    Cross-entropy Loss
    Focal Loss
    Margin-based Loss
  Edge-level Losses
    Link Prediction Loss
    Ranking Loss
    Contrastive Loss
  Graph-level Losses
    Classification Loss
    Regression Loss
    Reconstruction Loss
  Regularization Terms
    L1 and L2 Regularization
    Graph Regularization
    Spectral Regularization
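A minimal sketch of the most common combination above: masked node-level cross-entropy (loss computed only on labeled nodes, as in semi-supervised node classification) plus an L2 regularization term. All names and the toy logits are illustrative, assuming a simple list-based representation:

```python
import math

def softmax(logits):
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def node_cross_entropy(logits_per_node, labels, mask):
    """Mean cross-entropy over labeled (masked) nodes only."""
    total, count = 0.0, 0
    for logits, y, labeled in zip(logits_per_node, labels, mask):
        if labeled:
            total += -math.log(softmax(logits)[y])
            count += 1
    return total / count

def l2_penalty(weights, lam=1e-2):
    """L2 regularization on a flat list of model weights."""
    return lam * sum(w * w for w in weights)

logits = [[2.0, 0.1], [0.2, 1.5], [0.0, 0.0]]  # 3 nodes, 2 classes
labels = [0, 1, 0]
mask   = [True, True, False]                   # third node unlabeled
loss = node_cross_entropy(logits, labels, mask) + l2_penalty([0.5, -0.3])
```

Because the mask excludes unlabeled nodes, their logits do not affect the loss, which is what lets GNNs train on sparsely labeled graphs.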
Optimization Challenges
  Gradient Flow in Deep GNNs
    Oversmoothing Problem
      Causes and Analysis
      Mitigation Strategies
        Residual Connections
        Dense Connections
        Jumping Knowledge Networks
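A toy illustration of oversmoothing and the residual-connection mitigation, using scalar node features and mean aggregation on a 3-node path graph. The layer definition and the 0.5 residual weighting are simplifying assumptions for the sketch:

```python
def mp_layer(h, adj, residual=True):
    """One mean-aggregation message-passing layer over scalar features.
    With residual=True each node keeps part of its own state, which
    slows the collapse of all features to a common value."""
    out = []
    for v, neighbors in enumerate(adj):
        agg = sum(h[u] for u in neighbors) / len(neighbors)
        out.append(0.5 * (h[v] + agg) if residual else agg)
    return out

# Path graph 0-1-2 with distinct initial features.
adj = [[1], [0, 2], [1]]
h_res, h_plain = [1.0, 0.0, -1.0], [1.0, 0.0, -1.0]
for _ in range(8):
    h_res = mp_layer(h_res, adj, residual=True)
    h_plain = mp_layer(h_plain, adj, residual=False)

spread = lambda h: max(h) - min(h)  # feature diversity across nodes
```

After eight layers the plain variant's features have collapsed to a single value (spread 0 on this symmetric graph), while the residual variant still separates the nodes; dense connections and jumping knowledge networks attack the same collapse by reusing shallow-layer representations.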
    Oversquashing Problem
      Information Bottlenecks
      Long-range Dependencies
    Vanishing Gradient Problem
      Deep Network Training
      Gradient Clipping
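Gradient clipping by global norm, the standard remedy for unstable gradients when training deep networks, can be sketched in a few lines of pure Python (the function name and flat-list gradient representation are assumptions of the sketch):

```python
import math

def clip_by_global_norm(grads, max_norm):
    """Rescale gradients so their global L2 norm is at most max_norm;
    gradients already within the threshold pass through unchanged."""
    total = math.sqrt(sum(g * g for g in grads))
    if total <= max_norm:
        return grads
    scale = max_norm / total
    return [g * scale for g in grads]

clipped = clip_by_global_norm([3.0, 4.0], max_norm=1.0)  # norm 5 -> norm 1
```

Frameworks expose the same operation over parameter tensors, e.g. PyTorch's `torch.nn.utils.clip_grad_norm_`.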
Scalability Solutions
  Sampling-based Training
    Node Sampling
      Uniform Sampling
      Importance Sampling
    Layer-wise Sampling
      Neighbor Sampling
      Control Variate Methods
    Subgraph Sampling
      GraphSAINT
      Cluster-GCN
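The core idea behind uniform neighbor sampling (as popularized by GraphSAGE) is to cap per-node fanout so one layer's cost is bounded regardless of node degree. A minimal sketch, assuming an adjacency-dict graph representation; the function name and arguments are illustrative:

```python
import random

def sample_neighbors(adj, batch, fanout, rng):
    """For each node in the batch, keep at most `fanout` uniformly
    sampled neighbors; low-degree nodes keep all their neighbors."""
    sampled = {}
    for v in batch:
        nbrs = adj[v]
        if len(nbrs) <= fanout:
            sampled[v] = list(nbrs)
        else:
            sampled[v] = rng.sample(nbrs, fanout)
    return sampled

adj = {0: [1, 2, 3, 4, 5], 1: [0], 2: [0, 3]}
rng = random.Random(0)  # seeded for reproducibility
mini_batch = sample_neighbors(adj, batch=[0, 2], fanout=2, rng=rng)
```

Subgraph samplers such as GraphSAINT and Cluster-GCN instead sample a whole subgraph (or precomputed cluster) per step and run full message passing inside it, trading sampling bias characteristics for better cache locality.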
  Distributed Training
    Data Parallelism
    Model Parallelism
    Communication Optimization
  Memory Optimization
    Gradient Checkpointing
    Mixed Precision Training
    Model Compression
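Gradient checkpointing trades memory for recomputation: only every k-th activation is stored in the forward pass, and the ones in between are recomputed from the nearest checkpoint when needed for backpropagation. A toy pure-Python sketch of that bookkeeping (the chain-of-layers setup and all names are illustrative; real frameworks provide this, e.g. `torch.utils.checkpoint`):

```python
def forward_chain(x, fns, checkpoint_every=None):
    """Apply a chain of layers; store every activation by default, or
    only every k-th one when checkpoint_every is set."""
    stored = {0: x}
    h = x
    for i, f in enumerate(fns, start=1):
        h = f(h)
        if checkpoint_every is None or i % checkpoint_every == 0:
            stored[i] = h
    return h, stored

def recompute(stored, fns, i):
    """Recover activation i by replaying layers from the nearest
    stored checkpoint at or before i."""
    j = max(k for k in stored if k <= i)
    h = stored[j]
    for f in fns[j:i]:
        h = f(h)
    return h

fns = [lambda v, c=c: v + c for c in range(1, 9)]   # 8 toy "layers"
out_full, full = forward_chain(0, fns)              # stores 9 activations
out_ckpt, ckpt = forward_chain(0, fns, checkpoint_every=4)  # stores 3
```

Both variants produce the same output; the checkpointed run keeps roughly 1/k of the activations and pays for it with extra forward computation during the backward pass.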