Machine Learning
1. Introduction to Machine Learning
2. Mathematical and Statistical Foundations
3. Data Preprocessing and Feature Engineering
4. Supervised Learning
5. Unsupervised Learning
6. Model Evaluation and Validation
7. Ensemble Methods and Advanced Techniques
8. Deep Learning and Neural Networks
9. Reinforcement Learning
10. Advanced Topics and Specialized Areas
11. Machine Learning Operations and Deployment
Ensemble Methods and Advanced Techniques
Ensemble Learning Principles
Wisdom of Crowds
Diversity in Ensembles
Bias-Variance Decomposition for Ensembles
Ensemble Size Considerations
Bagging Methods
Bootstrap Aggregating
Bootstrap Sampling
Aggregation Strategies
Voting for Classification
Averaging for Regression
Variance Reduction
Parallel Training
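A minimal sketch of the bootstrap-aggregating workflow covered in the bagging topics above, assuming scikit-learn is available; the synthetic dataset and parameter values are illustrative only.

```python
# Bagging sketch: bootstrap-sample the training data, fit one decision tree per
# sample, and aggregate predictions by majority vote. Trees are independent,
# so they can be trained in parallel.
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

bagging = BaggingClassifier(
    DecisionTreeClassifier(),  # high-variance base learner
    n_estimators=100,          # ensemble size
    max_samples=1.0,           # each bootstrap sample matches the training-set size
    bootstrap=True,            # sample with replacement
    n_jobs=-1,                 # parallel training
    random_state=0,
)
bagging.fit(X_train, y_train)
print("bagging test accuracy:", bagging.score(X_test, y_test))
```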
Random Forests
Decision Tree Base Learners
Feature Bagging
Random Feature Selection
Feature Subset Size
Out-of-Bag Error Estimation
OOB Score Calculation
Model Evaluation Without Validation Set
Feature Importance
Gini Importance
Permutation Importance
Hyperparameter Tuning
Number of Trees
Tree Depth
Feature Subset Size
Advantages and Limitations
Extra Trees
Extremely Randomized Trees
Random Thresholds
Comparison with Random Forests
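A hedged sketch tying together the random forest topics above (feature bagging, out-of-bag error, Gini and permutation importance) and the Extra Trees comparison, again assuming scikit-learn; hyperparameter values are illustrative, not recommendations.

```python
# Random forest sketch: random feature subsets at each split, out-of-bag error
# estimation without a validation set, and the two common importance measures.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, ExtraTreesClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=20, n_informative=5,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

rf = RandomForestClassifier(
    n_estimators=300,     # number of trees
    max_features="sqrt",  # feature subset size considered at each split
    oob_score=True,       # estimate generalization error from out-of-bag samples
    n_jobs=-1,
    random_state=0,
).fit(X_train, y_train)

print("OOB score:", rf.oob_score_)                      # no held-out set needed
print("Gini importance:", rf.feature_importances_[:5])  # impurity-based
perm = permutation_importance(rf, X_test, y_test, n_repeats=10, random_state=0)
print("Permutation importance:", perm.importances_mean[:5])

# Extra Trees: split thresholds are drawn at random rather than optimized,
# trading a little bias for lower variance and faster training.
et = ExtraTreesClassifier(n_estimators=300, random_state=0).fit(X_train, y_train)
print("Extra Trees accuracy:", et.score(X_test, y_test))
```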
Boosting Methods
Boosting Principles
Sequential Learning
Weak Learner Combination
Adaptive Reweighting
Bias Reduction
AdaBoost
Algorithm Steps
Sample Weight Updates
Weak Learner Weights
Exponential Loss Function
Theoretical Properties
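A from-scratch sketch of the discrete AdaBoost loop outlined above, showing the sample-weight updates and weak-learner weights for binary labels in {-1, +1}; the dataset, number of rounds, and the small epsilon guard are illustrative assumptions.

```python
# Discrete AdaBoost sketch: each round fits a decision stump on weighted data,
# scores it, and reweights the samples so the next learner focuses on mistakes.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
y = np.where(y == 1, 1, -1)               # labels in {-1, +1}

n, n_rounds = len(y), 50
w = np.full(n, 1.0 / n)                   # uniform initial sample weights
learners, alphas = [], []

for _ in range(n_rounds):
    stump = DecisionTreeClassifier(max_depth=1).fit(X, y, sample_weight=w)
    pred = stump.predict(X)
    err = np.sum(w[pred != y])                        # weighted training error
    alpha = 0.5 * np.log((1 - err) / (err + 1e-10))   # weak learner weight
    w *= np.exp(-alpha * y * pred)                    # up-weight misclassified points
    w /= w.sum()                                      # renormalize
    learners.append(stump)
    alphas.append(alpha)

# Final prediction: sign of the alpha-weighted vote of all weak learners.
F = sum(a * l.predict(X) for a, l in zip(alphas, learners))
print("training accuracy:", np.mean(np.sign(F) == y))
```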
Gradient Boosting
Gradient Boosting Framework
Residual Fitting
Gradient Descent in Function Space
Learning Rate
Regularization Techniques
Gradient Boosting Machines
Tree-Based Weak Learners
Shrinkage Parameter
Subsampling
Feature Subsampling
XGBoost
Extreme Gradient Boosting
Second-Order Optimization
Regularization Terms
Parallel Processing
Missing Value Handling
Feature Importance
LightGBM
Gradient-Based One-Side Sampling
Exclusive Feature Bundling
Leaf-Wise Tree Growth
Memory Efficiency
CatBoost
Categorical Feature Handling
Ordered Boosting
Symmetric Trees
Overfitting Reduction
Handling Overfitting in Boosting
Early Stopping
Regularization Parameters
Cross-Validation Monitoring
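A minimal gradient-boosting sketch covering the knobs listed above (shrinkage, subsampling, feature subsampling, early stopping), using scikit-learn's implementation for brevity; XGBoost, LightGBM, and CatBoost expose the same ideas with additional engineering, and all values here are illustrative.

```python
# Gradient boosting sketch: shallow trees fit sequentially to the loss gradient,
# with shrinkage, row/feature subsampling, and validation-based early stopping.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

gbm = GradientBoostingClassifier(
    n_estimators=1000,        # upper bound on boosting rounds
    learning_rate=0.05,       # shrinkage: smaller steps, more rounds
    max_depth=3,              # weak (shallow) tree learners
    subsample=0.8,            # stochastic gradient boosting (row subsampling)
    max_features=0.8,         # feature subsampling per split
    validation_fraction=0.1,  # internal holdout used for early stopping
    n_iter_no_change=20,      # stop when the holdout score stops improving
    random_state=0,
).fit(X_train, y_train)

print("rounds actually used:", gbm.n_estimators_)
print("test accuracy:", gbm.score(X_test, y_test))
```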
Stacking and Blending
Stacked Generalization
Base Learners
Meta-Learner
Cross-Validation for Meta-Features
Multi-Level Stacking
Blending
Holdout Set for Meta-Learning
Comparison with Stacking
Computational Efficiency
Model Diversity
Algorithm Diversity
Data Diversity
Parameter Diversity
Meta-Learner Selection
Linear Models
Non-Linear Models
Regularization in Meta-Learning
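A stacking sketch matching the topics above: diverse base learners, out-of-fold meta-features from internal cross-validation, and a regularized linear meta-learner. It assumes scikit-learn's StackingClassifier; the choice of base models is illustrative.

```python
# Stacking sketch: heterogeneous base learners produce out-of-fold probability
# predictions, and a logistic-regression meta-learner combines them.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

stack = StackingClassifier(
    estimators=[                                  # algorithmically diverse base learners
        ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
        ("svm", SVC(probability=True, random_state=0)),
        ("knn", KNeighborsClassifier()),
    ],
    final_estimator=LogisticRegression(C=1.0),    # regularized linear meta-learner
    cv=5,                                         # cross-validation for meta-features
    stack_method="predict_proba",                 # pass probabilities to the meta-learner
    n_jobs=-1,
)
stack.fit(X_train, y_train)
print("stacking test accuracy:", stack.score(X_test, y_test))
```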
Voting Methods
Hard Voting
Majority Vote
Plurality Vote
Tie-Breaking Strategies
Soft Voting
Probability Averaging
Weighted Probability Averaging
Confidence-Based Weighting
Weighted Voting
Performance-Based Weights
Dynamic Weight Adjustment
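A short sketch of hard versus soft (and weighted) voting as listed above, assuming scikit-learn's VotingClassifier; the example weights stand in for performance-based weights and are purely illustrative.

```python
# Voting sketch: the same base models combined by majority vote (hard) and by
# weighted averaging of predicted probabilities (soft).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

models = [
    ("lr", LogisticRegression(max_iter=1000)),
    ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
    ("nb", GaussianNB()),
]

hard = VotingClassifier(estimators=models, voting="hard").fit(X_train, y_train)
soft = VotingClassifier(estimators=models, voting="soft",
                        weights=[1, 2, 1]).fit(X_train, y_train)  # illustrative weights

print("hard voting accuracy:", hard.score(X_test, y_test))
print("soft voting accuracy:", soft.score(X_test, y_test))
```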
Hyperparameter Optimization
Hyperparameter vs. Parameter Distinction
Search Strategies
Grid Search
Exhaustive Search
Computational Cost
Curse of Dimensionality
Random Search
Random Sampling
Efficiency Advantages
Theoretical Justification
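A sketch contrasting the two search strategies above, exhaustive grid search and random search with a fixed budget, assuming scikit-learn and SciPy; the search spaces are illustrative.

```python
# Search-strategy sketch: grid search enumerates every combination, while
# random search samples a fixed number of configurations from distributions.
from scipy.stats import randint
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, RandomizedSearchCV

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

grid = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid={"n_estimators": [100, 300], "max_depth": [None, 5, 10]},
    cv=5, n_jobs=-1,
).fit(X, y)
print("grid search best:", grid.best_params_, grid.best_score_)

rand = RandomizedSearchCV(
    RandomForestClassifier(random_state=0),
    param_distributions={"n_estimators": randint(50, 500),
                         "max_depth": randint(2, 20)},
    n_iter=20,          # fixed budget of sampled configurations
    cv=5, n_jobs=-1, random_state=0,
).fit(X, y)
print("random search best:", rand.best_params_, rand.best_score_)
```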
Bayesian Optimization
Gaussian Process Models
Acquisition Functions
Sequential Model-Based Optimization
Evolutionary Algorithms
Genetic Algorithms
Particle Swarm Optimization
Gradient-Based Methods
Differentiable Hyperparameters
Hypergradients
Multi-Fidelity Optimization
Successive Halving
Hyperband
BOHB
Early Stopping
Validation-Based Stopping
Patience Parameter
Restoration of Best Weights
Automated Machine Learning
AutoML Frameworks
Neural Architecture Search
Feature Engineering Automation
Model Selection Automation