Unsupervised Learning
Clustering Algorithms
K-Means Clustering
Algorithm Steps
Initialization
Assignment Step
Update Step
Convergence
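The four steps above (initialization, assignment, update, convergence check) can be sketched in plain Python as Lloyd's algorithm; the sample data, k=2, the random seed, and the iteration cap below are illustrative assumptions, not part of this outline.

```python
import math
import random

def kmeans(points, k, max_iter=100, seed=0):
    """Lloyd's algorithm: random init, then alternate assignment/update."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)          # Initialization: k random points
    for _ in range(max_iter):
        # Assignment step: each point joins its nearest centroid
        labels = [min(range(k), key=lambda j: math.dist(p, centroids[j]))
                  for p in points]
        # Update step: each centroid moves to the mean of its cluster
        new_centroids = []
        for j in range(k):
            members = [p for p, l in zip(points, labels) if l == j]
            if members:
                new_centroids.append(tuple(sum(c) / len(members)
                                           for c in zip(*members)))
            else:  # keep the old centroid if its cluster emptied out
                new_centroids.append(centroids[j])
        # Convergence: stop once the centroids no longer move
        if new_centroids == centroids:
            break
        centroids = new_centroids
    return labels, centroids

data = [(0.0, 0.0), (0.1, 0.2), (0.2, 0.1),   # one tight blob near the origin
        (5.0, 5.0), (5.1, 4.9), (4.9, 5.1)]   # another blob near (5, 5)
labels, centroids = kmeans(data, k=2)
```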
Centroid Initialization Methods
Random Initialization
K-Means++
Forgy Method
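Of the initialization methods above, K-Means++ is the least obvious: each new centroid is sampled with probability proportional to its squared distance from the nearest centroid already chosen, which spreads the seeds out. A minimal stdlib sketch (data and seed are illustrative):

```python
import math
import random

def kmeans_pp_init(points, k, seed=0):
    """K-Means++ seeding: first centroid uniform, the rest D^2-weighted."""
    rng = random.Random(seed)
    centroids = [rng.choice(points)]            # first centroid uniformly at random
    while len(centroids) < k:
        # squared distance from each point to its nearest chosen centroid
        d2 = [min(math.dist(p, c) ** 2 for c in centroids) for p in points]
        # sample the next centroid with probability proportional to d2
        centroids.append(rng.choices(points, weights=d2, k=1)[0])
    return centroids

data = [(0.0, 0.0), (0.1, 0.1), (10.0, 10.0), (10.1, 9.9)]
init = kmeans_pp_init(data, k=2)
```

Because already-chosen centroids have zero weight, far-apart points dominate the sampling, which is why K-Means++ tends to avoid the degenerate starts that purely random initialization allows.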
Determining Optimal k
Elbow Method
Silhouette Analysis
Gap Statistic
Information Criteria
Variants of K-Means
Mini-Batch K-Means
Fuzzy C-Means
K-Medoids
Limitations and Assumptions
Spherical Clusters
Similar Cluster Sizes
Sensitivity to Initialization
Hierarchical Clustering
Agglomerative Clustering
Bottom-Up Approach
Linkage Criteria
Single Linkage
Complete Linkage
Average Linkage
Ward Linkage
Algorithm Steps
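The bottom-up steps above reduce to: start with every point as its own cluster, then repeatedly merge the closest pair under the chosen linkage until one cluster remains. A stdlib sketch using single linkage (the 1-D sample points are illustrative):

```python
import math

def agglomerative(points):
    """Single-linkage agglomerative clustering; returns the merge history."""
    clusters = [[i] for i in range(len(points))]   # every point starts alone
    merges = []
    while len(clusters) > 1:
        best = None
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                # single linkage: distance between the closest pair of members
                d = min(math.dist(points[i], points[j])
                        for i in clusters[a] for j in clusters[b])
                if best is None or d < best[0]:
                    best = (d, a, b)
        d, a, b = best
        merges.append((sorted(clusters[a] + clusters[b]), d))
        clusters[a] = clusters[a] + clusters[b]    # merge b into a
        del clusters[b]
    return merges

pts = [(0.0,), (0.2,), (5.0,), (5.1,)]
history = agglomerative(pts)
```

The merge history, with its distances, is exactly what a dendrogram draws: each entry is one horizontal join at the height of the merge distance.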
Divisive Clustering
Top-Down Approach
Splitting Strategies
Dendrograms
Construction
Interpretation
Cutting Trees
Distance Metrics
Euclidean Distance
Manhattan Distance
Cosine Similarity
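The three metrics above have one-line definitions, sketched here in stdlib Python (the example vectors are illustrative):

```python
import math

def euclidean(u, v):
    return math.dist(u, v)                        # straight-line (L2) distance

def manhattan(u, v):
    return sum(abs(a - b) for a, b in zip(u, v))  # city-block (L1) distance

def cosine_similarity(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    # 1 for parallel vectors, 0 for orthogonal, -1 for opposite
    return dot / (math.hypot(*u) * math.hypot(*v))

u, v = (3.0, 4.0), (6.0, 8.0)   # parallel vectors: cosine similarity is 1
```

Note that cosine similarity ignores magnitude entirely, which is why it behaves differently from the two distance metrics when vectors point the same way but differ in length.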
Determining Number of Clusters
Dendrogram Analysis
Inconsistency Coefficient
Density-Based Clustering
DBSCAN
Core Points
Border Points
Noise Points
Density Reachability
Parameter Selection
Epsilon
MinPts
Advantages and Limitations
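The DBSCAN concepts above (core points, border points, noise, density reachability, and the eps/MinPts parameters) fit in a short stdlib sketch; the sample data and parameter values are illustrative.

```python
import math

def dbscan(points, eps, min_pts):
    """Label each point with a cluster id, or -1 for noise."""
    n = len(points)
    # eps-neighbourhoods (a point counts as its own neighbour)
    nbrs = [[j for j in range(n) if math.dist(points[i], points[j]) <= eps]
            for i in range(n)]
    labels = [None] * n
    cluster = 0
    for i in range(n):
        if labels[i] is not None:
            continue
        if len(nbrs[i]) < min_pts:       # not a core point: tentatively noise
            labels[i] = -1
            continue
        labels[i] = cluster              # core point: start a new cluster
        frontier = list(nbrs[i])
        while frontier:                  # grow it by density reachability
            j = frontier.pop()
            if labels[j] == -1:          # noise reclassified as a border point
                labels[j] = cluster
            if labels[j] is not None:
                continue
            labels[j] = cluster
            if len(nbrs[j]) >= min_pts:  # only core points expand the frontier
                frontier.extend(nbrs[j])
        cluster += 1
    return labels

data = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1),   # dense blob A
        (5.0, 5.0), (5.1, 5.0), (5.0, 5.1),   # dense blob B
        (9.0, 0.0)]                            # isolated point -> noise
labels = dbscan(data, eps=0.5, min_pts=3)
```

Unlike K-Means, nothing here assumes a cluster count or spherical shape: clusters are simply the connected regions of core points plus their borders.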
OPTICS
Ordering Points
Reachability Distance
Core Distance
Mean Shift
Kernel Density Estimation
Mode Seeking
Model-Based Clustering
Gaussian Mixture Models
Mixture Components
EM Algorithm
Model Selection
Expectation-Maximization
E-Step
M-Step
Convergence
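The E-step/M-step alternation above can be made concrete with a two-component 1-D Gaussian mixture; the sample data, initial parameters, and step count are illustrative, and the variance floor is a pragmatic guard against collapse, not part of the textbook algorithm.

```python
import math

def normal_pdf(x, mu, var):
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def em_two_gaussians(xs, mu, var=(1.0, 1.0), pi=(0.5, 0.5), steps=25):
    """EM for a 1-D mixture of two Gaussians (assumes neither component empties)."""
    for _ in range(steps):
        # E-step: responsibility of each component for each point
        resp = []
        for x in xs:
            p = [pi[k] * normal_pdf(x, mu[k], var[k]) for k in range(2)]
            s = p[0] + p[1]
            resp.append((p[0] / s, p[1] / s))
        # M-step: re-estimate weights, means, variances from responsibilities
        n = [sum(r[k] for r in resp) for k in range(2)]
        pi = tuple(n[k] / len(xs) for k in range(2))
        mu = tuple(sum(r[k] * x for r, x in zip(resp, xs)) / n[k]
                   for k in range(2))
        var = tuple(max(sum(r[k] * (x - mu[k]) ** 2
                            for r, x in zip(resp, xs)) / n[k], 1e-6)
                    for k in range(2))
    return mu, var, pi

xs = [0.0, 0.1, -0.1, 10.0, 10.1, 9.9]        # two well-separated groups
mu, var, pi = em_two_gaussians(xs, mu=(0.5, 9.5))
```

On well-separated data the responsibilities quickly become near 0/1 and the updates reduce to per-group sample means and variances, which is why EM converges in a handful of steps here.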
Clustering Evaluation
Internal Measures
Silhouette Score
Calinski-Harabasz Index
Davies-Bouldin Index
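As a worked example of an internal measure, the silhouette score compares each point's mean intra-cluster distance a with its mean distance b to the nearest other cluster, averaging (b - a) / max(a, b). A stdlib sketch (assumes every cluster has at least two points; the data are illustrative):

```python
import math

def silhouette_score(points, labels):
    """Mean silhouette over all points; near +1 is good, below 0 is bad."""
    n = len(points)
    total = 0.0
    for i in range(n):
        same = [j for j in range(n) if labels[j] == labels[i] and j != i]
        # a: mean distance to the point's own cluster
        a = sum(math.dist(points[i], points[j]) for j in same) / len(same)
        # b: mean distance to the nearest other cluster
        b = min(
            sum(math.dist(points[i], points[j]) for j in others) / len(others)
            for lab in set(labels) if lab != labels[i]
            for others in [[j for j in range(n) if labels[j] == lab]]
        )
        total += (b - a) / max(a, b)
    return total / n

pts = [(0.0, 0.0), (0.0, 0.2), (4.0, 4.0), (4.0, 4.2)]
good = silhouette_score(pts, [0, 0, 1, 1])   # matches the true blobs
bad = silhouette_score(pts, [0, 1, 0, 1])    # splits each blob in two
```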
External Measures
Adjusted Rand Index
Normalized Mutual Information
Fowlkes-Mallows Index
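Of the external measures above, the Adjusted Rand Index is the most common: it counts pairs of items grouped the same way in both labelings and corrects for chance agreement. A stdlib sketch (label vectors are illustrative):

```python
from collections import Counter
from math import comb

def adjusted_rand_index(a, b):
    """Chance-corrected pairwise agreement between two labelings; 1.0 = identical partitions."""
    n = len(a)
    # pair counts from the contingency table, row sums, and column sums
    sum_cells = sum(comb(c, 2) for c in Counter(zip(a, b)).values())
    sum_a = sum(comb(c, 2) for c in Counter(a).values())
    sum_b = sum(comb(c, 2) for c in Counter(b).values())
    expected = sum_a * sum_b / comb(n, 2)     # agreement expected by chance
    max_index = (sum_a + sum_b) / 2
    return (sum_cells - expected) / (max_index - expected)

a = [0, 0, 1, 1]
b = [1, 1, 0, 0]   # same partition as a, just with label names swapped
c = [0, 1, 0, 1]   # a partition that cuts across a
```

Because only the partition matters, relabeling the clusters (as in b) leaves the score at 1.0, which is exactly why external measures use pair counts rather than raw label agreement.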
Dimensionality Reduction
Principal Component Analysis
Covariance Matrix
Eigenvalue Decomposition
Principal Components
First Principal Component
Subsequent Components
Orthogonality
Explained Variance
Variance Ratio
Cumulative Variance
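For two features the covariance matrix is 2×2, so its eigenvalues have a closed form and the explained-variance ratios can be checked by hand; the sample data below (points near the line y = x) are illustrative.

```python
import math

def explained_variance_2d(xs, ys):
    """Eigenvalues of the 2x2 sample covariance matrix and their variance ratios."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    # sample covariance matrix [[sxx, sxy], [sxy, syy]]
    sxx = sum((x - mx) ** 2 for x in xs) / (n - 1)
    syy = sum((y - my) ** 2 for y in ys) / (n - 1)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (n - 1)
    # closed-form eigenvalues of a symmetric 2x2 matrix
    mean = (sxx + syy) / 2
    half_gap = math.sqrt(((sxx - syy) / 2) ** 2 + sxy ** 2)
    lam1, lam2 = mean + half_gap, mean - half_gap
    total = lam1 + lam2
    return (lam1, lam2), (lam1 / total, lam2 / total)

xs = [0.0, 1.0, 2.0, 3.0]
ys = [0.1, 1.0, 2.1, 2.9]   # strongly correlated with xs
eigvals, ratios = explained_variance_2d(xs, ys)
```

Because the data lie almost on a line, the first eigenvalue captures nearly all the variance; the cumulative variance of the first component alone already exceeds any common retention threshold.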
Scree Plot
Dimensionality Selection
Kaiser Criterion
Cumulative Variance Threshold
Cross-Validation
PCA Applications
Data Compression
Noise Reduction
Visualization
Limitations
Linear Assumptions
Interpretability
Factor Analysis
Latent Factors
Factor Loadings
Communalities
Uniqueness
Rotation Methods
Varimax
Promax
Independent Component Analysis
Statistical Independence
Non-Gaussian Sources
FastICA Algorithm
Applications
Signal Separation
Feature Extraction
Non-Linear Dimensionality Reduction
t-SNE
Stochastic Neighbor Embedding
t-Distribution
Perplexity Parameter
Optimization Process
Visualization Applications
UMAP
Uniform Manifold Approximation and Projection
Topological Structure
Hyperparameters
Manifold Learning
Locally Linear Embedding
Isomap
Laplacian Eigenmaps
Autoencoders for Dimensionality Reduction
Encoder-Decoder Architecture
Bottleneck Layer
Reconstruction Loss
Variational Autoencoders
Association Rule Learning
Market Basket Analysis
Transactions and Items
Itemsets
Association Rules
Apriori Algorithm
Frequent Itemset Generation
Candidate Generation
Support Counting
Rule Generation
Computational Complexity
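The level-wise structure above can be sketched in stdlib Python: frequent k-itemsets are built from frequent (k-1)-itemsets, candidates are support-counted against the transactions, and the loop stops when a level is empty. This simplified sketch generates candidates as unions of frequent itemsets rather than with full subset pruning; the baskets and threshold are illustrative.

```python
def frequent_itemsets(transactions, min_support):
    """Level-wise Apriori-style mining; returns {itemset: support}."""
    n = len(transactions)
    sets = [set(t) for t in transactions]
    result = {}
    # level 1: frequent single items
    level = []
    for item in sorted({i for t in sets for i in t}):
        sup = sum(item in t for t in sets) / n
        if sup >= min_support:
            level.append(frozenset([item]))
            result[frozenset([item])] = sup
    k = 2
    while level:
        # candidate generation: size-k unions of frequent (k-1)-itemsets
        candidates = {a | b for a in level for b in level if len(a | b) == k}
        level = []
        for c in candidates:
            sup = sum(c <= t for t in sets) / n   # support counting
            if sup >= min_support:
                level.append(c)
                result[c] = sup
        k += 1
    return result

baskets = [["milk", "bread"], ["milk", "bread", "butter"],
           ["bread", "butter"], ["milk", "bread"]]
freq = frequent_itemsets(baskets, min_support=0.5)
```

The anti-monotone property does the pruning: once {milk, butter} falls below the threshold, no superset containing both can be frequent, so each level shrinks quickly.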
FP-Growth Algorithm
FP-Tree Construction
Conditional Pattern Bases
Efficiency Improvements
Evaluation Metrics
Support
Interpretation
Confidence
Interpretation
Lift
Interpretation
Conviction
Interpretation
Kulczynski Measure
Cosine Measure
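The core metrics above follow directly from transaction counts: support is P(A and C), confidence is P(C | A), and lift compares that confidence to the consequent's baseline rate. A stdlib sketch (the baskets and the rule milk -> bread are illustrative):

```python
def rule_metrics(transactions, antecedent, consequent):
    """Support, confidence, and lift for the rule antecedent -> consequent."""
    n = len(transactions)
    a, c = set(antecedent), set(consequent)
    n_a = sum(a <= set(t) for t in transactions)
    n_c = sum(c <= set(t) for t in transactions)
    n_ac = sum((a | c) <= set(t) for t in transactions)
    support = n_ac / n                      # P(A and C)
    confidence = n_ac / n_a                 # P(C | A)
    lift = confidence / (n_c / n)           # P(C | A) / P(C)
    return support, confidence, lift

baskets = [["milk", "bread"], ["milk", "bread", "butter"],
           ["bread", "butter"], ["milk"]]
support, confidence, lift = rule_metrics(baskets, ["milk"], ["bread"])
```

A lift above 1 means the antecedent raises the chance of the consequent; a lift below 1 (as here, since bread is already very common) means the rule adds little beyond the baseline.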
Applications
Recommendation Systems
Cross-Selling
Inventory Management