This repo is my attempt to turn the major ML concepts into clean visualizations.
Instead of long theory notes, every topic gets a plot or an interactive example so the idea sticks faster.
The notebook walks through ML in the order most people learn it, starting from regression and moving up to deep learning basics.
- Simple Linear Regression
- Multiple Linear Regression (pair plot)
- Polynomial Regression
- Loss Function Surface (MSE)
- Linearity check
- Multicollinearity
- Normality of residuals
- Homoscedasticity
- Coefficient shrinkage (L1 / L2)
- Bias–Variance Tradeoff
- Basic time series plot
- Moving average smoothing
- Time series decomposition
- Stationarity
- Sigmoid curve
- Decision boundaries for kNN, Logistic Regression, SVM, etc.
- ROC Curve and AUC
- k-Means step-by-step
- Elbow method
- DBSCAN clusters
- Silhouette score plot
- PCA intuition (before/after projection)
- PCA explained variance plot
- t-SNE vs PCA visual comparison
- Bagging vs Boosting visual
- Random Forest decision boundary
- Gradient Boosting additive effect (tree by tree)
- Perceptron decision line
- Activation functions comparison
- Feed-forward network flow
- Loss landscape for neural nets
- Bag of Words vs TF-IDF
- Word embeddings in 2D
- Cosine similarity point plot
- Overfitting vs underfitting
- Train-val-test split visual
- Learning rate effect
- Confusion matrix visualization
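Each bullet above maps to one short, self-contained cell. As a flavor of the style, here is a minimal sketch of the activation-functions comparison, assuming numpy and matplotlib (the output filename and styling are illustrative, not taken from the notebook):

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # non-interactive backend so the sketch also runs outside Jupyter
import matplotlib.pyplot as plt

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

x = np.linspace(-5, 5, 200)
activations = {
    "sigmoid": sigmoid(x),
    "tanh": np.tanh(x),
    "ReLU": np.maximum(0, x),
}

# Plot all three on a shared axis so the saturation and range differences pop out
fig, ax = plt.subplots(figsize=(6, 4))
for name, y in activations.items():
    ax.plot(x, y, label=name)
ax.axhline(0, color="gray", linewidth=0.5)
ax.set_title("Activation functions on a shared axis")
ax.legend()
fig.savefig("activations.png")
```

In the notebook itself the cell just calls `plt.show()` instead of saving a file.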
Just open the notebook, Machine_Learning_Concepts.ipynb, and run it top to bottom.
Each section pairs a short markdown explanation with code and the resulting visualization.
No external dataset is needed; everything is generated inside the notebook.
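The self-generated data pattern looks roughly like this; a minimal sketch for the simple linear regression section, assuming numpy and matplotlib (the true line `y = 2x + 1`, the seed, and the filename are illustrative choices, not the notebook's actual values):

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend; the notebook would render inline instead
import matplotlib.pyplot as plt

# Generate synthetic data around a known line so the fit has a ground truth
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 80)
y = 2.0 * x + 1.0 + rng.normal(0, 1.0, 80)  # true line: y = 2x + 1, plus noise

slope, intercept = np.polyfit(x, y, 1)  # least-squares fit, degree 1

fig, ax = plt.subplots()
ax.scatter(x, y, s=15, alpha=0.6, label="synthetic data")
xs = np.sort(x)
ax.plot(xs, slope * xs + intercept, color="red",
        label=f"fit: {slope:.2f}x + {intercept:.2f}")
ax.legend()
fig.savefig("simple_linear_regression.png")
```

Because the data is generated with a known slope and intercept, the plot doubles as a sanity check: the fitted line should land close to the true one.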
Make ML feel less abstract.
If a plot explains the idea faster than text, that’s what I pick.
- Add interactive widgets (sliders for k, learning rate, etc.)
- Add deep learning training visual (loss curve + accuracy side-by-side)
- Maybe convert the whole thing into a small web app later
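For the slider idea, one way such a cell could look, using the moving-average topic as the example. This is a sketch of the roadmap item, not existing notebook code: ipywidgets is an assumed dependency, so the sketch falls back to a plain function call when it isn't installed.

```python
import numpy as np

def moving_average(signal, k):
    """Smooth a 1-D signal with a window of size k (the slider-controlled parameter)."""
    return np.convolve(signal, np.ones(k) / k, mode="valid")

# Synthetic noisy sine wave, generated in-notebook like everything else
t = np.linspace(0, 4 * np.pi, 200)
noisy = np.sin(t) + np.random.default_rng(1).normal(0, 0.3, t.size)

def show(k=5):
    smoothed = moving_average(noisy, k)
    # In the notebook this would redraw the plot; here we just report the effect.
    print(f"k={k}: residual std {np.std(noisy[:smoothed.size] - smoothed):.3f}")
    return smoothed

try:
    from ipywidgets import interact, IntSlider  # only present in the notebook env
    interact(show, k=IntSlider(min=1, max=50, value=5))
except ImportError:
    show(5)  # plain call when widgets aren't available
```

Dragging the slider would re-smooth and redraw, which makes the window-size tradeoff visible immediately.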
If you have suggestions or want to add visuals for missing concepts, feel free to open an issue or PR.