Machine Learning Notes

Contents

Algorithms

  • K-means: aims to choose centroids that minimize the inertia, or within-cluster sum-of-squares criterion. Use the “elbow” method to identify the right number of clusters k (see the elbow sketch after this list). scikit tutorial
  • KNN: Simple, flexible, naturally handles multiple classes. Slow at scale, sensitive to feature scaling and irrelevant features. scikit tutorial
  • Linear Discriminant Analysis (LDA): A classifier with a linear decision boundary, generated by fitting class conditional densities to the data and using Bayes’ rule. The model fits a Gaussian density to each class, assuming that all classes share the same covariance matrix. scikit tutorial
  • Linear regression
    • assumptions (LINE) source
      • Linearity
      • Independence of errors
      • Normality of errors
      • Equal variances
      • Tests of assumptions: i) plot each feature against the residuals, ii) plot y_predicted against the residuals, iii) histogram of the residuals.
    • An overspecified model can still be used to predict the label, but should not be used to ascribe the effect of an individual feature on the label.
    • Linear algebra solution: the normal equation, beta_hat = (X^T X)^(-1) X^T y
  • Naive Bayes: uses naive conditional independence assumption of features. scikit
  • PCA: project the data onto the k orthogonal directions that minimize the perpendicular distance to the points (equivalently, maximize retained variance). PCA can also be thought of as an eigenvalue/eigenvector decomposition of the covariance matrix (see the eigendecomposition sketch after this list). scikit. Intuition paper
  • Pearson’s correlation coefficient: r = sum((x_i - x_bar)(y_i - y_bar)) / sqrt(sum((x_i - x_bar)^2) * sum((y_i - y_bar)^2)). wiki
  • Random Forests: each tree is built on a bootstrap sample of rows (drawn with replacement) from the training set. Less prone to overfitting than a single decision tree. scikit
  • RNN: Karpathy tutorial
  • Sorting: tutorial
  • Stochastic gradient descent: tutorial. Calculus solution: minimize the average cost Q(w) = (1/n) * sum_i Q_i(w) via per-example updates w := w - eta * grad Q_i(w) (see the SGD sketch after this list).
  • SVD: Singular Value Decomposition intuition with PCA use case
  • SVM: Effective in high dimensional spaces (or when number of dimensions > number of examples). SVMs do not directly provide probability estimates. scikit
  • Transformers: tutorial. Original transformer architecture (Vaswani et al., 2017, “Attention Is All You Need”)
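
A minimal sketch of the elbow method mentioned under K-means, assuming scikit-learn is available; the blob dataset and the range of k are placeholders:

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

# Toy data; any feature matrix X works here.
X, _ = make_blobs(n_samples=500, centers=4, random_state=0)

# Fit K-means for a range of k and record inertia (within-cluster
# sum of squares). The "elbow" is the k where inertia stops
# dropping sharply.
for k in range(1, 9):
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
    print(k, round(km.inertia_, 1))
```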
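A sketch of the PCA-as-eigendecomposition view, using a small random dataset; it checks that the top eigenvalues of the covariance matrix agree with scikit-learn’s PCA:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3)) @ rng.normal(size=(3, 3))  # correlated toy data

# Eigendecomposition of the covariance matrix of the centered data.
Xc = X - X.mean(axis=0)
eigvals, eigvecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
order = np.argsort(eigvals)[::-1]  # largest variance first

# The leading eigenvalues match scikit-learn's explained variance
# (and the eigenvectors match the components, up to sign).
pca = PCA(n_components=2).fit(X)
print(eigvals[order][:2])
print(pca.explained_variance_)
```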
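A bare-bones sketch of stochastic gradient descent for least-squares linear regression; the simulated data, learning rate eta, and epoch count are illustrative choices:

```python
import numpy as np

# Per-example update w := w - eta * grad Q_i(w),
# where Q_i(w) = 0.5 * (x_i . w - y_i)^2.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w + 0.1 * rng.normal(size=1000)

w, eta = np.zeros(3), 0.01
for epoch in range(5):
    for i in rng.permutation(len(X)):
        grad = (X[i] @ w - y[i]) * X[i]  # gradient of Q_i at current w
        w -= eta * grad
print(w)  # approaches true_w
```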

Bayes

Explainability

  • Books: Interpretable Machine Learning
  • Tutorials: twosigma: a brief survey
  • Tools: EthicalML GitHub
  • Partial dependence plots (PDP): x-axis = value of a single feature, y-axis = average model prediction, marginalized over the other features (see the PDP/ICE sketch after this list). scikit
  • Individual conditional expectation (ICE): x-axis = value of a single feature, y-axis = model prediction for one sample, drawn as one line per sample; the PDP is the average of the ICE lines. scikit
  • Permutation feature importance: randomly shuffle one feature at a time and measure the drop in a model metric such as F1 (see the sketch after this list). scikit
  • Global surrogate: train an easily interpretable model (such as linear regression) on the predictions made by a black-box model
  • Local surrogate: LIME (Local Interpretable Model-agnostic Explanations). Fit a simple interpretable model to the black-box predictions on perturbed copies of a single instance, to explain that one prediction
  • Shapley Value (SHAP): measure each feature’s marginal contribution by adding it to and removing it from every subset of the other features. The Shapley value for a feature is the weighted average of these marginal contributions
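
A sketch of PDP and ICE plots via scikit-learn’s PartialDependenceDisplay; the synthetic data and gradient-boosting model are placeholders:

```python
import matplotlib.pyplot as plt
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import PartialDependenceDisplay

X, y = make_regression(n_samples=300, n_features=4, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X, y)

# kind="both" overlays the ICE lines (one per sample) with their
# average, which is the partial dependence curve.
PartialDependenceDisplay.from_estimator(model, X, features=[0, 1], kind="both")
plt.show()
```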
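A sketch of permutation feature importance with scikit-learn; the breast-cancer dataset, random forest, and F1 scoring are illustrative choices:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

# Shuffle each feature column on held-out data and measure the drop
# in the chosen metric; a bigger drop means a more important feature.
result = permutation_importance(model, X_te, y_te, scoring="f1",
                                n_repeats=10, random_state=0)
for i in result.importances_mean.argsort()[::-1][:5]:
    print(i, round(result.importances_mean[i], 4))
```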

MLOps

Model evaluation

  • Classification:
    • Recall: TP / (TP + FN). wiki
    • Receiver operating characteristic (ROC): relates true positive rate (y-axis) and false positive rate (x-axis). TPR = TP / (TP + FN) and FPR = FP / (FP + TN) (see the ROC sketch after this list). scikit
  • Regression
    • R2: proportion of variance in the label explained by the model; measures the strength of a linear relationship and can be near 0 for nonlinear ones. Never decreases on training data as features are added. scikit
  • Learning curves: scikit tutorial. Learning curve example
  • Overfitting and regularization
    • Overfitting (high variance) options: more data, increase regularization, or decrease model complexity. tutorial
    • Underfitting (high bias) options: decrease regularization, increase model complexity
    • Lasso regression: linear-model regularization that tends to prefer solutions with fewer non-zero coefficients: min_w (1/(2n)) * ||Xw - y||_2^2 + alpha * ||w||_1. scikit tutorial
    • Ridge regression: imposes an L2 penalty on the size of the coefficients: min_w ||Xw - y||_2^2 + alpha * ||w||_2^2. scikit (see the Lasso/Ridge sketch after this list)
    • Validation curve: scikit. Validation curve example
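
A sketch of building a ROC curve from predicted probabilities; the synthetic dataset and logistic regression are placeholders:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, roc_curve
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
probs = LogisticRegression(max_iter=1000).fit(X_tr, y_tr).predict_proba(X_te)[:, 1]

# Each threshold on the predicted scores yields one (FPR, TPR) point.
fpr, tpr, thresholds = roc_curve(y_te, probs)
print(roc_auc_score(y_te, probs))  # area under the curve
```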
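A sketch contrasting L1 (Lasso) and L2 (Ridge) regularization on a toy problem where only one feature matters; the alpha values are arbitrary:

```python
import numpy as np
from sklearn.linear_model import Lasso, Ridge

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
y = X[:, 0] * 3.0 + 0.1 * rng.normal(size=200)  # only feature 0 matters

# L1 drives irrelevant coefficients exactly to zero;
# L2 only shrinks them toward zero.
lasso = Lasso(alpha=0.1).fit(X, y)
ridge = Ridge(alpha=1.0).fit(X, y)
print(np.count_nonzero(lasso.coef_), np.count_nonzero(ridge.coef_))
```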

Preprocessing

  • scikit
  • Analysis
    1. Remove duplicates
    2. SOCS of each feature: Shape (skew), Outliers, Center, Spread
    3. Feature correlation
  • Production pipeline (see the pipeline sketch after this list)
    1. Outliers: remove or apply non-linear transformations
    2. Missing values and class imbalance
      • SMOTE (for imbalanced classes): generate a synthetic minority-class point on the vector between a minority point and one of its k nearest minority neighbors, placed a random fraction in [0, 1] of the way from the original point. Parameterized with k_neighbors (see the SMOTE sketch after this list). tutorial
    3. Standardization
    4. Discretization
    5. Encoding categorical features
    6. Generating polynomial features
    7. Dimensionality reduction
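
A minimal NumPy sketch of the SMOTE interpolation step described above (real use would typically go through imbalanced-learn); smote_sample is a hypothetical helper:

```python
import numpy as np

def smote_sample(X_minority, k_neighbors=5, rng=None):
    """Generate one synthetic point per minority sample by
    interpolating toward a random one of its k nearest neighbors."""
    if rng is None:
        rng = np.random.default_rng(0)
    synthetic = []
    for x in X_minority:
        # Distances to all minority points; index 0 is the point itself.
        d = np.linalg.norm(X_minority - x, axis=1)
        neighbors = np.argsort(d)[1:k_neighbors + 1]
        nb = X_minority[rng.choice(neighbors)]
        # New point lies a random fraction of the way along the vector.
        synthetic.append(x + rng.uniform(0, 1) * (nb - x))
    return np.array(synthetic)
```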
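A sketch of pipeline steps 3 and 5-7 wired together with scikit-learn; the column split and toy matrix are hypothetical, and discretization (step 4, e.g. KBinsDiscretizer) is omitted for brevity:

```python
import numpy as np
from sklearn.compose import ColumnTransformer
from sklearn.decomposition import PCA
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, PolynomialFeatures, StandardScaler

numeric, categorical = [0, 1, 2], [3]  # hypothetical column indices

pipeline = Pipeline([
    ("prep", ColumnTransformer([
        ("num", Pipeline([
            ("scale", StandardScaler()),              # step 3: standardization
            ("poly", PolynomialFeatures(degree=2)),   # step 6: polynomial features
        ]), numeric),
        # sparse_output=False keeps everything dense so PCA can run.
        ("cat", OneHotEncoder(handle_unknown="ignore",
                              sparse_output=False), categorical),  # step 5
    ])),
    ("pca", PCA(n_components=5)),                     # step 7: dim. reduction
])

X = np.array([[1.0, 2.0, 3.0, 0], [4.0, 5.0, 6.0, 1],
              [7.0, 8.0, 9.0, 0], [2.0, 3.0, 4.0, 1],
              [5.0, 6.0, 7.0, 0], [8.0, 9.0, 1.0, 1]])
print(pipeline.fit_transform(X).shape)  # (6, 5)
```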

Reinforcement Learning

  • Reinforcement learning

SQL

  • Window functions, e.g. ROW_NUMBER() OVER (PARTITION BY …): tutorial (see the sketch after this list)
  • COALESCE(): evaluates its arguments in order and returns the first one that does not evaluate to NULL (see the sketch after this list). tutorial
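
A sketch of ROW_NUMBER() with PARTITION BY, run through Python’s sqlite3 so it is self-contained (needs SQLite >= 3.25 for window functions); the sales table is made up:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE sales (rep TEXT, region TEXT, amount INT);
INSERT INTO sales VALUES
  ('ann', 'east', 100), ('bob', 'east', 200),
  ('cat', 'west', 150), ('dan', 'west', 50);
""")

# ROW_NUMBER() restarts at 1 within each PARTITION BY group,
# here ranking reps by amount inside each region.
rows = conn.execute("""
    SELECT region, rep, amount,
           ROW_NUMBER() OVER (PARTITION BY region ORDER BY amount DESC) AS rn
    FROM sales
""").fetchall()
print(rows)
```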
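A sketch of COALESCE on the same kind of in-memory SQLite setup; the users table is made up:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE users (name TEXT, nickname TEXT, email TEXT);
INSERT INTO users VALUES ('Ann', NULL, 'ann@example.com'),
                         ('Bob', 'Bobby', NULL);
""")

# COALESCE returns the first non-NULL argument, left to right:
# Ann has no nickname, so her name is used instead.
print(conn.execute(
    "SELECT name, COALESCE(nickname, name, 'unknown') FROM users"
).fetchall())
```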

Statistics

  • Statology tutorial
  • Means
    • Arithmetic: wolfram
    • Geometric: used in finance to average growth rates; applied to annual growth factors it gives the compound annual growth rate (CAGR) (see the means sketch after this list). wolfram
    • Harmonic: used in finance to average multiples like the price-earnings ratio because it gives equal weight to each data point. A weighted arithmetic mean would overweight high ratios, because P/E ratios are not price-normalized while the earnings are equalized. wolfram
  • Probability distributions: describe with the acronym SOCS (shape, outliers, center, spread). Comparison article
  • Sample variance: divide by n - 1 for an unbiased estimator, because one degree of freedom is used to estimate the sample mean. tutorial
  • Tests: selecting a statistical test. Source: Statistical Rethinking 2, free chapter 1
    • ANOVA: Analysis of variance compares the means of three or more independent groups to determine if there is a statistically significant difference between the corresponding population means. Statology tutorial
    • F-statistic: determines whether to reject a reduced model (R) in favor of a full model (F). Reject the reduced model if F is large, or equivalently if its associated p-value is small: F = [(SSE_R - SSE_F) / (df_R - df_F)] / [SSE_F / df_F]. tutorial
    • Linear regression coefficient CI: t-interval for the slope beta_1 is b_1 ± t_(alpha/2, n-2) * SE(b_1) (see the t sketch after this list). tutorial
    • T-test: one-sample statistic t = (x_bar - mu_0) / (s / sqrt(n)). tutorial
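
A worked sketch of the geometric and harmonic means using Python’s statistics module (Python 3.8+); the growth rates and P/E ratios are made-up numbers:

```python
from statistics import fmean, geometric_mean, harmonic_mean

# Geometric mean of annual growth factors gives the CAGR factor.
growth = [1.10, 0.95, 1.20]  # +10%, -5%, +20%
print(geometric_mean(growth))  # ~1.077, i.e. ~7.7% CAGR

# Harmonic mean weights each P/E ratio equally; the arithmetic
# mean would overweight the high-P/E holding.
pe_ratios = [10, 40]
print(harmonic_mean(pe_ratios), fmean(pe_ratios))  # 16.0 vs 25.0
```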
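A sketch of the t-interval for a regression slope and a one-sample t-test, assuming SciPy is available; the simulated data is a placeholder:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.normal(size=50)
y = 2.0 * x + rng.normal(size=50)

# Slope CI: b_1 +/- t_(alpha/2, n-2) * SE(b_1), using linregress.
res = stats.linregress(x, y)
t_crit = stats.t.ppf(0.975, df=len(x) - 2)
print(res.slope - t_crit * res.stderr, res.slope + t_crit * res.stderr)

# One-sample t-test: t = (x_bar - mu_0) / (s / sqrt(n)).
print(stats.ttest_1samp(x, popmean=0.0))
```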