Yet Another Machine Learning (YAML) Course, by Jacob Whitehill (jrwhitehill@wpi.edu)
These are the lecture slides from a machine learning course (CS453X) I taught for the first time in 2018 D-term.
- Lecture 1 (introduction to ML, accuracy & loss functions): PDF, Keynote
- Lecture 2 (greedy step-wise classification, training versus testing): PDF, Keynote
- Lecture 3 (linear regression): PDF, Keynote
- Lecture 4 (more on linear regression): PDF, Keynote
- Lecture 5 (gradient descent): PDF, Keynote
- Lecture 6 (polynomial regression, overfitting): PDF, Keynote
- Lecture 7 (regularization, logistic regression): PDF, Keynote
- Lecture 8 (softmax regression, cross-entropy): PDF, Keynote
- Lecture 9 (stochastic gradient descent, convexity): PDF, Keynote
- Lecture 10 (positive semi-definiteness, constrained optimization): PDF, Keynote
- Lecture 11 (support vector machines): PDF, Keynote
- Lecture 12 (soft versus hard margin SVM, linear separability): PDF, Keynote
- Lecture 13 (kernelization): PDF, Keynote
- Lecture 14 (more on kernelization): PDF, Keynote
- Lecture 15 (Gaussian RBF kernel, nearest neighbors): PDF, Keynote
- Lecture 16 (principal component analysis): PDF, Keynote
- Lecture 17 (k-means): PDF, Keynote
- Lecture 18 (introduction to neural networks): PDF, Keynote
- Lecture 19 (more on neural networks, XOR problem): PDF, Keynote
- Lecture 20 (gradient descent for neural networks, Jacobian matrices): PDF, Keynote
- Lecture 21 (chain rule and backpropagation): PDF, Keynote
- Lecture 22 (L1 and L2 regularization, dropout): PDF, Keynote
- Lecture 23 (unsupervised pre-training, auto-encoders): PDF, Keynote
- Lecture 24 (convolution, pooling): PDF, Keynote
- Lecture 25 (convolutional neural networks, recurrent neural networks): PDF, Keynote
- Lecture 26 (practical suggestions): PDF, Keynote
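As a small taste of the material covered in Lectures 3-5, here is a minimal sketch (written for this page, not taken from the slides) of gradient descent for linear regression in NumPy; the synthetic data and hyperparameters are illustrative assumptions:

```python
import numpy as np

# Synthetic data: y = 2x + 1 plus a little noise (illustrative only).
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(100, 1))
y = 2 * X[:, 0] + 1 + 0.1 * rng.standard_normal(100)

# Design matrix with a bias column appended.
Xb = np.hstack([X, np.ones((100, 1))])
w = np.zeros(2)      # weights: [slope, intercept]
lr = 0.5             # learning rate

# Gradient descent on mean squared error: L(w) = (1/2n) ||Xb w - y||^2
for _ in range(500):
    grad = Xb.T @ (Xb @ w - y) / len(y)
    w -= lr * grad

print(w)  # should be close to [2, 1]
```

The same loop structure carries over to logistic and softmax regression (Lectures 7-8); only the loss and hence the gradient expression change.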