The main goal of Machine Learning (ML) is to develop systems that autonomously improve their behavior based on experience. ML offers some of the most effective techniques for knowledge discovery in large data sets, and it has played a fundamental role in areas such as bioinformatics, information retrieval, business intelligence, and autonomous vehicle development.
The main goal of this course is to study the computational, mathematical, and statistical foundations of ML, which are essential for the theoretical analysis of existing learning algorithms, the development of new ones, and the well-founded application of ML to real-world problems.
1 Learning Foundations
1.1 Introduction
1.2 Bayesian decision theory
1.3 Estimation
1.4 Linear models
1.5 Design and analysis of ML experiments
2 Kernel Methods
2.1 Kernel methods basics
2.2 Support vector learning
3 Neural Networks
3.1 Neural networks basics
3.2 Deep learning
3.3 Convolutional neural networks
3.4 Recurrent neural networks
3.5 Deep generative models
4 Probabilistic Programming
4.1 Bayesian Methods
4.2 Monte Carlo inference
4.3 Variational Bayes
Week | Topic | Material | Assignments |
---|---|---|---|
Apr 3 | 1.1 Introduction | Brief Introduction to ML (slides); Linear Algebra and Probability Review (part 1: Linear Algebra; part 2: Probability) | Assignment 1 |
Apr 10 | 1.2 Bayesian decision theory | [Alp14] Chap 3 (slides) | |
Apr 17 | 1.3 Estimation | [Alp10] Chap 4, 5 (slides); Bias and variance (Jupyter notebook) | Assignment 2 |
Apr 24 / May 1 | 1.5 Design and analysis of ML experiments | [Alp10] Chap 19 (slides) | |
May 8 | 2.1 Kernel methods basics | Introduction to kernel methods (slides); [Alp10] Chap 13 (slides) | |
May 15 | 2.2 Support vector learning | [Alp10] Chap 13 (slides); An Introduction to ML, Smola; Support Vector Machine Tutorial, Weston; Support vector machines and model selection (Jupyter notebook) | Assignment 3 |
May 22 | 3.1 Neural network basics | [Alp10] Chap 11 (slides); Quick and dirty introduction to neural networks (Jupyter notebook); Backpropagation derivation handout | |
May 29 | 3.2 Deep learning | Representation Learning and Deep Learning (slides); Representation Learning and Deep Learning Tutorial | |
Jun 5 | 3.2 Deep learning | Deep learning frameworks (slides); Introduction to TensorFlow (Jupyter notebook); Neural Networks in Keras (Jupyter notebook) | |
Jun 12 | 3.3 Convolutional neural networks | CNN for image classification in Keras (Jupyter notebook); ConvNetJS demos; Feature visualization | In-class Assignment 1 |
Jun 19 | 3.4 Recurrent neural networks | CNN for text classification handout; LSTM language model handout | Assignment 4 |
Jun 26 | 3.5 Deep generative models | Alexander Amini, Deep generative models (slides, video) (from MIT 6.S191) | |
Jul 3 | 4.1 Bayesian Methods; 4.2 Monte Carlo inference | Radford M. Neal, Bayesian Methods for Machine Learning (slides); Beery et al., Markov Chain Monte Carlo for Machine Learning, Adv Topics in ML, Caltech (slides); Alex Rogozhnikov, Hamiltonian Monte Carlo explained | |
Jul 10 | 4.3 Variational Bayes | Variational Bayes in TensorFlow; Variational Autoencoders in TensorFlow | |