Course description
Instructor
Fabio A. González
Maestría en Ingeniería de Sistemas y Computación
Universidad Nacional de Colombia
Course goal
The main goal of Machine Learning (ML) is the development of systems that are able to autonomously change their behavior based on experience. ML offers some of the most effective techniques for knowledge discovery in large data sets. ML has played a fundamental role in areas such as bioinformatics, information retrieval, business intelligence and autonomous vehicle development.
The main goal of this course is to study the computational, mathematical and statistical foundations of ML, which are essential for the theoretical analysis of existing learning algorithms, the development of new algorithms and the well-founded application of ML to solve real-world problems.
Course topics
1 Learning Foundations
1.1 Introduction
1.2 Bayesian decision theory
1.3 Estimation
1.4 Linear models
1.5 Design and analysis of ML experiments
2 Kernel Methods
2.1 Kernel methods basics
2.2 Support vector learning
3 Neural Networks
3.1 Neural networks basics
3.2 Deep learning
3.3 Convolutional neural networks
3.4 Recurrent neural networks
3.5 Deep generative models
4 Probabilistic Programming
4.1 Bayesian Methods
4.2 Monte Carlo inference
4.3 Variational Bayes
Evaluation and grading policy
- Assignments 30%
- Quizzes 20%
- Exam 30%
- Final project 20%
Course resources
References
- [Alp14] Alpaydin, E. Introduction to Machine Learning, 3rd Ed. The MIT Press, 2014.
- [Mur12] Murphy, K. P. Machine Learning: A Probabilistic Perspective. The MIT Press, 2012.
- [Bar13] Barber, D. Bayesian Reasoning and Machine Learning. Cambridge University Press, 2013.
- [Bis06] Bishop, C. Pattern Recognition and Machine Learning. Springer-Verlag, 2006.
- [HTF09] Hastie, T., Tibshirani, R. and Friedman, J. The Elements of Statistical Learning: Data Mining, Inference, and Prediction. Springer, 2009.
- [GBC16] Goodfellow, I., Bengio, Y. and Courville, A. Deep Learning. The MIT Press, 2016.
- [Mit97] Mitchell, T. M. Machine Learning, 1st Ed. McGraw-Hill Higher Education, 1997.
- [DHS00] Duda, R. O., Hart, P. E. and Stork, D. G. Pattern Classification, 2nd Ed. Wiley-Interscience, 2000.
- [SC04] Shawe-Taylor, J. and Cristianini, N. Kernel Methods for Pattern Analysis. Cambridge University Press, 2004.
- [SS02] Schölkopf, B. and Smola, A. J. Learning with Kernels. The MIT Press, 2002.
- [OCW-ML] 6.867 Machine Learning, Fall 2006, MIT OpenCourseWare.
- [STANFD-ML] Andrew Ng, CS229 Machine Learning, Stanford University.
Additional resources
- SciPy: scientific, mathematical, and engineering package for Python
- scikit-learn: machine learning library built on top of SciPy (see the sketch after this list)
- Kaggle: data science competition platform with many interesting data sets and competitions with prizes.
- Coursera Machine Learning Course: one of the first (and still one of the best) machine learning MOOCs, taught by Andrew Ng.
- Stanford Statistical Learning Course: an introductory course with a focus on supervised learning, taught by Trevor Hastie and Rob Tibshirani.
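A minimal sketch of the typical scikit-learn workflow (load a data set, fit an estimator, evaluate on held-out data). The choice of the iris data set and of an SVM classifier here is purely illustrative and is not part of the course material.

```python
# Illustrative scikit-learn sketch: fit a kernel SVM on a toy data set
# and report held-out accuracy (assumes scikit-learn is installed).
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

# Load a small data set bundled with scikit-learn
X, y = load_iris(return_X_y=True)

# Hold out a test split for an honest performance estimate
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# Train a support vector classifier (cf. topic 2.2, support vector learning)
clf = SVC(kernel="rbf", C=1.0, gamma="scale")
clf.fit(X_train, y_train)

# Evaluate on the held-out data
print("test accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```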
Course schedule
| Week | Topic | Material | Assignments |
|---|---|---|---|
| Apr 3 | 1.1 Introduction | Brief Introduction to ML (slides); Linear Algebra and Probability Review (part 1 Linear Algebra, part 2 Probability) | Assignment 1 |
| Apr 10 | 1.2 Bayesian decision theory | [Alp14] Chap 3 (slides) | |
| Apr 17 | 1.3 Estimation | [Alp14] Chap 4, 5 (slides); Bias and variance (Jupyter notebook) | Assignment 2 |
| Apr 24 / May 1 | 1.5 Design and analysis of ML experiments | [Alp14] Chap 19 (slides) | |
| May 8 | 2.1 Kernel methods basics | Introduction to kernel methods (slides); [Alp14] Chap 13 (slides) | |
| May 15 | 2.2 Support vector learning | [Alp14] Chap 13 (slides); An Introduction to ML, Smola; Support Vector Machine Tutorial, Weston; Support vector machines and model selection (Jupyter notebook) | Assignment 3 |
| May 22 | 3.1 Neural network basics | [Alp14] Chap 11 (slides); Quick and dirty introduction to neural networks (Jupyter notebook); Backpropagation derivation handout | |
| May 29 | 3.2 Deep learning | Representation Learning and Deep Learning (slides); Representation Learning and Deep Learning Tutorial | |
| Jun 5 | 3.2 Deep learning | Deep learning frameworks (slides); Introduction to TensorFlow (Jupyter notebook); Neural Networks in Keras (Jupyter notebook) | |
| Jun 12 | 3.3 Convolutional neural networks | CNN for image classification in Keras (Jupyter notebook); ConvNetJS demos; Feature visualization | In-class Assignment 1 |
| Jun 19 | 3.4 Recurrent neural networks | CNN for text classification handout; LSTM language model handout | Assignment 4 |
| Jun 26 | 3.5 Deep generative models | Alexander Amini, Deep generative models (slides, video) (from MIT 6.S191) | |
| Jul 3 | 4.1 Bayesian Methods, 4.2 Monte Carlo inference | Radford M. Neal, Bayesian Methods for Machine Learning (slides); Beery et al., Markov Chain Monte Carlo for Machine Learning, Adv. Topics in ML, Caltech (slides); Alex Rogozhnikov, Hamiltonian Monte Carlo explained | |
| Jul 10 | 4.3 Variational Bayes | Variational Bayes in TensorFlow; Variational Autoencoders in TensorFlow | |