Instructors: Prof. Dr. Sören Laue
Event type:
Lecture
Displayed in timetable as:
ML-VL
Hours per week:
4
Credits:
6.0
Language of instruction:
English
Min. | Max. participants:
- | 125
Comments/contents:
- Formal Foundations of Machine Learning (Minimization of Functions, Convexity, Underfitting, Overfitting, Model Complexity, Bias-Variance Tradeoff, Regularization, Maximum Likelihood, Maximum A Posteriori Principle, Empirical Risk Minimization, Regularized Risk Minimization)
- Supervised Learning for Regression and Classification
- Linear Methods, Basis Functions, Kernel Methods
- Logistic Regression, SVMs (Support Vector Machines)
- Naive Bayes
- Decision Trees, Random Forest
- k-Nearest Neighbor
- Robust Regression
- Linear and Quadratic Discriminant Analysis
- Methods of Unsupervised Learning
- Dimension Reduction (PCA - Principal Component Analysis, Multidimensional Scaling)
- Clustering (k-means)
- Recommender Systems (Matrix Factorization)
- Introduction to Neural Networks
Learning objectives:
- In-depth knowledge of various approaches to learning from data, including an understanding of their respective limitations
- Ability to understand and apply the underlying ML (Machine Learning) theory
- Ability to comparatively evaluate learning methods in terms of specific application conditions
- Ability to systematically classify new methods
- Ability to design, implement, and evaluate a learning system for a given task
- Ability to present empirical findings in the field of algorithmic learning
Literature:
- Pattern Recognition and Machine Learning. Christopher M. Bishop, Springer, 2006. (online)
- The Elements of Statistical Learning. Trevor Hastie, Robert Tibshirani, and Jerome Friedman, Springer, 2009. (online)
- Deep Learning. Ian Goodfellow, Yoshua Bengio, and Aaron Courville, MIT Press, 2016. (online)