
Scientific courses - MAP566A: Introduction to Machine Learning

Domain > Applied Mathematics.

Description

The lecture will mostly be based on the book "Probabilistic Machine Learning: An Introduction" by Kevin Murphy. The following topics will be discussed, for a total of 28 sessions of 2h each.

General theory (4 sessions)
- Introduction to machine learning
- Elements of statistical learning

Linear methods (8 sessions)
- Linear methods for regression (4 sessions): least squares linear regression; regularization to avoid overfitting: ridge and lasso; model selection
- Linear methods for classification (3 sessions): linear discriminant analysis, logistic regression, including SGD for performing maximum likelihood estimation
- Linear methods for unsupervised learning (1 session): PCA and an introduction to factor analysis
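As a flavor of the from-scratch approach used in the hands-on sessions, here is a minimal illustrative sketch (not actual course material; the data and penalty value are invented) of ridge regression in one dimension. For centered data with no intercept, the closed-form estimate makes explicit how the penalty shrinks the least-squares slope:

```python
# Toy from-scratch sketch of ridge regression in one dimension
# (centered data, no intercept): the ridge estimate shrinks the
# least-squares slope toward zero as the penalty lam grows.

def ridge_slope(x, y, lam):
    """Closed-form estimate: argmin_w sum_i (y_i - w x_i)^2 + lam w^2."""
    return sum(xi * yi for xi, yi in zip(x, y)) / (sum(xi * xi for xi in x) + lam)

x = [-2.0, -1.0, 0.0, 1.0, 2.0]
y = [-4.1, -1.9, 0.2, 2.1, 3.9]       # roughly y = 2x plus noise

w_ols = ridge_slope(x, y, lam=0.0)    # ordinary least squares
w_ridge = ridge_slope(x, y, lam=5.0)  # penalized: smaller magnitude
print(w_ols, w_ridge)
```

Increasing `lam` shrinks the slope further toward zero, a simple instance of the regularization and bias-complexity tradeoff discussed in the model-selection sessions.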

Nonlinear methods for supervised learning (10 sessions)
- Decision trees and ensemble methods (3 sessions): classification and regression trees; ensemble learning, including random forests; boosting
- Nonparametric methods (3 sessions): Mercer kernels; Gaussian processes; support vector machines
- Neural networks (4 sessions): feedforward neural networks; training neural networks: computing gradients, using preconditioned gradient methods, regularization strategies, etc.
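To give a feel for the gradient computations listed above, here is a hedged from-scratch sketch (toy numbers invented for illustration) of backpropagation by hand for a one-hidden-unit network trained with squared loss, followed by a single plain gradient-descent step:

```python
import math

# Minimal sketch of backpropagation for the one-hidden-unit network
# f(x) = w2 * tanh(w1 * x) with squared loss L = (f - y)^2 / 2.

def loss_and_grads(w1, w2, x, y):
    h = math.tanh(w1 * x)              # hidden activation
    f = w2 * h                         # network output
    loss = 0.5 * (f - y) ** 2
    dfd = f - y                        # dL/df
    g2 = dfd * h                       # dL/dw2
    g1 = dfd * w2 * (1 - h * h) * x    # dL/dw1 via the chain rule
    return loss, g1, g2

# One plain gradient-descent step on a single toy sample
w1, w2, lr = 0.5, 1.0, 0.1
loss0, g1, g2 = loss_and_grads(w1, w2, x=1.0, y=2.0)
w1, w2 = w1 - lr * g1, w2 - lr * g2
loss1, _, _ = loss_and_grads(w1, w2, x=1.0, y=2.0)
print(loss0, loss1)                    # the loss decreases after the step
```

The same chain-rule bookkeeping, automated and vectorized, is what frameworks such as PyTorch perform when computing gradients.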

Nonlinear methods for unsupervised learning (6 sessions)
- Clustering methods (3 sessions): k-nearest neighbors; k-means; hierarchical clustering; Gaussian mixture models
- Nonlinear dimension reduction techniques (3 sessions): autoencoders; manifold learning
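As an illustration of the clustering topic, here is a from-scratch sketch (1-D toy data invented for the purpose, not actual course material) of Lloyd's algorithm for k-means with k = 2: alternate assigning each point to its nearest center and recomputing each center as the mean of its assigned points:

```python
# From-scratch sketch of Lloyd's algorithm (k-means) on 1-D toy data.

def kmeans_1d(points, centers, n_iter=10):
    for _ in range(n_iter):
        clusters = [[] for _ in centers]
        for p in points:                         # assignment step
            i = min(range(len(centers)), key=lambda j: (p - centers[j]) ** 2)
            clusters[i].append(p)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]  # update step
    return centers

data = [0.0, 0.2, 0.4, 9.8, 10.0, 10.2]   # two well-separated groups
centers = kmeans_1d(data, centers=[0.0, 5.0])
print(centers)                             # converges to roughly [0.2, 10.0]
```

On such well-separated data the algorithm converges in one iteration; the hands-on sessions examine what happens with overlapping clusters and poor initializations.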

Learning objectives

This lecture provides an introduction to various concepts and algorithms in Machine Learning -- a field where data is used to gain experience or make predictions (as opposed to fields where data is produced as a result of modeling, e.g. when modeling natural phenomena using partial differential equations). Tasks include classification, regression, ranking, clustering, and dimensionality reduction. This can be done in a supervised or unsupervised manner, depending on whether the data is labelled or not, and sometimes in an active way using reinforcement learning.

From a mathematical viewpoint, Machine Learning is a very broad field: it draws on linear algebra and optimization to devise numerical methods, and on techniques from scientific computing and computer science to implement efficient algorithms. From a more abstract perspective, elements of statistical learning theory provide rigorous foundations for the various methods at hand, by formalizing concepts such as risk minimization, the bias-complexity tradeoff, overfitting, and cross-validation.

Hands-on sessions in the form of Jupyter notebooks complement the lecture. They allow students to assess the successes and limitations of the most popular methods, first on synthetic toy data (for ease of visualization and a complete understanding of the behavior of the algorithms), and then on more realistic datasets such as MNIST or, in order to depart somewhat from the models too often seen in introductory Machine Learning courses, examples from physics such as the Ising model.
From a practical perspective, simple methods will be implemented from scratch, while Scikit-learn will be used for more advanced techniques, except for neural network models, for which PyTorch will be used.
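In that from-scratch spirit, here is a minimal sketch (toy data invented for illustration, not actual course material) of stochastic gradient descent applied to the logistic-regression negative log-likelihood for 1-D inputs with labels in {0, 1}:

```python
import math, random

# From-scratch sketch: SGD on the logistic-regression negative
# log-likelihood, one gradient step per sample, shuffled each epoch.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def sgd_logreg(xs, ys, lr=0.5, epochs=200, seed=0):
    rng = random.Random(seed)
    w, b = 0.0, 0.0
    idx = list(range(len(xs)))
    for _ in range(epochs):
        rng.shuffle(idx)
        for i in idx:
            p = sigmoid(w * xs[i] + b)
            g = p - ys[i]              # gradient of the NLL wrt the logit
            w -= lr * g * xs[i]
            b -= lr * g
    return w, b

xs = [-2.0, -1.5, -1.0, 1.0, 1.5, 2.0]
ys = [0, 0, 0, 1, 1, 1]                # linearly separable toy labels
w, b = sgd_logreg(xs, ys)
acc = sum((sigmoid(w * x + b) > 0.5) == (y == 1)
          for x, y in zip(xs, ys)) / len(xs)
print(acc)
```

On this easy separable set the classifier reaches full training accuracy; the notebooks then move on to the behavior of SGD on harder, non-separable data.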

For students of the M1 Applied Mathematics and statistics program

Prerequisites: basic courses in probability and statistics.

Grading format

Numerical grade out of 20

Letter grade / reduced scale

For students of the M2 Machine Learning, Communications and Security program

For students of the M1 Applied Mathematics and statistics program

The retake exam is allowed (the higher of the two grades is kept)
    The course unit is validated if the transposed final grade is >= C
    • ECTS credits earned: 7.5 ECTS
