Description
This course is an introduction to Bayesian methods for machine learning. In the
first part, the main ingredients of Bayesian thinking are presented, and typical situations
where a Bayesian treatment of the learning task is useful are exemplified. Particular
attention is paid to the links between regularization methods and the specification of
a prior distribution. The second part of the course addresses the computational
challenges of Bayesian analysis. Major approximation methods, such as variational
Bayes, Markov chain Monte Carlo (MCMC) sampling and sequential sampling schemes, are
introduced and implemented in the lab sessions.
Format
6 × 3.5-hour sessions + exam
Programming language
R
Grading
mini-project (lab) (40%) + written exam (60%)
Syllabus
Week 1: Bayesian learning: basics.
Bayesian model, prior-posterior, examples (see the R sketch after the reading list).
Point and interval estimation.
Prior choice, examples, the exponential family.
A glimpse at asymptotics and at computational challenges.
Reading: Berger (2013), chapter 1; Bishop (2006), chapters 1 and 2; Robert
(2007), chapter 1; Ghosh et al. (2007), chapter 2; and Robert and Casella (2010),
chapter 1, for basic R programming.
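As a minimal illustration of Week 1's prior-to-posterior update, the sketch below uses the Beta-Binomial conjugate pair; the hyperparameters and data are made up for illustration and are not part of the course materials.

  # Beta-Binomial conjugate update: prior Beta(a, b), likelihood Binomial(n, theta).
  # The hyperparameters and data below are illustrative.
  a <- 2; b <- 2            # prior Beta(2, 2): mild belief that theta is near 1/2
  n <- 20; k <- 14          # observed data: 14 successes out of 20 trials

  # Conjugacy: the posterior is Beta(a + k, b + n - k)
  a_post <- a + k
  b_post <- b + n - k

  # Point estimate (posterior mean) and a 95% credible interval
  post_mean <- a_post / (a_post + b_post)
  cred_int  <- qbeta(c(0.025, 0.975), a_post, b_post)

  curve(dbeta(x, a, b), from = 0, to = 1, lty = 2, ylab = "density")  # prior
  curve(dbeta(x, a_post, b_post), add = TRUE)                         # posterior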
Week 2: Bayesian modeling and decision theory
Naïve Bayes, k-nearest neighbours (KNN).
Bayesian linear regression (see the R sketch after the reading list).
Bayesian decision theory.
Reading: Bishop (2006), chapter 3; Berger (2013), chapter 4; Robert (2007),
chapter 2.
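A minimal sketch of Bayesian linear regression with a conjugate Gaussian prior on the weights, in the spirit of Bishop (2006), chapter 3; the synthetic data and the assumption of known prior and noise precisions are illustrative choices, not the course's.

  # Bayesian linear regression with a conjugate Gaussian prior on the weights.
  # Assumes known noise precision `beta` and prior precision `alpha` (illustrative).
  set.seed(1)
  n <- 50
  x <- runif(n, -1, 1)
  y <- 0.5 + 2 * x + rnorm(n, sd = 0.3)   # synthetic data, true weights (0.5, 2)

  X <- cbind(1, x)                        # design matrix with intercept
  alpha <- 1                              # prior: w ~ N(0, alpha^{-1} I)
  beta  <- 1 / 0.3^2                      # likelihood noise precision

  # Posterior: w | y ~ N(m, S) with S^{-1} = alpha I + beta X'X, m = beta S X'y
  S_inv <- alpha * diag(2) + beta * crossprod(X)
  S <- solve(S_inv)
  m <- beta * S %*% crossprod(X, y)

  m   # posterior mean of the weights; note the link to regularization:
      # m is the ridge-regression estimate with penalty lambda = alpha / beta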
Week 3: Lab session
Week 4: Approximation methods
EM and variational Bayes, examples (see the R sketch after the reading).
Reading: Bishop (2006), chapter 10.
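To make the E- and M-steps concrete, here is a minimal EM sketch for a two-component univariate Gaussian mixture; the synthetic data and initial values are arbitrary illustrative choices.

  # EM for a two-component univariate Gaussian mixture (illustrative sketch).
  set.seed(2)
  y <- c(rnorm(100, -2, 1), rnorm(100, 2, 1))   # synthetic data, two components

  # Initial parameter guesses (arbitrary)
  pi1 <- 0.5; mu <- c(-1, 1); sd_ <- c(1, 1)

  for (iter in 1:100) {
    # E-step: responsibility of component 1 for each point
    d1 <- pi1 * dnorm(y, mu[1], sd_[1])
    d2 <- (1 - pi1) * dnorm(y, mu[2], sd_[2])
    r1 <- d1 / (d1 + d2)

    # M-step: re-estimate mixture weight, means and standard deviations
    pi1 <- mean(r1)
    mu[1] <- sum(r1 * y) / sum(r1)
    mu[2] <- sum((1 - r1) * y) / sum(1 - r1)
    sd_[1] <- sqrt(sum(r1 * (y - mu[1])^2) / sum(r1))
    sd_[2] <- sqrt(sum((1 - r1) * (y - mu[2])^2) / sum(1 - r1))
  }
  c(pi1 = pi1, mu1 = mu[1], mu2 = mu[2])   # should be near (0.5, -2, 2)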
Week 5: Sampling methods
Monte Carlo methods, importance sampling, MCMC (Metropolis-Hastings and
Gibbs), examples (see the R sketch after the reading).
If time allows: sequential methods (particle filtering).
Reading: Robert and Casella (2010), (parts of) chapters 3, 4, 6, 7 and 8.
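A minimal random-walk Metropolis-Hastings sketch; the target density (a standard normal known only up to its normalizing constant) and the proposal scale are illustrative assumptions.

  # Random-walk Metropolis-Hastings; the target (standard normal, known only up
  # to a constant) is illustrative: any log-density could be plugged in.
  set.seed(3)
  log_target <- function(theta) -theta^2 / 2

  n_iter <- 10000
  theta <- numeric(n_iter)                     # chain starts at 0
  for (t in 2:n_iter) {
    prop <- theta[t - 1] + rnorm(1, sd = 1)    # symmetric Gaussian proposal
    if (log(runif(1)) < log_target(prop) - log_target(theta[t - 1])) {
      theta[t] <- prop                         # accept the proposal
    } else {
      theta[t] <- theta[t - 1]                 # reject: keep the current state
    }
  }
  c(mean = mean(theta[-(1:1000)]), sd = sd(theta[-(1:1000)]))  # near (0, 1) after burn-in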
Week 6: Lab session
Approximation and sampling methods.
Degree(s) concerned
Grade format
Numerical grade out of 20; letter/reduced grade.
For students of the Data Sciences degree: the resit exam is allowed (the higher of the two grades is kept). ECTS credits awarded: 2.5.