To get familiar
with various topics within the field of probabilistic graphical models
(PGMs) by reading and discussing research papers.
Format
There will be 4 groups formed, each consisting of 2-4 students:
Each group will discuss the papers within one chosen topic in
meetings (possibly outside the course hours) scheduled with the lecturer.
The topic must be worked out in a survey paper, in which you summarise the topic and express your own views about it (max. 10 pages, 10pt font size, A4).
The paper can be written individually or in groups. In the latter case, each student should make clear his/her contribution to the paper.
The survey paper should be submitted before 21st June 2017, 24:00, via email to Peter Lucas (plucas AT liacs.nl).
Note: you will get a mark for the survey paper!
Topics
Philosophical foundations of reasoning with uncertainty
Book: BAI 1.1-1.5
Uncertainty in AI systems: an overview (Chapter 1; Pearl, 1988) [pdf]
Why isn't everyone a Bayesian? (Efron, 1986) [pdf]
Judgement under uncertainty: heuristics and biases (Tversky and Kahneman, 1974) [pdf]
Languages and designs for probabilistic judgement (Shafer and Tversky, 1985) [pdf]
Subjective probability (Kahneman and Tversky, 1972) [pdf]
Fallacies in legal reasoning (Fenton et al., 2008) [pdf]
Structure learning of BNs
Book: BAI 6.3, chapter 8
First paper on structure learning (Cooper and Herskovits, 1996)
[pdf]
Learning as search (Castelo and Kocka, 2003) [pdf]
Use of genetic algorithms in BN structure learning (Larranaga et al., 1996) [pdf]
Constraint-based learning (Cheng et al., 2008) [pdf]
Constraint-based learning follow-up (Chickering and Meek, 2006) [pdf]
The max-min hill-climbing Bayesian network structure learning algorithm
(Tsamardinos, Brown and Aliferis, 2006) [pdf]
Decision networks and influence diagrams
Decision networks, also known as influence diagrams, extend Bayesian
networks to allow a compact representation of, and reasoning under
uncertainty about, complex decision situations. This is done by
providing a graphical representation of the interrelationships among
the decision maker's information (random variables), preferences
(utility variables) and actions (decision variables), together with a
probabilistic assessment of the expected utility given the quantified
decision basis. A small worked example follows the reading list below.
Book: BAI 4.1-4.4, 9.3.7
An anytime algorithm for evaluating unconstrained influence diagrams (Luque, Nielsen and Jensen) [pdf]
Influence diagrams (Howard and Matheson, 2005) [pdf]
A method for using Bayesian (belief) networks as influence diagrams (Cooper, 1988) [pdf]
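To make the expected-utility computation described above concrete, here is a minimal Python sketch of evaluating a toy decision network with one chance node, one decision node and one utility node. All names, probabilities and utilities are invented for illustration and are not taken from the papers listed.

    # Toy decision problem: one chance variable (disease), one decision
    # (treat/wait) and one utility table.  All numbers are invented.
    P_disease = {"present": 0.3, "absent": 0.7}

    utility = {                      # U(decision, disease state)
        ("treat", "present"): 80, ("treat", "absent"): 60,
        ("wait",  "present"): 10, ("wait",  "absent"): 100,
    }

    def expected_utility(decision):
        # EU(d) = sum over disease states s of P(s) * U(d, s)
        return sum(P_disease[s] * utility[(decision, s)] for s in P_disease)

    for d in ("treat", "wait"):
        print(d, expected_utility(d))           # treat: 66.0, wait: 73.0

    print("best decision:", max(("treat", "wait"), key=expected_utility))

In a real influence diagram the chance variables may depend on earlier decisions and observations; handling that efficiently is what the evaluation algorithms in the papers above are about.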
Probabilistic inference
During the lectures a broad overview of inference methods has been
discussed, focused somewhat on Pearl's algorithm and other exact
belief propagation methods. Of course, in the last couple of decades
much more work has been done in this area. Some of the more specialised
topics can be studied in the seminar; a small worked example of exact
inference follows the reading list below.
On probabilistic inference by weighted model counting (Chavira and Darwiche, 2008) [pdf]
LAZY propagation: a junction tree inference algorithm based on lazy evaluation (Madsen and Jensen, 1999) [pdf]
A simple approach to Bayesian network computations (Zhang and Poole, 1994) [pdf]
Exploiting causal independence in Bayesian network inference (Zhang and Poole, 1996) [pdf]
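As a concrete taste of exact inference, the following sketch sums out the hidden variables of a three-node chain A -> B -> C one at a time, in the spirit of the variable-elimination view taken by Zhang and Poole. The network and all CPT entries are invented for illustration.

    # Tiny exact-inference sketch for a chain A -> B -> C (all binary, 0/1).
    # CPT numbers are invented; the point is the sum-out (elimination) step.
    P_A = {0: 0.6, 1: 0.4}                                 # P(A)
    P_B_given_A = {(0, 0): 0.7, (1, 0): 0.3,               # P(B | A), key (B, A)
                   (0, 1): 0.2, (1, 1): 0.8}
    P_C_given_B = {(0, 0): 0.9, (1, 0): 0.1,               # P(C | B), key (C, B)
                   (0, 1): 0.4, (1, 1): 0.6}

    # Eliminate A first: f1(B) = sum_a P(a) * P(B | a)
    f1 = {b: sum(P_A[a] * P_B_given_A[(b, a)] for a in (0, 1)) for b in (0, 1)}

    # Then eliminate B: P(C) = sum_b f1(b) * P(C | b)
    P_C = {c: sum(f1[b] * P_C_given_B[(c, b)] for b in (0, 1)) for c in (0, 1)}

    print(P_C)   # the two values sum to 1

The order in which variables are summed out does not change the result, but it can change the cost dramatically, which is one of the recurring themes in the inference literature above.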
Probabilistic logics and relational learning
In recent years, several practical approaches have appeared that aim
to integrate logic and probability theory. While Bayesian and Markov
networks are in some sense propositional, logical languages allow you
to reason about objects and classes of objects. These approaches
have, for example, been applied to learn relationships between objects
(relational learning). A small illustration follows the reading list below.
Markov logic networks (Richardson and Domingos, 2006)
[pdf]
CP-Logic (Vennekens, Denecker and Bruynooghe, 2009)
[pdf]
The Independent Choice Logic for modelling multiple agents
under uncertainty (Poole, 1997)
[pdf]
Object-oriented Bayesian networks (Koller and Pfeffer, 1997)
[pdf]
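As a rough, invented illustration of the Markov logic idea (not the interface of any actual MLN system), the sketch below scores one possible world over a two-person domain: every weighted first-order formula contributes its weight times its number of true ground instances, and the probability of the world is proportional to the exponential of that sum.

    import math
    from itertools import product

    # Toy Markov logic network; weights and the two-constant domain are invented.
    people = ["Anna", "Bob"]

    # One possible world: truth values of the ground atoms.
    smokes  = {"Anna": True,  "Bob": False}
    cancer  = {"Anna": True,  "Bob": False}
    friends = {("Anna", "Bob"): True,  ("Bob", "Anna"): True,
               ("Anna", "Anna"): False, ("Bob", "Bob"): False}

    w_smoking_cancer = 1.5   # Smokes(x) => Cancer(x)
    w_peer_pressure  = 0.8   # Friends(x, y) & Smokes(x) => Smokes(y)

    def implies(p, q):
        return (not p) or q

    # Count the true ground instances of each formula in this world.
    n1 = sum(implies(smokes[x], cancer[x]) for x in people)
    n2 = sum(implies(friends[(x, y)] and smokes[x], smokes[y])
             for x, y in product(people, repeat=2))

    # Unnormalised score of the world: exp(sum_i w_i * n_i(world)).
    score = math.exp(w_smoking_cancer * n1 + w_peer_pressure * n2)
    print(n1, n2, score)

Normalising this score requires summing over all possible worlds, which quickly becomes intractable and motivates the approximate inference techniques discussed by Richardson and Domingos.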
Qualitative probabilistic networks (QPNs)
Several formalisms have been developed that consider uncertainty
relationships that are qualitative, i.e., they do not make use of
numerical values as in probability theory. This kind of formalism can
be used to solve problems where numerical values are neither necessary
nor appropriate. A small illustration of the sign calculus follows the
reading list below.
Book: BAI, page 250
Fundamental concepts of qualitative probabilistic networks (Wellman, 1990) [pdf]
Refining reasoning in qualitative probabilistic networks (Parsons, 1995) [pdf]
Bayesian network modelling through qualitative patterns (Lucas, 2005) [pdf]
Enhanced qualitative probabilistic networks for resolving trade-offs (Renooij and Van der Gaag, 2008) [pdf]
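To illustrate the qualitative calculus with an invented example (following the usual sign tables rather than code from any of the listed papers): influences carry signs instead of numbers, signs are combined with a sign product along chains and a sign sum over parallel paths, and opposite signs combine to the ambiguous sign '?', which is exactly the trade-off situation addressed by Renooij and Van der Gaag.

    # QPNs work with signs instead of numbers:
    # '+', '-', '0' (no influence) and '?' (unknown/ambiguous).

    def sign_product(s1, s2):
        # Combining influences along a chain.
        if "0" in (s1, s2):
            return "0"
        if "?" in (s1, s2):
            return "?"
        return "+" if s1 == s2 else "-"

    def sign_sum(s1, s2):
        # Combining influences over parallel paths.
        if s1 == "0":
            return s2
        if s2 == "0":
            return s1
        if s1 == s2:
            return s1
        return "?"   # opposite signs: a trade-off that cannot be resolved qualitatively

    # Example: a positive chain A -> B -> C combined with a direct negative
    # influence of A on C yields an unresolved trade-off.
    chain = sign_product("+", "+")     # '+'
    net   = sign_sum(chain, "-")       # '?'
    print(chain, net)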
Exploiting expert knowledge in Bayesian network learning
How can expert knowledge be integrated into the learning of Bayesian networks?
Image interpretation, control and cognitive modelling
Various studies have demonstrated the powerful modelling and reasoning
capabilities of PGMs by applying them to complex problems such as
brain signal analysis, image interpretation, and perception of
intentions and mental states in virtual agents.
Learning effective brain connectivity with dynamic Bayesian networks (Rajapakse and Zhou, 2007) [pdf]
Bayesian models of object perception (Kersten and Yuille, 2003) [pdf]
Modeling aspects of Theory of Mind with Markov random fields (Butterfield et al., 2009) [pdf]
Diagnose the mild cognitive impairment by constructing Bayesian network with missing data
(Sun, Tang, et al., 2011) [pdf]
Evidence-driven image interpretation by combining implicit and explicit knowledge in a Bayesian network
(Nikolopoulos, Papadopoulos, et al., 2011) [pdf]