# These are lecture notes from my Computer Science course.

Given a join tree, how would we compute the probability of a single variable, say P(A)? With variable elimination we would have to sum out all the other variables from scratch for every query. With a join tree, however, we can exploit the structure of the model.
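To make the contrast concrete, here is a minimal sketch of computing P(A) by variable elimination on a toy chain A → B → C with binary variables. The network and all the CPT numbers are illustrative assumptions, not from the lecture:

```python
# Toy chain A -> B -> C, binary variables; CPTs as dicts. Illustrative numbers.
p_A = {0: 0.4, 1: 0.6}
p_B_given_A = {(0, 0): 0.8, (0, 1): 0.2, (1, 0): 0.3, (1, 1): 0.7}  # keyed (A, B)
p_C_given_B = {(0, 0): 0.7, (0, 1): 0.3, (1, 0): 0.1, (1, 1): 0.9}  # keyed (B, C)

# Eliminate C first: sum over C of P(C|B), giving a factor over B alone.
tau_B = {b: sum(p_C_given_B[(b, c)] for c in (0, 1)) for b in (0, 1)}

# Eliminate B next: sum over B of P(B|A) * tau(B), then multiply in P(A).
marg_A = {a: p_A[a] * sum(p_B_given_A[(a, b)] * tau_B[b] for b in (0, 1))
          for a in (0, 1)}
# marg_A recovers P(A); this whole sum has to be redone for each new query.
```

The point of the join tree machinery below is to avoid repeating this elimination for every single-variable query.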

There is a way of sending messages both ways: first collecting messages in towards the root, then distributing messages back out to the leaves.

This is a message passing algorithm.

The Python code for this algorithm is possibly the easiest way to understand it.
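A minimal sketch of the collect/distribute pass on the smallest interesting join tree, two cliques {A,B} and {B,C} joined by the separator {B}. This is not the lecture's actual code; the tree, the potential values, and the helper names are illustrative assumptions:

```python
# Two-clique join tree {A,B} -- [B] -- {B,C}; variables binary.
# Potentials are dicts from assignment tuples to numbers.

def sum_out(potential, vars_, keep):
    """Marginalise: sum out every variable not listed in `keep`."""
    idx = [vars_.index(v) for v in keep]
    out = {}
    for assignment, value in potential.items():
        key = tuple(assignment[i] for i in idx)
        out[key] = out.get(key, 0.0) + value
    return out

def absorb(potential, vars_, message, msg_vars):
    """Multiply a message (a potential over msg_vars) into a clique."""
    idx = [vars_.index(v) for v in msg_vars]
    return {a: v * message[tuple(a[i] for i in idx)]
            for a, v in potential.items()}

# Clique potentials: phi_AB holds P(A) * P(B|A), phi_BC holds P(C|B).
phi_AB = {(0, 0): 0.32, (0, 1): 0.08, (1, 0): 0.18, (1, 1): 0.42}  # keyed (A, B)
phi_BC = {(0, 0): 0.70, (0, 1): 0.30, (1, 0): 0.10, (1, 1): 0.90}  # keyed (B, C)

# Collect: the leaf {A,B} sends its B-marginal in towards the root {B,C}.
msg_up = sum_out(phi_AB, ['A', 'B'], ['B'])
phi_BC = absorb(phi_BC, ['B', 'C'], msg_up, ['B'])   # root now holds P(B, C)

# Distribute: the root sends back its B-marginal, divided by what it received
# (so the leaf does not double-count the information it sent up).
msg_down = sum_out(phi_BC, ['B', 'C'], ['B'])
msg_down = {b: msg_down[b] / msg_up[b] for b in msg_down}
phi_AB = absorb(phi_AB, ['A', 'B'], msg_down, ['B'])  # leaf now holds P(A, B)
```

On a bigger tree the same two sweeps run recursively: each clique waits for messages from all neighbours on one side before sending to the other.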

This ‘calibration’ algorithm results in marginal distributions everywhere: after both passes, every clique holds the joint marginal over its own variables. That makes single-variable queries very cheap: find any clique whose marginal contains that variable, and sum out the last couple of variables.
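For example, given a calibrated clique marginal over (A, B) (the numbers below are illustrative), P(A) is just one small sum, with no fresh elimination pass:

```python
# Hypothetical calibrated clique marginal over (A, B), keyed (A, B).
clique_AB = {(0, 0): 0.32, (0, 1): 0.08, (1, 0): 0.18, (1, 1): 0.42}

# P(A): pick any clique containing A and sum out the remaining variables.
p_A = {}
for (a, b), value in clique_AB.items():
    p_A[a] = p_A.get(a, 0.0) + value
```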

This only works on decomposable hypergraphs, but you can make any hypergraph decomposable by finding a (hopefully small) fill-in for its interaction graph, i.e. by triangulating it.
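One common way to search for a small fill-in is the greedy min-fill heuristic: repeatedly eliminate the vertex whose elimination adds the fewest new edges. A sketch, with an illustrative example graph (not from the lecture):

```python
def min_fill_order(adj):
    """Greedy min-fill triangulation. `adj` maps each vertex to its set of
    neighbours. Returns an elimination order and the fill-in edges added."""
    adj = {v: set(ns) for v, ns in adj.items()}  # work on a copy
    order, fill = [], []
    while adj:
        def fill_edges(v):
            # Edges needed to turn v's neighbourhood into a clique.
            ns = list(adj[v])
            return [(a, b) for i, a in enumerate(ns) for b in ns[i + 1:]
                    if b not in adj[a]]
        # Pick the vertex whose elimination adds the fewest fill-in edges.
        v = min(adj, key=lambda u: len(fill_edges(u)))
        new_edges = fill_edges(v)
        for a, b in new_edges:            # connect v's neighbours
            adj[a].add(b)
            adj[b].add(a)
        fill.extend(new_edges)
        for n in adj[v]:                  # remove v from the graph
            adj[n].discard(v)
        del adj[v]
        order.append(v)
    return order, fill

# Example: the 4-cycle A-B-C-D-A is not triangulated; one chord fixes it.
graph = {'A': {'B', 'D'}, 'B': {'A', 'C'}, 'C': {'B', 'D'}, 'D': {'A', 'C'}}
order, fill = min_fill_order(graph)
```

Finding a minimum fill-in is NP-hard in general, which is why a greedy heuristic like this is the usual choice.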

The problem is that this takes time exponential in the size of the largest clique. The solution? Approximate algorithms.

This entry was posted in agm, lecture.