Rejection and importance sampling

These are lecture notes from my Computer Science course.

We covered some of this last lecture: how to get random numbers in Python.

Why sample?

Some problems can be solved exactly, but it would take ages. Instead we can write a sampler, simulate the problem many times, and use the sample average as an estimate of the quantity we want (e.g. a mean or a probability).

Most interesting problems are too hard to solve exactly; otherwise they wouldn't really be worth researching.

Why does it work?

The law of large numbers states that the deviation of the sample average from the true value converges to 0 as the number of samples grows.

If the probability you are estimating is close to 1 (or 0), you will converge to it quicker. 0.5 is the 'hardest' value to converge to through sampling, because the per-sample variance p(1 - p) is largest there.
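Here's a minimal sketch of this (not from the lecture): estimate the bias of a coin by sampling and watch the error shrink as the number of samples grows. The bias of 0.5 is chosen deliberately as the hardest case.

```python
import random

# Estimate P(heads) for a coin by sampling, and watch the estimate
# settle down as the number of samples grows (law of large numbers).
random.seed(0)
p_true = 0.5  # the 'hardest' case: variance p(1 - p) is largest here

for n in [10, 100, 1_000, 10_000, 100_000]:
    hits = sum(random.random() < p_true for _ in range(n))
    estimate = hits / n
    print(f"n = {n:>6}: estimate = {estimate:.4f}, error = {abs(estimate - p_true):.4f}")
```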

Forward Sampling

Sample every variable in the Bayesian network in topological order, starting from the roots: draw each variable from its conditional distribution given the values already sampled for its parents.
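A minimal sketch of forward sampling on the classic rain/sprinkler/wet-grass toy network; the network and all the CPT numbers are made up for illustration, not taken from the lecture.

```python
import random

# Forward sampling: variables are sampled in topological order,
# each from its CPD given the values already drawn for its parents.

def sample_bernoulli(p):
    """Return True with probability p."""
    return random.random() < p

def forward_sample():
    rain = sample_bernoulli(0.2)                          # P(Rain)
    sprinkler = sample_bernoulli(0.05 if rain else 0.4)   # P(Sprinkler | Rain)
    # P(WetGrass | Rain, Sprinkler)
    p_wet = {(True, True): 0.99, (True, False): 0.9,
             (False, True): 0.85, (False, False): 0.01}[(rain, sprinkler)]
    wet = sample_bernoulli(p_wet)
    return {"Rain": rain, "Sprinkler": sprinkler, "WetGrass": wet}

random.seed(0)
samples = [forward_sample() for _ in range(100_000)]
# Estimate the marginal P(WetGrass) as the fraction of samples where it is true.
print(sum(s["WetGrass"] for s in samples) / len(samples))
```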

Rejection sampling

Rejection sampling is like forward sampling, but it is what you use when you want a conditional probability: forward-sample the whole network, then keep only the samples that agree with the evidence and throw the rest away.

The problem is that you throw loads of samples away, particularly if the evidence is unlikely, so most of the sampling effort is wasted.
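A minimal sketch on the same made-up toy network as above: estimate P(Rain | WetGrass = True) by forward sampling and discarding every sample where the grass is dry.

```python
import random

# Rejection sampling: forward-sample the whole network, then keep only
# the samples that match the evidence WetGrass = True.

def forward_sample():
    rain = random.random() < 0.2
    sprinkler = random.random() < (0.05 if rain else 0.4)
    p_wet = {(True, True): 0.99, (True, False): 0.9,
             (False, True): 0.85, (False, False): 0.01}[(rain, sprinkler)]
    wet = random.random() < p_wet
    return rain, sprinkler, wet

random.seed(0)
kept = [s for s in (forward_sample() for _ in range(100_000)) if s[2]]  # evidence: WetGrass is True
# Estimate P(Rain | WetGrass = True) from the accepted samples only.
print("accepted:", len(kept), "of 100000")
print("P(Rain | WetGrass):", sum(r for r, _, _ in kept) / len(kept))
```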

How do you fix it?

Construct a new BN which represents the conditional distribution and just use forward sampling on that. This is particularly easy if the instantiated variables have no parents (you can just throw them away). Not terribly useful in general, though. Better is:

Importance Sampling

Unnormalised

If a distribution P is difficult to sample from, sample from a different distribution Q instead and weight each sample x_m by w(x_m) = P(x_m) / Q(x_m) to compensate.

Then you can estimate the expectation of any function f under P as a weighted sample average: E_P[f(X)] ≈ (1/M) Σ_m w(x_m) f(x_m).
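A minimal sketch of the unnormalised estimator (not from the lecture): estimate the probability that a standard normal exceeds 3, a rare event, by sampling from a proposal shifted towards the tail. The choice of P, Q and the threshold are just for illustration.

```python
import math
import random

# Unnormalised importance sampling: sample from Q = N(3, 1), weight each
# sample by w(x) = P(x) / Q(x) where P = N(0, 1), and average w(x) * f(x).

def normal_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

random.seed(0)
M = 100_000
total = 0.0
for _ in range(M):
    x = random.gauss(3.0, 1.0)                                # sample from Q = N(3, 1)
    w = normal_pdf(x, 0.0, 1.0) / normal_pdf(x, 3.0, 1.0)     # w(x) = P(x) / Q(x)
    total += w * (x > 3.0)                                    # f(x) = indicator of the event

print("importance-sampling estimate:", total / M)   # true value is about 0.00135
```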

Normalised

Often you only know P up to a normalising constant, so you can't compute the exact weights P(x_m) / Q(x_m) and the unnormalised estimator doesn't apply. The fix is to use the unnormalised weights anyway and divide by their sum instead of by M: E_P[f(X)] ≈ Σ_m w(x_m) f(x_m) / Σ_m w(x_m).
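A minimal sketch of the self-normalised version (my own example, not from the lecture): the target here is an unnormalised standard normal, exp(-x²/2) with its constant missing, and we estimate E_P[X²], which should come out close to 1.

```python
import math
import random

# Normalised (self-normalising) importance sampling: the target is only
# known up to a constant, so divide by the sum of the weights instead of M.

def p_tilde(x):
    return math.exp(-0.5 * x * x)          # target density, missing its normalising constant

def q_pdf(x, sigma=2.0):
    return math.exp(-0.5 * (x / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

random.seed(0)
M = 100_000
weighted_sum, weight_total = 0.0, 0.0
for _ in range(M):
    x = random.gauss(0.0, 2.0)             # sample from the proposal Q = N(0, 2^2)
    w = p_tilde(x) / q_pdf(x)              # unnormalised weight
    weighted_sum += w * x * x              # f(x) = x^2
    weight_total += w

print("normalised estimate of E[X^2]:", weighted_sum / weight_total)
```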

Likelihood weighting

A special case of importance sampling for Bayesian networks: run forward sampling, but instead of sampling the evidence variables, fix them to their observed values, and weight each sample by the likelihood of that evidence, i.e. the product of P(e | parents(e)) over the evidence variables e. No samples get rejected, which is the improvement over rejection sampling.
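A minimal sketch on the same made-up toy network used earlier: estimate P(Rain | WetGrass = True) by clamping WetGrass and weighting each sample by the likelihood of that evidence.

```python
import random

# Likelihood weighting: evidence variables are clamped to their observed
# values instead of being sampled, and each sample is weighted by the
# probability of that evidence given its sampled parents.

P_WET = {(True, True): 0.99, (True, False): 0.9,
         (False, True): 0.85, (False, False): 0.01}   # P(WetGrass | Rain, Sprinkler)

def weighted_sample(evidence_wet=True):
    rain = random.random() < 0.2
    sprinkler = random.random() < (0.05 if rain else 0.4)
    # Don't sample WetGrass: clamp it to the evidence, and weight by its likelihood.
    weight = P_WET[(rain, sprinkler)] if evidence_wet else 1 - P_WET[(rain, sprinkler)]
    return rain, weight

random.seed(0)
samples = [weighted_sample() for _ in range(100_000)]
# Estimate P(Rain | WetGrass = True) as a weighted average -- nothing is rejected.
num = sum(w for rain, w in samples if rain)
den = sum(w for _, w in samples)
print("P(Rain | WetGrass):", num / den)
```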
