Monte Carlo Approximation Methods: Which one should you choose and when?

Since deterministic inference is often intractable for probabilistic models, as we saw just now, we turn to approximation methods based on numerical sampling, known as **Monte Carlo** techniques. The key problem we will tackle with these methods is computing the expectation of a target function *f(z)* with respect to a probability distribution *p(z)*. Recall that this expectation is defined as an integral:

$$\mathbb{E}[f] = \int f(\mathbf{z})\, p(\mathbf{z})\, d\mathbf{z}$$

(PRML¹, Eq. 11.1)

As we will see, such integrals are rarely tractable analytically and quickly become **computationally prohibitive** in high dimensions, so we turn to **sampling methods**: given independent samples $z^{(l)}$, $l = 1, \dots, L$, drawn from *p(z)*, the expectation is approximated by the finite sum $\hat{f} = \frac{1}{L} \sum_{l=1}^{L} f(z^{(l)})$, as sketched in code below.

In this article, we will look at three core sampling methods: **inverse transform sampling**, **Markov chain Monte Carlo (MCMC)**, and **Gibbs sampling**. By understanding the underlying statistical properties and computational requirements of these methods, we will learn which one to choose for a given problem, and when.
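To make the estimator concrete, here is a minimal sketch in Python. The choice of target distribution (a standard normal), test function *f(z) = z²*, and sample size are illustrative assumptions, not from the article; the point is only that the sample average of *f* converges to the true expectation, here $\mathbb{E}[z^2] = 1$.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def monte_carlo_expectation(f, sampler, L=100_000):
    """Approximate E[f(z)] by averaging f over L samples drawn from p(z)."""
    z = sampler(L)        # draw z^(1), ..., z^(L) from p(z)
    return np.mean(f(z))  # f_hat = (1/L) * sum_l f(z^(l))

# Illustrative example: p(z) = N(0, 1) and f(z) = z^2, so E[f] = Var(z) = 1.
estimate = monte_carlo_expectation(lambda z: z**2,
                                   lambda L: rng.standard_normal(L))
print(estimate)  # ~1.0, within Monte Carlo error of order 1/sqrt(L)
```

Note that the estimator's standard error shrinks like $1/\sqrt{L}$ regardless of the dimension of *z*; the hard part, and the subject of the methods above, is drawing the samples from *p(z)* in the first place.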
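As a taste of the first method, here is a minimal inverse transform sampling sketch, again a hedged illustration rather than code from the article: the exponential target is an assumed example chosen because its CDF inverts in closed form. If $u \sim \mathrm{Uniform}(0, 1)$ and $F$ is the CDF of the target distribution, then $F^{-1}(u)$ is distributed according to that target.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def sample_exponential(rate, size):
    """Inverse transform sampling for an Exponential(rate) distribution.

    CDF: F(z) = 1 - exp(-rate * z), so F^{-1}(u) = -ln(1 - u) / rate.
    If u ~ Uniform(0, 1), then F^{-1}(u) ~ Exponential(rate).
    """
    u = rng.uniform(size=size)          # uniform draws on (0, 1)
    return -np.log(1.0 - u) / rate      # map through the inverse CDF

samples = sample_exponential(rate=2.0, size=100_000)
print(samples.mean())  # ~0.5, matching the true mean 1/rate
```

This trick only works when $F^{-1}$ is cheap to evaluate; when it is not, we need the more general machinery of MCMC and Gibbs sampling discussed in the rest of the article.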