Monte Carlo Approximation Methods: Which one should you choose and when?
<p>Since exact (deterministic) inference is often intractable for probabilistic models, as we saw previously, we now turn to approximation methods based on numerical sampling, known collectively as <strong>Monte Carlo</strong> techniques. The central problem these methods address is computing the expectation of a target function <em>f(z)</em> with respect to a probability distribution <em>p(z)</em>. Recall that this expectation is defined as an integral:</p>
<p><img alt="" src="https://miro.medium.com/v2/resize:fit:630/1*sjsYsQA4LZcwcv6oixr7Jg.png" style="height:70px; width:700px" /></p>
<p>Source: PRML¹ Eq. 11.1</p>
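<p>As a concrete illustration (not part of the original article), here is a minimal Python sketch of the basic Monte Carlo estimator: draw <em>L</em> samples from <em>p(z)</em> and average <em>f</em> over them. The choices <em>f(z) = z<sup>2</sup></em> and <em>p(z) = N(0, 1)</em> are arbitrary assumptions for demonstration.</p>
<pre>
import numpy as np

rng = np.random.default_rng(0)

def f(z):
    # f(z) = z^2 is an arbitrary target function for this illustration
    return z ** 2

L = 100_000
z = rng.standard_normal(L)   # L samples z_l drawn from p(z) = N(0, 1)
estimate = f(z).mean()       # E[f] ~= (1/L) * sum over l of f(z_l)
print(estimate)              # close to 1.0, the exact E[z^2] under N(0, 1)
</pre>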
<p>As we will see, these integrals are rarely tractable analytically, and evaluating them exactly is often <strong>too computationally expensive</strong>, so we will turn to <strong>sampling methods</strong> in this article.</p>
<p>In this article, we will look at three core sampling methods: <strong>inverse transform sampling</strong> (previewed in the sketch below), <strong>Markov chain Monte Carlo (MCMC)</strong>, and <strong>Gibbs sampling</strong>. By understanding the underlying statistical properties and computational requirements of these methods, we will learn which one to choose and when.</p>
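<p>As a preview of the first method, here is a minimal sketch of inverse transform sampling, assuming the target CDF can be inverted in closed form: draw <em>u</em> from Uniform(0, 1) and apply the inverse CDF. The exponential distribution used below is a stock example, not one taken from the article.</p>
<pre>
import numpy as np

rng = np.random.default_rng(0)

def sample_exponential(rate, size, rng):
    # The Exponential(rate) CDF F(z) = 1 - exp(-rate * z)
    # inverts to F^{-1}(u) = -ln(1 - u) / rate.
    u = rng.uniform(size=size)       # u ~ Uniform(0, 1)
    return -np.log(1.0 - u) / rate   # F^{-1}(u) follows Exponential(rate)

samples = sample_exponential(rate=2.0, size=100_000, rng=rng)
print(samples.mean())  # close to 0.5, the exact mean 1/rate
</pre>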
<p><a href="https://towardsdatascience.com/monte-carlo-approximation-methods-which-one-should-you-choose-and-when-886a379fb6b">Read the full article on Towards Data Science</a></p>