Convergence in Probability or Distribution

<p>During your study of statistics, have you encountered the concepts of convergence in probability and convergence in distribution? Have you ever wondered why these concepts were introduced in the first place? If so, this story aims to help you answer some of those questions.</p>

<h2><strong>Convergence in Probability</strong></h2>

<p>Let&rsquo;s begin with convergence in probability, as it is the more straightforward of the two concepts. Imagine we have a sequence of random variables&nbsp;<em>X1</em>,&nbsp;<em>X2</em>, &hellip;,&nbsp;<em>Xn</em>. If, as n approaches infinity, the probability that&nbsp;<em>Xn</em>&nbsp;lies arbitrarily close to a constant x approaches 1, we say that&nbsp;<em>Xn</em>&nbsp;converges to x in probability.</p>

<p>Why is it defined this way? The rationale is that, no matter how large n becomes, we can never guarantee that&nbsp;<em>Xn</em>&nbsp;exactly equals&nbsp;<em>x</em>&nbsp;(the constant). The most we can do is quantify how close&nbsp;<em>Xn</em>&nbsp;is to&nbsp;<em>x</em>&nbsp;in terms of the probability that&nbsp;<em>Xn</em>&nbsp;falls within a certain interval around&nbsp;<em>x</em>.</p>

<p>Hence, the definition asserts that as n approaches infinity, the probability that&nbsp;<em>Xn</em>&nbsp;differs from&nbsp;<em>x</em>&nbsp;by more than &epsilon; shrinks to zero&nbsp;&mdash; that is, P(|<em>Xn</em> &minus; x| &gt; &epsilon;) &rarr; 0 as n &rarr; &infin;. Moreover, &epsilon; can be arbitrarily small.</p>

<p>An illustrative example of convergence in probability is the sample mean. Consider repeatedly drawing n samples from a normal distribution with a mean of 0 and a standard deviation of 0.1. 
If we calculate the sample mean of these n samples, this resulting sample mean becomes a random variable denoted as Xn and possesses its own distribution.</p>
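<p>The sample-mean example above can be sketched with a short simulation. The code below is a minimal illustration, not part of the original article: the function names (<code>sample_mean</code>, <code>prob_outside_epsilon</code>) and the choice of &epsilon; = 0.02 are assumptions made here for demonstration. It estimates P(|<em>Xn</em> &minus; 0| &gt; &epsilon;) for growing n by drawing many sample means from N(0, 0.1); the estimated probability should shrink toward zero as n grows, matching the definition.</p>

```python
import random
import statistics

def sample_mean(n, mu=0.0, sigma=0.1, rng=None):
    """Draw n samples from N(mu, sigma) and return their sample mean (one realization of Xn)."""
    rng = rng or random.Random(42)
    return statistics.fmean(rng.gauss(mu, sigma) for _ in range(n))

def prob_outside_epsilon(n, eps=0.02, trials=2000, seed=0):
    """Monte Carlo estimate of P(|Xn - 0| > eps), where Xn is the mean of n draws from N(0, 0.1)."""
    rng = random.Random(seed)
    hits = sum(abs(sample_mean(n, rng=rng)) > eps for _ in range(trials))
    return hits / trials

# As n grows, the estimated probability of landing outside (-eps, eps) drops toward 0.
for n in (10, 100, 1000):
    print(f"n={n:5d}  P(|Xn| > 0.02) ~ {prob_outside_epsilon(n):.3f}")
```

<p>Because the sample mean of n draws has standard deviation 0.1/&radic;n, the band (&minus;0.02, 0.02) captures almost all of the distribution once n is in the hundreds, which is exactly what the shrinking estimates show.</p>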