Is There Always a Tradeoff Between Bias and Variance?

The mean squared error ([MSE](http://bit.ly/quaesita_babymse)) is the most popular (and vanilla) choice for a model's [loss function](http://bit.ly/quaesita_emperorm), and it tends to be the first one you're taught. You'll likely take a whole bunch of [stats](http://bit.ly/quaesita_statistics) classes before it occurs to anyone to tell you that you're welcome to minimize other loss functions if you like. (But let's be real: [parabolae are super easy to optimize](http://bit.ly/quaesita_msefav). Remember *d/dx x²*? 2*x*. That convenience is enough to keep most of you loyal to the MSE.)

Once you learn about the MSE, it's usually mere [moments](http://bit.ly/quaesita_lemur) until someone mentions the bias and variance formula:
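For an estimator *θ̂* of a quantity *θ*, that formula is the standard decomposition of the MSE into a squared bias term and a variance term:

$$
\operatorname{MSE}(\hat{\theta}) \;=\; \mathbb{E}\big[(\hat{\theta} - \theta)^2\big] \;=\; \underbrace{\big(\mathbb{E}[\hat{\theta}] - \theta\big)^2}_{\text{Bias}^2} \;+\; \underbrace{\mathbb{E}\big[\big(\hat{\theta} - \mathbb{E}[\hat{\theta}]\big)^2\big]}_{\text{Variance}}
$$

In words: your average squared error splits cleanly into how far off you are on average (bias, squared) plus how much your estimates wobble around their own average (variance).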
Tags: Bias Variance