A brief introduction to uncertainty calibration and reliability diagrams
<p>Uncertainty calibration is one of the most misunderstood concepts in machine learning. It can be encapsulated in a simple question: “Given a forecast probability of rain, are you taking an umbrella?”</p>
<p>We use the concepts of subjective probability and uncertainty calibration in our daily lives without realizing it. For a weather forecast model with well-calibrated uncertainty, it is probably not worthwhile to bring an umbrella if the forecast probability of rain is only 5%. From a frequentist perspective, if the 7 a.m. weather conditions could be observed repeatedly over a large number of random trials, it would rain in only 5% of them. If the uncertainties are ill-calibrated, however, it might rain in, say, 40% of those trials: a big wet surprise.</p>
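<p>To make this concrete, here is a minimal NumPy sketch (not from the original article) of the computation behind a reliability diagram: forecasts are grouped into probability bins, and each bin's mean predicted probability is compared with the observed frequency of rain. The toy forecaster below reproduces the 5%-forecast versus 40%-reality mismatch described above; the function name and data are illustrative assumptions.</p>

```python
import numpy as np

def reliability_bins(y_true, y_prob, n_bins=10):
    """Group forecasts into probability bins and compare each bin's
    mean predicted probability with the observed frequency of rain.
    For a well-calibrated forecaster the two should roughly match."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    # Assign each forecast to a bin; clip so p = 1.0 falls in the last bin.
    idx = np.clip(np.digitize(y_prob, edges) - 1, 0, n_bins - 1)
    rows = []
    for b in range(n_bins):
        mask = idx == b
        if mask.any():
            rows.append((y_prob[mask].mean(), y_true[mask].mean(), mask.sum()))
    return rows

# Toy example: a forecaster that always says "5% chance of rain"
# while it actually rains 40% of the time is ill-calibrated.
rng = np.random.default_rng(0)
y_prob = np.full(1000, 0.05)                    # forecast: 5% chance of rain
y_true = (rng.random(1000) < 0.40).astype(float)  # reality: rains 40% of the time
for mean_pred, obs_freq, n in reliability_bins(y_true, y_prob):
    print(f"predicted {mean_pred:.2f} vs observed {obs_freq:.2f} (n={n})")
```

<p>Plotting each bin's observed frequency against its mean predicted probability gives the reliability diagram; a well-calibrated model traces the diagonal, while the toy forecaster above sits far below it.</p>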
<p><a href="https://towardsdatascience.com/introduction-to-reliability-diagrams-for-probability-calibration-ed785b3f5d44"><strong>Read the full article on Towards Data Science</strong></a></p>