Uncertainty calibration is one of the most misunderstood concepts in machine learning. It can be encapsulated in a simple question: “Given the above probabilities of rain, would you take an umbrella?”
We use the concepts of subjective probability and uncertainty calibration in daily life without realizing it. For a weather forecast model with well-calibrated uncertainty, it is probably not worthwhile to bring an umbrella if the probability of rain is only 5%. From a frequentist perspective, if the weather condition at 7 a.m. in the above figure could be observed repeatedly over a large number of random trials, it would rain in only 5% of them. If the uncertainties are ill-calibrated, however, it might rain in 40% of those 7 a.m. trials — a big wet surprise.
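The frequentist reading above can be made concrete with a small simulation. This is a hypothetical sketch (the function name and trial counts are illustrative, not from the original): we repeat the 7 a.m. observation many times under a known true rain probability and compare the observed rain frequency to the forecast.

```python
import random

random.seed(0)

def observed_rain_frequency(true_rain_prob, n_trials=100_000):
    """Simulate repeated 7 a.m. observations and return the fraction that rain."""
    rainy = sum(random.random() < true_rain_prob for _ in range(n_trials))
    return rainy / n_trials

forecast = 0.05  # the model reports a 5% chance of rain

# Well-calibrated: the long-run rain frequency matches the forecast.
calibrated = observed_rain_frequency(true_rain_prob=0.05)

# Ill-calibrated: the model still says 5%, but it actually rains 40% of the time.
miscalibrated = observed_rain_frequency(true_rain_prob=0.40)

print(f"forecast {forecast:.0%} -> calibrated frequency   ~{calibrated:.1%}")
print(f"forecast {forecast:.0%} -> miscalibrated frequency ~{miscalibrated:.1%}")
```

In the calibrated case the printed frequency hovers near 5%, matching the forecast; in the ill-calibrated case it sits near 40%, which is exactly the "big wet surprise" described above.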