Can I Trust My Model’s Probabilities? A Deep Dive into Probability Calibration

Suppose you have a binary classifier and two observations; the model scores them as `0.6` and `0.99`, respectively. Is there a higher chance that the sample with the `0.99` score belongs to the positive class? For some models this is true, but for others it might not be.

This blog post is a deep dive into probability calibration, an essential tool for every data scientist and machine learning engineer. Calibration ensures that a model's scores can be read as actual probabilities, so that a higher score really does mean a higher chance of belonging to the positive class.

The post provides reproducible code examples built on open-source software, so you can run them with your own data! We'll use [sklearn-evaluation](https://github.com/ploomber/sklearn-evaluation?utm_source=medium&utm_medium=blog&utm_campaign=calibration-curve) for plotting and [Ploomber](https://github.com/ploomber/ploomber?utm_source=medium&utm_medium=blog&utm_campaign=calibration-curve) to execute our experiments in parallel.
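To make the idea concrete before diving in, here is a minimal sketch using plain scikit-learn (the full post uses sklearn-evaluation for the plots). The dataset, model choice, and bin count are illustrative assumptions, not taken from the post itself:

```python
# Minimal sketch: comparing raw vs. calibrated probabilities.
# The dataset and models here are illustrative assumptions.
from sklearn.calibration import CalibratedClassifierCV, calibration_curve
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A raw random forest's scores are often over- or under-confident
raw = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Wrapping the model in CalibratedClassifierCV remaps its scores
# so they behave like probabilities
calibrated = CalibratedClassifierCV(
    RandomForestClassifier(random_state=0), method="isotonic", cv=5
).fit(X_train, y_train)

for name, model in [("raw", raw), ("calibrated", calibrated)]:
    prob_pos = model.predict_proba(X_test)[:, 1]
    # prob_true is the observed fraction of positives in each bin of
    # predicted probability; for a well-calibrated model it stays
    # close to prob_pred
    prob_true, prob_pred = calibration_curve(y_test, prob_pos, n_bins=10)
    print(name, list(zip(prob_pred.round(2), prob_true.round(2))))
```

If the printed (predicted, observed) pairs sit close together, the scores can be trusted as probabilities; large gaps mean the model may still rank samples correctly, but its raw scores should not be read as probabilities.

[Read the full post on Towards Data Science](https://towardsdatascience.com/can-i-trust-my-models-probabilities-a-deep-dive-into-probability-calibration-fc3886cfc677)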