False Positive Risk in A/B Testing

<p>Have you heard that there is a much greater probability <em>than generally expected</em> that a statistically significant test outcome is in fact a false positive? In industry jargon: that a variant has been identified as a &ldquo;winner&rdquo; when it is not. In demonstrating the above, the terms <strong>&ldquo;False Positive Risk&rdquo; (FPR)</strong>, <strong>&ldquo;False Findings Rate&rdquo; (FFR)</strong>, and <strong>&ldquo;False Positive Report Probability&rdquo; (FPRP)</strong> are usually invoked; they all refer to the same concept.</p> <p>What follows is a detailed explanation and exploration of that concept and its various proposed applications. False positive risk (FPR) is shown either to be practically inapplicable or, where applicable, to be inferior to existing alternatives. In addition, the proposed equation for calculating FPR is shown not to work, and as a consequence previously published FPR estimates are revealed to be inaccurate.</p> <p><a href="https://georgi-georgiev.medium.com/false-positive-risk-in-a-b-testing-ba2c76e258c4"><strong>Website</strong></a></p>
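For concreteness, the FPR equation under discussion is commonly stated (in the Colquhoun-style formulation) as FPR = &alpha;(1&minus;&pi;) / (&alpha;(1&minus;&pi;) + (1&minus;&beta;)&pi;), where &pi; is the prior probability of a true effect and 1&minus;&beta; is the statistical power. A minimal sketch of that commonly cited formula (function and variable names here are illustrative, not from the article):

```python
def false_positive_risk(alpha: float, power: float, prior: float) -> float:
    """Probability that a statistically significant result is a false
    positive, given the significance threshold (alpha), statistical
    power, and the prior probability that a real effect exists."""
    sig_given_null = alpha * (1 - prior)  # significant results with no real effect
    sig_given_true = power * prior        # significant results with a real effect
    return sig_given_null / (sig_given_null + sig_given_true)

# Often-quoted example: alpha = 0.05, power = 0.8, prior = 0.1
print(round(false_positive_risk(0.05, 0.8, 0.1), 2))  # 0.36
```

With a 5% significance threshold, 80% power, and a 10% prior probability of a true effect, this formula yields an FPR of 36%, which is the kind of surprisingly high figure the article examines and disputes.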
Tags: Positive Risk