FACET: A Benchmark Dataset for Fairness in Computer Vision
<p>As computer vision models have rapidly improved, the problem of <em>bias</em> has become increasingly pronounced: even models that achieve state-of-the-art performance on single-number metrics like mean average precision or F1 score can vary wildly in their ability to generate predictions for people of different genders, skin tones, and other demographic groups. If you’re curious to learn more about how models can learn human biases, check out <a href="https://arxiv.org/pdf/2010.15052.pdf" rel="noopener ugc nofollow" target="_blank">this paper</a> (cited by the FACET team).</p>
<p>In an effort to address these biases, a team at Meta has released <a href="https://ai.meta.com/datasets/facet/" rel="noopener ugc nofollow" target="_blank">FACET</a> (FAirness in Computer Vision EvaluaTion), a new benchmark dataset for studying and evaluating the fairness of computer vision models. When developing FACET, the team set out to create the most comprehensive, diverse fairness benchmark dataset to date.</p>
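<p>To make the disparity concrete, here is a minimal sketch (plain Python, not the official FACET tooling) of the kind of per-group evaluation a fairness benchmark enables: comparing aggregate detection recall against recall broken out by an annotated attribute such as perceived skin tone. The group labels and results below are hypothetical.</p>

```python
# Minimal sketch: compare aggregate recall against per-group recall to
# surface disparities that a single aggregate number hides.
from collections import defaultdict

# Hypothetical records: (annotated skin-tone group, was the person detected?)
results = [
    ("lighter", True), ("lighter", True), ("lighter", True), ("lighter", False),
    ("darker", True), ("darker", False), ("darker", False), ("darker", False),
]

hits, totals = defaultdict(int), defaultdict(int)
for group, detected in results:
    totals[group] += 1
    hits[group] += int(detected)

overall_recall = sum(hits.values()) / sum(totals.values())
per_group_recall = {g: hits[g] / totals[g] for g in totals}

print(f"overall recall: {overall_recall:.2f}")   # 0.50
for group, recall in per_group_recall.items():
    print(f"{group:>8} recall: {recall:.2f}")    # 0.75 vs. 0.25

# The 0.50 gap between groups is exactly the kind of disparity that an
# aggregate metric conceals and that a benchmark like FACET is built to expose.
```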
<p><a href="https://medium.com/voxel51/facet-a-benchmark-dataset-for-fairness-in-computer-vision-2260c82e1662"><strong>Website</strong></a></p>