FACET: Fairness in Computer Vision Evaluation Benchmark
<p>In this post we cover FACET, a new dataset created by Meta AI as a benchmark for evaluating the fairness of computer vision models. Computer vision models are known to have biases that can impact their performance. For example, as the image below shows, if we feed an image classification model an image of a male soccer player, the model is likely to classify it correctly as a soccer player. However, if we present the model with an image of a female soccer player, it is more likely to be confused and may misclassify the image. This is just one example of what fairness means; later on, we'll see real disparities that the FACET dataset helped uncover.</p>
<p><img alt="Example for what fairness means in computer vision" src="https://miro.medium.com/v2/resize:fit:630/0*m_mUX8ITuPTyxjl1.png" style="height:227px; width:700px" /></p>
<p>Example for what fairness means in computer vision (Image by Author)</p>
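<p>To make the soccer-player example concrete, here is a minimal sketch (not from the paper) of the kind of disaggregated evaluation a fairness benchmark enables: computing a model&#39;s recall for a single class, broken down by an annotated perceived attribute. The function name, field names, and toy data below are hypothetical, purely for illustration.</p>
<pre><code>from collections import defaultdict

def recall_by_group(examples, target_class="soccer_player"):
    """Compute per-group recall for one target class.

    `examples` is an iterable of dicts with keys (hypothetical schema):
      "label"      - ground-truth class
      "prediction" - model's predicted class
      "group"      - an annotated perceived attribute
    """
    hits = defaultdict(int)    # correct predictions per group
    totals = defaultdict(int)  # ground-truth positives per group

    for ex in examples:
        if ex["label"] != target_class:
            continue
        totals[ex["group"]] += 1
        if ex["prediction"] == target_class:
            hits[ex["group"]] += 1

    return {g: hits[g] / totals[g] for g in totals}

# Toy data: a large recall gap between groups signals a fairness concern.
examples = [
    {"label": "soccer_player", "prediction": "soccer_player", "group": "male"},
    {"label": "soccer_player", "prediction": "soccer_player", "group": "male"},
    {"label": "soccer_player", "prediction": "referee",       "group": "female"},
    {"label": "soccer_player", "prediction": "soccer_player", "group": "female"},
]
print(recall_by_group(examples))  # {'male': 1.0, 'female': 0.5}
</code></pre>
<p>A gap like the one above (recall of 1.0 for one group versus 0.5 for another) is exactly the kind of performance disparity FACET is designed to surface.</p>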
<p>The FACET dataset was presented in a research paper titled FACET: Fairness in Computer Vision Evaluation Benchmark. In the rest of this post, we&#39;ll walk through the paper to understand what kind of data the dataset contains, what we can do with it, and how it was created.</p>
<p><a href="https://medium.com/@aipapers/facet-fairness-in-computer-vision-evaluation-benchmark-7ea8d622b4e6">Read the full post on Medium</a></p>