Empowering Fairness: Recognizing and Addressing Bias in Generative Models
<p>In 2021, Princeton University’s Center for Information Technology Policy released a report finding that machine learning algorithms can pick up human-like biases from their training data. One striking example of this effect is Amazon’s experimental AI hiring tool <strong>[1]</strong>. The tool was trained on resumes previously submitted to Amazon and used them to rank candidates. Because of the large gender imbalance in tech positions over the past decade, the algorithm learned to penalize language associated with women, such as mentions of women’s sports teams, and downgraded those resumes. This example highlights that fairness requires not only accurate models but also carefully audited datasets, so that bias is removed before and during training. With the rapid development of generative models such as ChatGPT and the growing integration of AI into everyday life, a biased model can have drastic consequences, eroding user trust and broader acceptance. Addressing these biases is therefore a business necessity, and data scientists (broadly defined) must be aware of them in order to mitigate them and ensure their models align with their principles.</p>
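<p>One simple way to start such an audit, before any model is trained, is to compare outcome rates across a sensitive attribute in the data. The sketch below is a minimal illustration, not the method from the article or from Amazon’s case: the column names and records are hypothetical, and the demographic-parity gap it prints is only a signal to investigate further, not a verdict on fairness.</p>
<pre><code class="language-python">
# Minimal dataset audit sketch (hypothetical data, for illustration only):
# compare shortlisting rates across a sensitive attribute such as gender.
from collections import defaultdict

# Hypothetical screening records: (group, shortlisted flag)
records = [
    ("female", 1), ("female", 0), ("female", 0), ("female", 0),
    ("male", 1), ("male", 1), ("male", 0), ("male", 1),
]

counts = defaultdict(lambda: [0, 0])  # group -> [shortlisted, total]
for group, shortlisted in records:
    counts[group][0] += shortlisted
    counts[group][1] += 1

# Per-group selection rate
rates = {group: hits / total for group, (hits, total) in counts.items()}
print("Selection rates:", rates)

# Demographic parity gap: difference between highest and lowest selection rate.
# A large gap flags the dataset or model for closer inspection.
print("Parity gap:", max(rates.values()) - min(rates.values()))
</code></pre>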
<p><a href="https://medium.com/towards-data-science/empowering-fairness-recognizing-and-addressing-bias-in-generative-models-1723ce3973aa"><strong>Read More</strong></a></p>