Empowering Fairness: Recognizing and Addressing Bias in Generative Models
<p>In 2021, Princeton University’s Center for Information Technology Policy released a report finding that machine learning algorithms can pick up human-like biases from their training data. One striking example of this effect is Amazon’s experimental AI hiring tool <strong>[1]</strong>. The tool was trained on resumes submitted to Amazon over the previous years and used to rank candidates. Because of the large gender imbalance in tech positions over the past decade, the algorithm learned to penalize language associated with women, such as mentions of women’s sports teams, and downgraded the rank of such resumes. This example highlights the need for not only fair and accurate models but fair datasets as well, so that bias is removed during training. In the current context of the rapid development of generative models such as ChatGPT and the integration of AI into our everyday lives, a biased model can have drastic consequences, eroding user trust and broader acceptance. Addressing these biases is therefore a business necessity, and data scientists (in the broad sense) have to be aware of them in order to mitigate them and ensure their models align with their principles.</p>
<h1>Examples of Biases in Generative Models</h1>
<p>The first task that comes to mind where generative models are widely used is translation: users input a text in language A and expect a translation in language B. Different languages don’t necessarily gender words the same way; for example, <em>“The senator”</em> in English could be either feminine or masculine, while in French it must be either <em>“La sénatrice”</em> or <em>“Le sénateur”</em>. Even when the gender is specified in the sentence (example below), it is not uncommon for generative models to reinforce gendered stereotype roles during translation.</p>
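<p>A quick way to see this for yourself is to probe an off-the-shelf translation model with sentences where the gender is made explicit by a pronoun and check whether the French output respects it. The minimal sketch below assumes the <code>transformers</code> library and the public <code>Helsinki-NLP/opus-mt-en-fr</code> checkpoint; the probe sentences are illustrative choices, not taken from a benchmark.</p>

```python
# Minimal sketch: probing an English-to-French translation model for gender bias.
# Assumes `pip install transformers sentencepiece` and the public
# Helsinki-NLP/opus-mt-en-fr checkpoint (an assumption, not the article's setup).
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-en-fr")

# Each sentence makes the senator's gender explicit via a pronoun; a faithful
# translation should choose "la senatrice" / "le senateur" accordingly.
probes = [
    "The senator said she would support the bill.",
    "The senator said he would support the bill.",
]

for sentence in probes:
    french = translator(sentence)[0]["translation_text"]
    # Inspect whether the gendered pronoun survived the translation
    # or was overridden by a stereotyped default.
    print(f"{sentence}\n  -> {french}\n")
```

<p>Running the same probe over many occupation words (nurse, engineer, judge, etc.) and counting how often the model defaults to the masculine or feminine form, regardless of the pronoun, gives a rough but concrete measure of the stereotype effect described above.</p>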
<p><a href="https://towardsdatascience.com/empowering-fairness-recognizing-and-addressing-bias-in-generative-models-1723ce3973aa"><strong>Read More</strong></a></p>