Measurement of Social Bias Fairness Metrics in NLP Models
<p>In recent times, text-generation models have become more popular than ever. With the introduction of ChatGPT and similar models, the general public now interacts with NLP models daily.</p>
<p>However, the use cases for NLP models are not limited to text generation; they include sentiment analysis, keyword extraction, named entity recognition, and more. These use cases predate the popularity of text generation models.</p>
<p>Despite their popularity, NLP models can still carry bias in their underlying algorithms. According to the paper by <a href="https://arxiv.org/pdf/2202.08176.pdf" rel="noopener ugc nofollow" target="_blank"><em>Pagano et al. (2022)</em></a>, machine learning models inherently need to account for the bias constraints of their algorithms. However, achieving full transparency is a major challenge, especially given the millions of parameters these models use.</p>
<p>There are numerous categories of bias, such as temporal, spatial, behavioral, group, and social biases. The form these biases take can vary depending on the perspective adopted. However, this article will focus specifically on social bias and the metrics used to measure such biases in the context of Natural Language Processing (NLP) models.</p>
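<p>To make the idea concrete before diving into specific metrics, here is a minimal, illustrative sketch of one common way to probe social bias in an NLP model: compare a classifier's scores on template sentences that differ only in the demographic term they mention. The template sentences, group terms, and the use of the Hugging Face transformers sentiment-analysis pipeline are assumptions for this example, not a method prescribed by the article.</p>
<pre>
# Minimal sketch of a template-based social-bias probe (assumed example).
# It compares average positive-sentiment scores across two demographic
# groups and reports the gap; a large gap suggests the model treats the
# groups differently for otherwise identical sentences.
from transformers import pipeline  # assumes the Hugging Face transformers library

# Default sentiment-analysis pipeline (downloads a small pretrained model).
classifier = pipeline("sentiment-analysis")

# Hypothetical templates and group terms used only for illustration.
templates = [
    "{} people are good at their jobs.",
    "I had dinner with a {} colleague yesterday.",
]
groups = {"group_a": "young", "group_b": "elderly"}

def mean_positive_score(term: str) -> float:
    """Average probability of the POSITIVE label over all templates."""
    scores = []
    for template in templates:
        result = classifier(template.format(term))[0]
        prob = result["score"] if result["label"] == "POSITIVE" else 1 - result["score"]
        scores.append(prob)
    return sum(scores) / len(scores)

scores_by_group = {name: mean_positive_score(term) for name, term in groups.items()}
print(scores_by_group)
print("parity gap:", abs(scores_by_group["group_a"] - scores_by_group["group_b"]))
</pre>
<p>The absolute difference printed at the end is a simple score-parity gap; the fairness metrics discussed in the rest of the article formalize comparisons of this kind across demographic groups.</p>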