Post-hoc analysis of Arabic transformer models

Analyze the internal representations of Arabic transformer models.

While there has been extrinsic evaluation of Arabic transformer (AT) models, *no work has analyzed their internal representations.* This [paper](https://arxiv.org/abs/2210.09990) probes how **Arabic linguistic information** is encoded in AT models. The authors perform a **layer and neuron analysis** on the models using **morphological tagging tasks** for different **dialects** and a **dialectal identification task**.

The overall idea is to extract feature vectors from the learned representations and train probing classifiers on auxiliary tasks (predicting morphological tags or identifying the dialect). Additionally, they use the **Linguistic Correlation Analysis** method to identify neurons that are salient with respect to a downstream task; a sketch of both steps follows below.

The analysis surfaces several interesting findings, discussed in detail in the full post.

[**Website**](https://medium.com/@Mustafa77/post-hoc-analysis-of-arabic-transformer-models-95790745c712)
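To make the setup concrete, here is a minimal sketch of both steps in Python, assuming a Hugging Face model and scikit-learn probes. The model name (`aubmindlab/bert-base-arabertv2`), the toy sentences, and the probe settings are illustrative assumptions, not the authors' setup, and the neuron-ranking step is a simplified stand-in for Linguistic Correlation Analysis, which ranks neurons by the weights of a regularized linear probe.

```python
"""Minimal sketch of the probing pipeline described above. Illustrative
assumptions, not the authors' code: the model name, toy sentences, and
probe settings are placeholders."""
import numpy as np
import torch
from sklearn.linear_model import LogisticRegression
from transformers import AutoModel, AutoTokenizer

MODEL_NAME = "aubmindlab/bert-base-arabertv2"  # assumed Arabic transformer

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModel.from_pretrained(MODEL_NAME, output_hidden_states=True)
model.eval()

def layerwise_features(sentences):
    """Mean-pooled sentence vectors for every layer.

    Returns a list with one [n_sentences, hidden_dim] matrix per layer
    (embedding layer included)."""
    per_layer = None
    for sent in sentences:
        enc = tokenizer(sent, return_tensors="pt", truncation=True)
        with torch.no_grad():
            hidden = model(**enc).hidden_states  # tuple of [1, seq_len, dim]
        vecs = [h[0].mean(dim=0).numpy() for h in hidden]
        if per_layer is None:
            per_layer = [[] for _ in vecs]
        for i, v in enumerate(vecs):
            per_layer[i].append(v)
    return [np.stack(v) for v in per_layer]

# Toy dialect-identification data; sentences and labels are illustrative
# only, not taken from the paper's datasets.
sentences = [
    "ذهبت إلى المدرسة صباح اليوم",   # MSA
    "رايح فين يا عم",                # Egyptian
    "قرأت الكتاب في المكتبة",        # MSA
    "انا مش عارف حاجة",              # Egyptian
]
labels = ["MSA", "EGY", "MSA", "EGY"]

# Layer analysis: train one linear probe per layer; the layers where
# accuracy peaks are where the dialectal signal is concentrated.
# (A real experiment needs proper train/test splits and control tasks.)
features = layerwise_features(sentences)
for layer, X in enumerate(features):
    probe = LogisticRegression(penalty="l1", C=1.0, solver="liblinear")
    probe.fit(X, labels)
    print(f"layer {layer:2d}: train accuracy {probe.score(X, labels):.2f}")

# Neuron analysis: a simplified stand-in for Linguistic Correlation
# Analysis. Train a regularized probe on one layer and rank neurons
# by the magnitude of their learned weights.
X = features[9]  # an arbitrarily chosen layer (assumes a 12-layer model)
probe = LogisticRegression(penalty="l1", C=1.0, solver="liblinear").fit(X, labels)
salience = np.abs(probe.coef_).max(axis=0)  # one score per neuron
print("top 10 salient neurons:", np.argsort(salience)[::-1][:10])
```

Mean-pooling token vectors gives a sentence-level feature, which suits the dialect identification probe; for the morphological tagging probes, the same pipeline would operate on individual token representations instead.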