All Languages Are NOT Created (Tokenized) Equal
<p>Large language models such as ChatGPT process and generate text sequences by first splitting the text into smaller units called <strong>tokens</strong>. In the image below, each colored block represents a unique token. Short or common words such as “you”, “say”, “loud”, and “always” are each a single token, whereas longer or less common words such as “atrocious”, “precocious”, and “supercalifragilisticexpialidocious” are broken into smaller subwords.</p>
<p><img alt="" src="https://miro.medium.com/v2/resize:fit:630/0*LlsaxJwwCix3jVAt.png" style="height:551px; width:700px" /></p>
<p>Visualization of tokenization of a short text using OpenAI’s tokenizer website. Screenshot taken by author.</p>
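<p>To see this splitting behavior programmatically, the snippet below is a minimal sketch using OpenAI’s open-source <strong>tiktoken</strong> package with the cl100k_base encoding (an assumption on my part; the web tokenizer page may use a different encoding, so exact splits can differ). It counts the tokens for each word from the example above and decodes them to show how the longer words are broken into subwords.</p>
<pre><code>
# Minimal sketch using tiktoken (pip install tiktoken).
# Assumption: the cl100k_base encoding; the web tokenizer may use a different one.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

for word in ["you", "say", "loud", "always", "atrocious", "supercalifragilisticexpialidocious"]:
    token_ids = enc.encode(word)
    # Decode each id individually to see how the word was split into subwords.
    pieces = [enc.decode([t]) for t in token_ids]
    print(f"{word!r}: {len(token_ids)} token(s) {pieces}")
</code></pre>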
<p>This process of <strong>tokenization</strong> is not uniform across languages, leading to disparities in the number of tokens produced for equivalent expressions in different languages. For example, <strong>a sentence in Burmese or Amharic may require 10x more tokens than a similar message in English.</strong></p>
<p><img alt="" src="https://miro.medium.com/v2/resize:fit:630/0*PqsXeXMRYVfLj-mD.png" style="height:284px; width:700px" /></p>
<p>An example of the same message translated into five languages and the corresponding number of tokens required to tokenize that message (using OpenAI’s tokenizer). The text comes from Amazon’s MASSIVE dataset.</p>
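<p>The same kind of comparison can be sketched in a few lines: count the tokens for one message in several languages and report each count as a multiple of the English count. The translations below are illustrative placeholders written for this sketch, not rows taken from the MASSIVE dataset.</p>
<pre><code>
# Sketch: token counts for the "same" message in a few languages, relative to English.
# The translations are illustrative placeholders, not taken from the MASSIVE dataset.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

messages = {
    "English": "wake me up at nine in the morning on friday",
    "Spanish": "despiértame a las nueve de la mañana el viernes",
    "French": "réveille-moi à neuf heures du matin vendredi",
}

english_count = len(enc.encode(messages["English"]))
for lang, text in messages.items():
    n = len(enc.encode(text))
    print(f"{lang:8s} {n:3d} tokens  ({n / english_count:.1f}x English)")
</code></pre>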
<p>In this article, I explore the tokenization process and how it varies across different languages:</p>
<ul>
<li>Analysis of token distributions in a parallel dataset of short messages that have been translated into 52 different languages (a sketch of this analysis follows the list)</li>
<li>Some languages, such as Armenian or Burmese, require <strong>9 to 10 times more tokens than English</strong> to tokenize comparable messages</li>
<li>The impact of this language disparity</li>
</ul>
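<p>As a rough sketch of how the analysis in the first bullet could be carried out: given a parallel corpus mapping each language to the same set of messages, compute a median token count per language and normalize it by the English median. The corpus is left as a parameter here; Amazon’s MASSIVE dataset is one possible source, and the tiny corpus at the bottom is made up purely for illustration.</p>
<pre><code>
# Hypothetical sketch of the per-language analysis: for a parallel corpus mapping
# each language name to the same list of translated messages, compute the median
# token count per language and its ratio to English.
from statistics import median

import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

def token_length_ratios(corpus: dict[str, list[str]]) -> dict[str, float]:
    """corpus: language name -> list of parallel messages ("English" must be present)."""
    medians = {
        lang: median(len(enc.encode(text)) for text in texts)
        for lang, texts in corpus.items()
    }
    english_median = medians["English"]
    return {lang: m / english_median for lang, m in medians.items()}

# Example with a tiny made-up corpus; the real analysis would cover all 52 languages.
toy_corpus = {
    "English": ["wake me up at nine in the morning on friday"],
    "Spanish": ["despiértame a las nueve de la mañana el viernes"],
}
print(token_length_ratios(toy_corpus))
</code></pre>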
<p><a href="https://towardsdatascience.com/all-languages-are-not-created-tokenized-equal-cd87694a97c1">Read the full article on Towards Data Science</a></p>