T5: Text-to-Text Transformers (Part One)
<p>The transfer learning paradigm consists of two main stages. First, we pre-train a deep neural network over a large amount of data. Then, we fine-tune this model (i.e., train it some more) on a more specific, downstream dataset. The exact implementation of these stages may take many different forms. In computer vision, for example, we often pre-train models on the ImageNet dataset using a supervised learning objective, then fine-tune these models in a supervised fashion on the downstream dataset (i.e., the task that we are actually trying to solve). Alternatively, in natural language processing (NLP), we often perform <a href="https://cameronrwolfe.substack.com/i/76273144/self-supervised-learning" rel="noopener ugc nofollow" target="_blank">self-supervised</a> pre-training over an unlabeled textual corpus.</p>
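<p>To make the second stage concrete, here is a minimal, purely illustrative sketch using the Hugging Face <code>transformers</code> library: we download a checkpoint that has already been pre-trained, attach a classification head, and fine-tune it on a toy sentiment task. The model name, data, and hyperparameters below are assumptions chosen for brevity, not recommendations from [1] or [2].</p>
<pre><code>
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Stage one (pre-training) is already done for us: we simply download a
# checkpoint that was pre-trained with a self-supervised objective.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2  # adds a randomly-initialized classification head
)

# Stage two: supervised fine-tuning on a (toy) downstream sentiment task.
texts = ["this movie was fantastic", "the plot made no sense at all"]
labels = torch.tensor([1, 0])
batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
model.train()
for _ in range(3):  # a few gradient steps, just to show the mechanics
    loss = model(**batch, labels=labels).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
</code></pre>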
<p>Combining large, deep neural networks with massive (pre-)training datasets often leads to impressive results. This finding holds especially true in NLP. Given that raw textual data is freely available on the internet, we can simply download a massive textual corpus, pre-train a large neural net on this data, then fine-tune the model on a variety of downstream tasks (or just use zero/few-shot learning techniques). This large-scale transfer learning approach was initially explored by BERT [2], which pre-trained a transformer encoder over unlabeled data using a masking objective, then fine-tuned this model on downstream language tasks.</p>
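<p>To give a rough sense of the masking objective used by BERT, the sketch below corrupts roughly 15% of the tokens in a sequence with the <code>[MASK]</code> token and computes the loss only at those positions. This is a simplified illustration; the actual BERT recipe [2] includes extra details (e.g., the 80/10/10 replacement rule and batching over long sequences) that are omitted here.</p>
<pre><code>
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

text = "transfer learning works by pre-training on unlabeled text"
inputs = tokenizer(text, return_tensors="pt")
input_ids = inputs["input_ids"].clone()

# choose roughly 15% of (non-special) tokens to mask
special = torch.tensor(
    tokenizer.get_special_tokens_mask(
        input_ids[0].tolist(), already_has_special_tokens=True
    )
).bool()
mask = (torch.rand(input_ids.shape) < 0.15) & ~special.unsqueeze(0)

# labels: predict the original ids at masked positions, ignore (-100) elsewhere
labels = input_ids.clone()
labels[~mask] = -100
inputs["input_ids"] = input_ids.masked_fill(mask, tokenizer.mask_token_id)

# self-supervised loss: cross-entropy over the masked tokens only
loss = model(**inputs, labels=labels).loss
loss.backward()
</code></pre>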
<p>The success of BERT [2] cannot be overstated (i.e., it achieved new state-of-the-art performance on nearly all language benchmarks). As a result, the NLP community began to heavily investigate the topic of transfer learning, leading to the proposal of many new extensions and improvements. Due to the rapid development in this field, comparing alternatives was difficult. The text-to-text transformer (T5) model [1] proposed a unified framework for studying transfer learning approaches in NLP, allowing us to analyze different settings and derive a set of best practices. This set of best practices comprises T5, a state-of-the-art model and training framework for language understanding tasks.</p>
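<p>Although this overview does not go into T5's details, the sketch below illustrates its unified text-to-text interface using the publicly released <code>t5-small</code> checkpoint: every task is expressed as a textual input with a task prefix and solved by generating a textual output. The prompts mirror examples from the T5 paper [1], but the snippet itself is only an illustration.</p>
<pre><code>
# Sketch of T5's text-to-text interface: translation, acceptability
# classification, and summarization all share the same "text in, text out" API.
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

prompts = [
    "translate English to German: The house is wonderful.",   # translation
    "cola sentence: The course is jumping well.",              # acceptability (GLUE CoLA)
    "summarize: state authorities dispatched emergency crews tuesday to survey the damage ...",
]

for prompt in prompts:
    input_ids = tokenizer(prompt, return_tensors="pt").input_ids
    output_ids = model.generate(input_ids, max_new_tokens=40)
    print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
</code></pre>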