TaatikNet: Sequence-to-Sequence Learning for Hebrew Transliteration
<p>Tasks such as machine translation, transliteration, and spelling correction are known collectively as <strong>Sequence-to-Sequence (Seq2seq) Learning</strong>. In all of these tasks, the input and desired output are strings, which may be of different lengths and which are usually not in one-to-one correspondence with each other.</p>
<p>Suppose you have a dataset of paired examples (e.g. sentences and their translations, or misspelled texts alongside their corrections). Nowadays, it is fairly easy to train a neural network on such pairs, provided there is enough data for the model to learn to generalize to new inputs. Let’s take a look at how to train seq2seq models with minimal effort, using PyTorch and the Hugging Face transformers library, as sketched below.</p>
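<p>As a minimal sketch of this recipe: fine-tune a small pretrained seq2seq model on paired strings with the Hugging Face <code>Seq2SeqTrainer</code>. The model name (<code>google/byt5-small</code>), the toy examples, and the hyperparameters below are illustrative assumptions, not necessarily the article’s exact setup.</p>
<pre><code class="language-python"># Minimal seq2seq fine-tuning sketch with Hugging Face transformers + datasets.
# Model name, toy data, and hyperparameters are illustrative assumptions.
from datasets import Dataset
from transformers import (
    AutoModelForSeq2SeqLM,
    AutoTokenizer,
    DataCollatorForSeq2Seq,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
)

# Toy paired examples (input string -> target string); a real dataset
# would contain many thousands of such pairs.
pairs = [
    {"input": "shalom", "target": "שלום"},
    {"input": "boker tov", "target": "בוקר טוב"},
]
dataset = Dataset.from_list(pairs)

model_name = "google/byt5-small"  # byte-level model: handles any script character by character
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

def preprocess(example):
    # Tokenize the input; tokenize the target as labels for the decoder.
    model_inputs = tokenizer(example["input"], truncation=True, max_length=64)
    labels = tokenizer(text_target=example["target"], truncation=True, max_length=64)
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs

tokenized = dataset.map(preprocess, remove_columns=dataset.column_names)

trainer = Seq2SeqTrainer(
    model=model,
    args=Seq2SeqTrainingArguments(
        output_dir="seq2seq-demo",
        per_device_train_batch_size=2,
        num_train_epochs=1,
        logging_steps=1,
    ),
    train_dataset=tokenized,
    # Pads inputs and labels dynamically per batch.
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()

# Generate a prediction for a new, unseen input.
ids = tokenizer("laila tov", return_tensors="pt").input_ids
print(tokenizer.decode(model.generate(ids, max_new_tokens=32)[0], skip_special_tokens=True))
</code></pre>
<p>A byte-level model such as ByT5 is a natural fit for character-level string tasks like transliteration, since it needs no subword tokenizer and can emit any output script directly.</p>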
<p><a href="https://towardsdatascience.com/taatiknet-sequence-to-sequence-learning-for-hebrew-transliteration-4c9175a90c23">Read the full article on Towards Data Science</a></p>