Such tasks are known collectively as Sequence-to-Sequence (Seq2seq) Learning. In all of these tasks, the input and desired output are strings, which may differ in length and which are usually not in one-to-one correspondence.
Suppose you have a dataset of paired examples (e.g. sentences and their translations, or many examples of misspelled and corrected texts). Nowadays, it is fairly easy to train a neural network on such pairs, as long as there is enough data for the model to learn to generalize to new inputs. Let's take a look at how to train seq2seq models with minimal effort, using PyTorch and the Hugging Face transformers library.
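As a preview, here is a minimal sketch of what that looks like with the `Seq2SeqTrainer` API from transformers. The checkpoint name (`t5-small`), the toy spelling-correction pairs, and the hyperparameters are illustrative placeholders, not recommendations:

```python
# A minimal sketch: fine-tune a pretrained encoder-decoder model on paired strings.
# The toy data and checkpoint below are placeholders for your own dataset and model.
from datasets import Dataset
from transformers import (
    AutoModelForSeq2SeqLM,
    AutoTokenizer,
    DataCollatorForSeq2Seq,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
)

model_name = "t5-small"  # any encoder-decoder checkpoint works here
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Paired examples: inputs and their desired outputs (toy spelling correction).
pairs = {
    "input": ["fix spelling: teh cat sat", "fix spelling: helo wrld"],
    "target": ["the cat sat", "hello world"],
}
dataset = Dataset.from_dict(pairs)

def preprocess(batch):
    # Tokenize sources and targets; the tokenized targets become the labels.
    model_inputs = tokenizer(batch["input"], truncation=True, max_length=64)
    labels = tokenizer(text_target=batch["target"], truncation=True, max_length=64)
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs

tokenized = dataset.map(preprocess, batched=True, remove_columns=["input", "target"])

trainer = Seq2SeqTrainer(
    model=model,
    args=Seq2SeqTrainingArguments(output_dir="seq2seq-demo", num_train_epochs=1),
    train_dataset=tokenized,
    # The collator pads inputs and labels dynamically per batch.
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()
```

The rest of this post unpacks each of these steps: loading a pretrained model, tokenizing paired examples, and running the training loop.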