LLMs have demonstrated in recent months that they are proficient in a wide variety of tasks, all through a single mode of interaction: prompting.
Recently there has been a rush to extend the context window of language models. But how does this affect a model?
This article is divided into sections, each answering one of these questions:
- What is a prompt, and how do you build a good one?
- What is the context window? How long can it be? What limits the length of a model's input sequence, and why does this matter?
- How can we overcome these limitations?
- Do models actually use a long context window?