Language Models and Friends: Gorilla, HuggingGPT, TaskMatrix, and More
<p>Recently, we have witnessed the rise of foundation models to popularity within deep learning research. Pre-trained large language models (LLMs) have led to a new paradigm, in which a single model can be used, with surprising success, to solve many different problems. Despite the popularity of generic LLMs, however, models that are fine-tuned in a task-specific manner still tend to outperform approaches that simply leverage foundation models. Put simply, <em>specialized models are still very hard to beat</em>! With this in mind, we might wonder whether the powers of foundation models and specialized deep learning models can be combined. Within this overview, we will study recent research that integrates LLMs with other, specialized deep learning models by teaching the LLM to call their associated APIs. The resulting framework uses the language model as a centralized controller that forms a plan for solving a complex, AI-related task and delegates specialized portions of the solution process to more appropriate models.</p>
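<p>To make this controller pattern concrete, below is a minimal Python sketch of an LLM forming a plan and delegating each step to a specialized model. Everything in it (the <em>call_llm</em> stub, the tool registry, and the JSON plan format) is an illustrative assumption, not the actual API of Gorilla, HuggingGPT, or TaskMatrix.</p>

```python
# A minimal sketch of the "LLM as controller" pattern described above.
# All names here (call_llm, TOOLS, the plan format) are illustrative
# assumptions, not the actual Gorilla/HuggingGPT/TaskMatrix interfaces.

import json

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for an LLM API call (e.g., a chat completion).
    It returns a canned plan here so the sketch runs end to end."""
    return json.dumps([
        {"tool": "image_captioner", "input": "photo.png"},
        {"tool": "translator", "input": "<output of previous step>"},
    ])

# Registry of specialized models, each exposed behind a simple API.
TOOLS = {
    "image_captioner": lambda x: f"a caption for {x}",
    "translator": lambda x: f"translation of: {x}",
}

def solve(task: str) -> str:
    """Ask the LLM for a plan, then delegate each step to a specialized model."""
    prompt = (
        f"Task: {task}\n"
        f"Available tools: {list(TOOLS)}\n"
        'Respond with a JSON list of steps: [{"tool": ..., "input": ...}]'
    )
    plan = json.loads(call_llm(prompt))
    result = ""
    for step in plan:
        # Chain steps: feed the previous output into any step that asks for it.
        arg = result if "<output" in step["input"] else step["input"]
        result = TOOLS[step["tool"]](arg)
    return result

print(solve("Describe photo.png in French"))
```

<p>The key design choice, shared by the systems we will study, is that the LLM never performs the specialized work itself; it only decides <em>which</em> API to call, <em>in what order</em>, and <em>with what inputs</em>, while dedicated models handle the individual subtasks.</p>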
<p><a href="https://towardsdatascience.com/language-models-and-friends-gorilla-hugginggpt-taskmatrix-and-more-b88c1200afd3">Website</a></p>