<h1>A Quick Fix for Your Sluggish Python Code</h1>
<p>Resource-intensive techniques such as LLMs and generative AI are everywhere these days.</p>
<p>Anyone who works with such precious resources knows how frustrating it is to repeat a task when the result is guaranteed to be the same. You’d blame yourself for not storing the result of the previous run.</p>
<p>This is where an LRU cache helps. LRU stands for Least Recently Used, one of many caching strategies. Let’s first understand how it works.</p>
<h2>How does @lru_cache work in Python?</h2>
<p>Imagine your brain is a small toy box. It can only fit five toys. Your friends keep asking you about different toys, and you use your superhero memory to remember and tell stories about these toys.</p>
<p>Some toys are easy to find because your friends often ask about them, so they’re at the top. Some toys are harder to find because they’re at the bottom of the box.</p>
<p>After you tell a story about a toy, you put it back on top to make things easier. That way, the toys your friends ask about the most are always easy to find. This is called the “Least Recently Used” or LRU strategy.</p>
<p>And if you get a new toy, but the box is full, you remove the toy that hasn’t been asked about for the longest time. If a friend asks about it, you can still find it in your big toy warehouse, which takes longer. That’s how LRU caching works!</p>
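<p>The toy-box analogy maps directly onto Python’s built-in <code>functools.lru_cache</code> decorator. Here is a minimal sketch (the function name <code>tell_story</code> is purely illustrative) with a box that holds five results:</p>

```python
from functools import lru_cache

@lru_cache(maxsize=5)           # the "toy box" holds at most five results
def tell_story(toy):
    # Slow path: like fetching the toy from the big warehouse.
    return f"story about {toy}"

tell_story("robot")             # miss: computed and stored in the cache
tell_story("robot")             # hit: returned instantly from the cache
print(tell_story.cache_info())  # CacheInfo(hits=1, misses=1, maxsize=5, currsize=5) -> currsize=1

# Asking about five more toys overflows the box: the least recently
# used entry ("robot") is evicted to make room for the newest one.
for toy in ["car", "doll", "ball", "kite", "drum"]:
    tell_story(toy)
print(tell_story.cache_info())  # currsize stays capped at 5; "robot" was evicted
```

<p><code>cache_info()</code> lets you confirm the eviction behavior: the cache size never exceeds <code>maxsize</code>, and the entry that went unused longest is the one dropped.</p>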
<p><a href="https://towardsdatascience.com/performance-fix-for-slow-python-lru-cache-f9a454776716">Read More</a></p>