Spark Performance Tuning: Spill

<p>The <strong>spill </strong>problem occurs when an RDD (resilient distributed dataset, Spark&#39;s fundamental data structure) is moved from RAM to disk and then back to RAM again.</p> <p>Simply put, this happens when a given data partition is too large to fit in the executor&#39;s RAM. Spark writes the surplus data to disk, and later reads it back, to free up memory for the remaining tasks in the job.</p> <p>This is an expensive and slow process!</p> <p><img alt="A simple demonstration of the spill problem in Spark applications" src="https://miro.medium.com/v2/resize:fit:413/1*264jTQwloaaYAHR-CK0e7w.png" style="height:270px; width:413px" /></p> <p>A simple demonstration of the spill problem</p> <p>In this article, we will go through all of the topics below to build a clear understanding of the <strong>spill </strong>problem in Spark.</p> <p><a href="https://selectfrom.dev/spark-performance-tuning-spill-7318363e18cb"><strong>Read More</strong></a></p>
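To make the RAM-to-disk-and-back round trip concrete, here is a toy sketch in plain Python of the general "spill to disk" pattern: sorting more values than a (simulated) memory budget allows by writing sorted runs to temporary files and merging them back. This is an illustration of the concept only, not Spark's actual spill implementation; the function name, the `max_in_memory` budget, and the file handling are all invented for this example.

```python
import heapq
import os
import tempfile

def external_sort(values, max_in_memory):
    """Toy illustration of spilling: when the in-memory buffer exceeds
    its budget, a sorted run is "spilled" to disk; at the end, all runs
    are read back and merged. The extra disk writes and reads are
    exactly why spill is expensive and slow."""
    spill_files = []
    buffer = []
    for v in values:
        buffer.append(v)
        if len(buffer) >= max_in_memory:
            # Memory budget exceeded: spill a sorted run to disk.
            buffer.sort()
            f = tempfile.NamedTemporaryFile("w+", delete=False)
            f.write("\n".join(map(str, buffer)) + "\n")
            f.close()
            spill_files.append(f.name)
            buffer = []
    buffer.sort()
    # Merge the in-memory remainder with every spilled run (disk reads).
    handles = [open(name) for name in spill_files]
    runs = [iter(buffer)] + [(int(line) for line in h) for h in handles]
    result = list(heapq.merge(*runs))
    for h in handles:
        h.close()
    for name in spill_files:
        os.remove(name)
    return result
```

For example, `external_sort([5, 3, 8, 1, 9, 2, 7], max_in_memory=3)` spills two runs to disk before merging everything back into a single sorted list. In Spark, the analogous fix is to avoid the spill in the first place, typically by giving partitions less data (e.g., increasing the number of shuffle partitions) or executors more memory.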