10 Confusing XGBoost Hyperparameters and How to Tune Them Like a Pro in 2023
<p>Today, I am going to show you how to squeeze XGBoost so hard that both ‘o’s pop out. We will achieve this by fine-tuning its hyperparameters to such an extent that it will no longer be able to <em>bst</em> after giving us all the performance it can.</p>
<p>This will not be a mere hyperparameter checklist post. Oh no. For each of the ten hyperparameters, I will explain in detail what it does, the range of values it accepts, and best practices for setting it, and then show how to tune them all with Optuna.</p>
<p>Let’s dive in!</p>
<h2>What we wanted all along…</h2>
<p>A dumb underfit XGBoost model is virtually unheard of. Even with default parameter values, it performs reasonably well on many tabular tasks. However, its biggest problem lies in over-effing-fitting.</p>
<p>To address this issue, most XGBoost hyperparameters exist to tame the underlying beast so that it doesn’t simply swallow the training set whole and burp up the bones during testing.</p>
<p>Therefore, through hyperparameter tuning, our goal is to strike the optimal balance between a complex model that overfits and a tamed, simple model that generalizes well to unseen data.</p>
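<p>To make that goal concrete, here is a minimal sketch of what such a tuning loop can look like: Optuna proposes values for a few overfitting-related XGBoost hyperparameters and scores each candidate on a held-out validation set. The dataset, parameter names, and search ranges below are illustrative assumptions, not the article's exact setup.</p>

```python
# Minimal sketch: Optuna searching a few regularization-related
# XGBoost hyperparameters on a synthetic binary classification task.
import optuna
import xgboost as xgb
from sklearn.datasets import make_classification
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic data stands in for a real tabular dataset.
X, y = make_classification(n_samples=5000, n_features=20, random_state=42)
X_train, X_valid, y_train, y_valid = train_test_split(
    X, y, test_size=0.2, random_state=42
)

def objective(trial):
    # Search ranges here are illustrative, not prescriptive.
    params = {
        "objective": "binary:logistic",
        "eval_metric": "auc",
        "max_depth": trial.suggest_int("max_depth", 3, 10),
        "learning_rate": trial.suggest_float("learning_rate", 0.01, 0.3, log=True),
        "subsample": trial.suggest_float("subsample", 0.5, 1.0),
        "colsample_bytree": trial.suggest_float("colsample_bytree", 0.5, 1.0),
        "reg_lambda": trial.suggest_float("reg_lambda", 1e-2, 10.0, log=True),
    }
    model = xgb.XGBClassifier(n_estimators=300, **params)
    model.fit(X_train, y_train, eval_set=[(X_valid, y_valid)], verbose=False)

    # Score on the validation set, i.e. data the trees never saw during fitting.
    preds = model.predict_proba(X_valid)[:, 1]
    return roc_auc_score(y_valid, preds)

study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=50)
print(study.best_params)
```

<p>The key design choice is that the objective returns a metric computed on unseen data, so Optuna is rewarded for generalization rather than for memorizing the training set.</p>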
<p><a href="https://towardsdatascience.com/10-confusing-xgboost-hyperparameters-and-how-to-tune-them-like-a-pro-in-2023-e305057f546">Website</a></p>