The main parameters in XGBoost and their effects on model performance

<p>The learning rate (eta) controls the step size of each boosting update: every new tree's contribution is scaled by eta before being added to the ensemble. A smaller eta produces slower but more careful updates that usually generalize better, at the cost of requiring more boosting rounds, while a larger eta produces faster but coarser updates. A common approach is to start with a relatively high value and then gradually decrease it; for example, start with eta = 0.1 and reduce it by a factor of 0.1 every 10 rounds. However, an eta that is too small leads to slow convergence and long training times, while one that is too large can overshoot the optimum and cause overfitting or unstable training.</p> <p><a href="https://medium.com/@rithpansanga/the-main-parameters-in-xgboost-and-their-effects-on-model-performance-4f9833cac7c"><strong>Learn More</strong></a></p>
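<p>The effect of eta on convergence speed can be illustrated with a minimal sketch. This is not XGBoost itself but a stripped-down boosting loop in plain Python, where each round's "tree" is just a depth-0 stump (the mean of the current residuals); the function name <code>boost</code> and the toy data are made up for illustration. It shows that with the same number of rounds, a smaller eta leaves the model much further from the target.</p>

```python
def boost(y, eta, n_rounds):
    """Toy gradient boosting for a regression target:
    each round fits a depth-0 'tree' (the residual mean),
    scaled by the learning rate eta before being added."""
    pred = [0.0] * len(y)
    for _ in range(n_rounds):
        residuals = [yi - pi for yi, pi in zip(y, pred)]
        step = sum(residuals) / len(residuals)  # this round's "tree"
        pred = [pi + eta * step for pi in pred]
    return pred

y = [10.0, 10.0, 10.0]  # toy target: the model should converge to 10
fast = boost(y, eta=0.5, n_rounds=10)[0]  # close to 10 after 10 rounds
slow = boost(y, eta=0.1, n_rounds=10)[0]  # still far from 10 after 10 rounds
```

<p>In XGBoost itself the same trade-off applies: a lower eta generally needs a proportionally larger <code>num_boost_round</code> (often paired with early stopping) to reach the same training loss.</p>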
Tags: XGBoost