Gradient Boosting from Theory to Practice (Part 2)
<p>In the <a href="https://medium.com/towards-data-science/gradient-boosting-from-theory-to-practice-part-1-940b2c9d8050" rel="noopener">first part</a> of this article, we presented the gradient boosting algorithm and showed its implementation in pseudocode.</p>
<p>In this part of the article, we will explore the classes in Scikit-Learn that implement this algorithm, discuss their various parameters, and demonstrate how to use them to solve several classification and regression problems.</p>
<p>Although the XGBoost library (which will be covered in a future article) provides a more optimized and highly scalable implementation of gradient boosting, for small to medium-sized data sets it is often easier to use the gradient boosting classes in Scikit-Learn, which have a simpler interface and significantly fewer hyperparameters to tune.</p>
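<p>As a taste of what the full article covers, the following is a minimal sketch of how one of these classes, GradientBoostingClassifier, can be used on a toy data set; the hyperparameter values shown are simply the library's defaults, not tuned settings:</p>
<pre>
# A minimal sketch: train Scikit-Learn's GradientBoostingClassifier on the
# Iris data set and report its accuracy on a held-out test set.
from sklearn.datasets import load_iris
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# These are the library defaults, shown explicitly for illustration
clf = GradientBoostingClassifier(n_estimators=100, learning_rate=0.1,
                                 max_depth=3, random_state=42)
clf.fit(X_train, y_train)
print(f'Test accuracy: {clf.score(X_test, y_test):.4f}')
</pre>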
<p><a href="https://medium.com/towards-data-science/gradient-boosting-from-theory-to-practice-part-2-25c8b7ca566b"><strong>Read More</strong></a></p>