Gradient Boosting from Theory to Practice (Part 2)

In the first part of this article, we presented the gradient boosting algorithm and showed its implementation in pseudocode.

In this part of the article, we will explore the classes in Scikit-Learn that implement this algorithm, discuss their various parameters, and demonstrate how to use them to solve several classification and regression problems.

Although the XGBoost library (which will be covered in a future article) provides a more optimized and highly scalable implementation of gradient boosting, for small to medium-sized data sets it is often easier to use the gradient boosting classes in Scikit-Learn, which have a simpler interface and significantly fewer hyperparameters to tune.
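As a minimal sketch of that simpler interface, the following example trains a GradientBoostingClassifier on a synthetic data set (the parameter values shown are Scikit-Learn's defaults, written out explicitly for illustration; the data set itself is assumed for the example):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# A small synthetic binary classification problem (assumed for illustration)
X, y = make_classification(n_samples=500, n_features=10, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# The three most commonly tuned hyperparameters, set to their defaults:
# number of boosting stages, shrinkage rate, and depth of each tree
clf = GradientBoostingClassifier(
    n_estimators=100,
    learning_rate=0.1,
    max_depth=3,
    random_state=42,
)
clf.fit(X_train, y_train)

# Evaluate mean accuracy on the held-out test set
accuracy = clf.score(X_test, y_test)
```

The same pattern applies to regression problems with GradientBoostingRegressor, where score() reports R² instead of accuracy.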
