<h1>Sklearn Pipelines for the Modern ML Engineer: 9 Techniques You Can’t Ignore</h1>
<p>Today, this is what I am selling:</p>
<pre>
awesome_pipeline.fit(X, y)</pre>
<p><code>awesome_pipeline</code> may look just like another variable, but here is what it does to poor <code>X</code> and <code>y</code> under the hood:</p>
<ol>
<li>Automatically isolates numerical and categorical features of <code>X</code>.</li>
<li>Imputes missing values in numeric features.</li>
<li>Log-transforms skewed features while normalizing the rest.</li>
<li>Imputes missing values in categorical features and one-hot encodes them.</li>
<li>Normalizes the target array <code>y</code> for good measure.</li>
</ol>
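<p>The five steps above can be sketched with <code>Pipeline</code>, <code>ColumnTransformer</code>, and <code>TransformedTargetRegressor</code>. The column names, toy data, and choice of <code>Ridge</code> as the final model below are hypothetical, not from the original pipeline:</p>
<pre>
import numpy as np
import pandas as pd
from sklearn.compose import ColumnTransformer, TransformedTargetRegressor
from sklearn.impute import SimpleImputer
from sklearn.linear_model import Ridge
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import FunctionTransformer, OneHotEncoder, StandardScaler

# Hypothetical toy data: one skewed numeric, one plain numeric, one categorical.
X = pd.DataFrame({
    "income": [1_000.0, np.nan, 30_000.0, 4_000.0],
    "age": [25.0, 30.0, np.nan, 40.0],
    "city": ["A", "B", np.nan, "A"],
})
y = np.array([1.0, 2.0, 3.0, 4.0])

skewed_pipe = Pipeline([
    ("impute", SimpleImputer(strategy="median")),
    ("log", FunctionTransformer(np.log1p)),      # log-transform skewed features
])
numeric_pipe = Pipeline([
    ("impute", SimpleImputer(strategy="median")),
    ("scale", StandardScaler()),                 # normalize the rest
])
categorical_pipe = Pipeline([
    ("impute", SimpleImputer(strategy="most_frequent")),
    ("onehot", OneHotEncoder(handle_unknown="ignore")),
])

# Route each column group to its own sub-pipeline.
preprocess = ColumnTransformer([
    ("skewed", skewed_pipe, ["income"]),
    ("numeric", numeric_pipe, ["age"]),
    ("categorical", categorical_pipe, ["city"]),
])

# TransformedTargetRegressor normalizes y before fitting
# and inverts the transform at prediction time.
awesome_pipeline = TransformedTargetRegressor(
    regressor=Pipeline([("prep", preprocess), ("model", Ridge())]),
    transformer=StandardScaler(),
)

awesome_pipeline.fit(X, y)
preds = awesome_pipeline.predict(X)
</pre>
<p>With <code>sklearn.compose.make_column_selector(dtype_include=...)</code>, the numeric/categorical split can be inferred from dtypes instead of hard-coded column lists.</p>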
<p>Apart from collapsing almost 100 lines worth of unreadable code into a single line, <code>awesome_pipeline</code> can now be inserted into cross-validators or hyperparameter tuners, guarding your code from data leakage and making everything reproducible, modular, and headache-free.</p>
<p>Let’s see how to build the thing.</p>
<h2>0. Estimators vs transformers</h2>
<p>First, let’s get the terminology out of the way.</p>
<p>A transformer in Sklearn is any object that accepts the features of a dataset, applies a transformation, and returns the result. It implements the <code>fit</code>, <code>transform</code>, and <code>fit_transform</code> methods.</p>
<p>An example is the <code>QuantileTransformer</code>, which maps numeric features onto a uniform distribution by default, or onto a normal distribution when <code>output_distribution="normal"</code> is set. Because it works on quantiles rather than raw values, it is especially robust for features with outliers.</p>
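<p>Here is a minimal sketch of that interface on synthetic data: <code>fit_transform</code> learns the quantiles from the training set, and <code>transform</code> reuses them on new data:</p>
<pre>
import numpy as np
from sklearn.preprocessing import QuantileTransformer

rng = np.random.default_rng(0)
# A heavily right-skewed feature with outliers; shape (n_samples, n_features).
X_train = rng.lognormal(mean=0.0, sigma=2.0, size=(1000, 1))
X_new = rng.lognormal(mean=0.0, sigma=2.0, size=(10, 1))

# output_distribution="normal" maps the feature onto a Gaussian;
# the default, "uniform", maps it onto [0, 1].
qt = QuantileTransformer(output_distribution="normal", n_quantiles=100)

X_train_t = qt.fit_transform(X_train)  # learn quantiles, then transform
X_new_t = qt.transform(X_new)          # reuse the learned quantiles
</pre>
<p>After the transform, the training feature is approximately standard normal regardless of how skewed the input was.</p>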