<h1>Feature Transformations: A Tutorial on PCA and LDA</h1>
<h2>Introduction</h2>
<p>When dealing with high-dimensional data, it is common to use methods such as Principal Component Analysis (PCA) to reduce the dimension of the data. This converts the data to a different (lower-dimensional) set of features. This contrasts with feature subset selection, which selects a subset of the original features (see <a href="https://medium.com/towards-data-science/feature-subset-selection-6de1f05822b0" rel="noopener">[1]</a> for a tutorial on feature selection).</p>
<p>PCA is a linear transformation of the data to a lower-dimensional space. In this article we start off by explaining what a linear transformation is. Then we show with Python examples how PCA works. The article concludes with a description of Linear Discriminant Analysis (LDA), a <em>supervised</em> linear transformation method. Python code for the methods presented in this article is available on <a href="https://github.com/PadraigC/FeatTransTutorial" rel="noopener ugc nofollow" target="_blank">GitHub</a>.</p>
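<p>As a quick preview of the two methods discussed here, the sketch below applies scikit-learn's PCA (unsupervised) and LDA (supervised) to the iris dataset, reducing its four features to two. This is a minimal illustration, not the code from the article's GitHub repository.</p>
<pre><code>from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

X, y = load_iris(return_X_y=True)  # 150 samples, 4 features, 3 classes

# Unsupervised: PCA projects onto the directions of maximum variance,
# ignoring the class labels entirely.
pca = PCA(n_components=2)
X_pca = pca.fit_transform(X)

# Supervised: LDA uses the labels y to find directions that best
# separate the classes (at most n_classes - 1 components).
lda = LinearDiscriminantAnalysis(n_components=2)
X_lda = lda.fit_transform(X, y)

print(X_pca.shape, X_lda.shape)  # both (150, 2)
</code></pre>
<p>Both transformations are linear: each new feature is a weighted combination of the original four; they differ only in how those weights are chosen.</p>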