P.C.A. meets explainability
<p>This algorithm projects your data into a new space with lower dimensionality. <strong>In simple terms, it reduces the number of columns</strong>. The disturbing fact is that if you start with a dataset that is readable and easy to interpret, you will <strong>almost (hey, I said almost)</strong> certainly end up with fewer columns, but columns that are not easy to understand at all.</p>
<p>For example, let’s say you have a record of people and some information about them, like Sex, Weight, Height, Favourite Movie, and so on. The features are extremely clear, right? But if you apply a Principal Component Analysis, you will end up with fewer columns that are not easy to interpret: you don’t know what “column 1, 2, 3, 4” are. And I mean… you may not care, but I think we can all agree that it is not the best situation when you start losing control over your features, as the sketch below shows.</p>
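<p>Here is a minimal sketch of that loss of readability, assuming scikit-learn and pandas are available; the <code>people</code> table and its column names are a hypothetical stand-in for the example above. The named columns go in, and unnamed principal components come out.</p>
<pre><code>
# A minimal sketch (assuming scikit-learn and pandas) of how readable
# feature names disappear after PCA.
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Hypothetical record of people with clearly named numeric features.
people = pd.DataFrame({
    "weight_kg": [70, 82, 55, 91, 63],
    "height_cm": [175, 180, 160, 188, 170],
    "age_years": [34, 45, 29, 52, 38],
    "shoe_size": [42, 44, 37, 45, 40],
})

# Standardize first: PCA is sensitive to the scale of each column.
scaled = StandardScaler().fit_transform(people)

# Keep only 2 principal components instead of the original 4 columns.
pca = PCA(n_components=2)
reduced = pca.fit_transform(scaled)

print(reduced[:2])                    # rows now live in "column 1, 2" space
print(pca.explained_variance_ratio_)  # variance kept by each component
</code></pre>
<p>The reduced array has no column names, only component indices; each component is a mix of all the original features, which is exactly where the interpretability goes.</p>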
<p><a href="https://towardsdatascience.com/p-c-a-meets-explainability-ba1ba5e4636"><strong>Read the full article</strong></a></p>