I contemplated using Principal Component Analysis (PCA) for one of my recent projects in Machine Learning (ML) with Python as we were trying to figure out and eliminate some redundant features in our data.
As it turned out, PCA wasn’t useful for what we were trying to do and we had to use another algorithm for feature elimination. Nevertheless, it prompted me to dig deeper into PCA and figure out exactly how it works.
My key learning from the exercise – PCA can reduce dimensions, but it doesn’t eliminate them. PCA doesn’t drop some features from the original data and keep the others. It transforms your data into a new set of dimensions whose values are completely different from the original ones. And that made us choose another algorithm for our problem, which required us to eliminate a few features and run Logistic Regression (LR) on the rest.
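To see the difference concretely, here is a minimal sketch using scikit-learn (an assumption on my part – this is not our project’s code). It shows that each reduced dimension mixes all of the original features rather than keeping a subset of them:

```python
import numpy as np
from sklearn.decomposition import PCA

# Toy data: 5 samples with 3 features
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))

# Reduce to 2 principal components
pca = PCA(n_components=2)
X_reduced = pca.fit_transform(X)

# The reduced data has fewer columns, but each new column is a
# linear combination of ALL original features -- none of them is
# simply an original feature that was kept or dropped.
print(X_reduced.shape)        # (5, 2)
print(pca.components_.shape)  # (2, 3): each component mixes all 3 features
```

If your goal is to drop specific original features (as ours was), you need a feature-selection method instead; the PCA output columns no longer correspond to any single input feature.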
I remembered all that while reading this post, which shows a beautiful visualization of PCA extracting features from photos. It is wonderful to see those images recreated by reducing the number of components. This kind of image breakdown is usually done with faces, but he has chosen fashion images to illustrate PCA. Brilliant!!! It even contains a link to the code on GitHub. 🙂
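The idea behind recreating images from fewer components can be sketched with `fit_transform` followed by `inverse_transform` (again using scikit-learn and synthetic data as stand-ins – this is not the post’s actual code or dataset):

```python
import numpy as np
from sklearn.decomposition import PCA

# Stand-in for a stack of grayscale photos: 100 "images" flattened
# to 64 pixels each (synthetic data, not the post's fashion photos).
rng = np.random.default_rng(42)
images = rng.normal(size=(100, 64))

# Keep only 10 principal components, then project back to pixel space.
pca = PCA(n_components=10)
compressed = pca.fit_transform(images)             # shape (100, 10)
reconstructed = pca.inverse_transform(compressed)  # shape (100, 64)

# Fewer components give a coarser reconstruction; adding components
# back recovers more detail -- which is what the post visualizes.
print(reconstructed.shape)
```

Varying `n_components` and re-running the reconstruction is what produces the progressively sharper images in visualizations like the one linked above.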
Definitely worth reading if you are interested in Principal Component Analysis (PCA), eigenvalues or Machine Learning (ML)!