So, are you passionate about everything that you do?

You could be a Multipotentialite!

I got hooked by the question itself. It is quite intriguing for someone like me who gets pulled toward diverse interests all the time.

In this interesting TEDx talk, Emilie Wapnick discusses ‘Why Some of us Don’t Have One True Calling’ and how it could actually be a good thing.

I had written a blog post about Career Concepts & Career Paths based on the work of Michael Driver and Ken Brousseau of the University of Southern California. The spiral career concept is an interesting case study, and this TEDx talk reminded me of it, as I could see quite a few parallels about getting bored and moving on to new learning and new jobs/careers. There have been other terms for this, including one that I love: Renaissance Souls. Anyway!

Usually, specialists (experts) are valued more by most businesses, and they tend to get more attention and adulation. But it is crucial for businesses, as well as for those Renaissance Souls themselves, to realize that they bring a unique perspective with their diverse interests and the intersections between them, which is invaluable. Watch Emilie Wapnick talk about all this in her TEDx talk here –

[Video: Emilie Wapnick, "Why Some of Us Don't Have One True Calling" (TEDx)]

Essential Machine Learning Algorithms in a nutshell

I am sharing some brief but insightful videos that explain the essential Machine Learning (ML) algorithms quite well. All these videos are part of the Data Science and Machine Learning Essentials course by Microsoft on the edX platform.

If you’re interested in learning Machine Learning thoroughly, I would highly recommend the longer Machine Learning course by Stanford University professor Andrew Ng on the Coursera platform. It is one of the best CS courses I have ever taken!

Watch these wonderful videos –

  • Classification – In classification, we try to predict whether a given test entity belongs to a specific class, based on the training set we use to train the algorithm. Thus, classification is predicting a true/false value for an entity with a given set of features. For example, we use classification to determine whether a given email is spam. The mail is checked for various features, such as the presence of certain words in its contents, the sender, etc., to determine if it can be classified as spam. Classification can also be used to detect credit card fraud, to determine whether a tumour is malignant, and for many similar problems.

  • Regression – Regression is used to predict real-valued numeric outcomes. It can be used to predict sales figures or the number of customers for a business, based on the training set we use to train the algorithm. The training examples contain features that denote the factors most likely to affect the outcome. For example, to predict the selling price of a house, its total built-up area would be one of the most important features.

  • Clustering – Unlike classification and regression, clustering is an unsupervised ML algorithm. In clustering, we try to group entities with similar features. For example, clustering can be used to determine the locations of telephone towers so that all users receive optimum signal strength. We may also use clustering to group products or customers for which we do not have an established categorization.

  • Recommendation – Recommendation is used to suggest an item to a user based on their previous usage/purchases or on the preferences of similar users. For example, it is used on online shopping sites such as Amazon to recommend new books or items to a user, and Netflix uses it to recommend movies to its customers. A short code sketch after this list illustrates these four task types.
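
For the curious, here is a minimal, hypothetical sketch of these four task types using scikit-learn and NumPy with their bundled toy datasets. This is my own illustration, not code from the course videos, and the datasets and ratings matrix are stand-ins.

```python
# Minimal sketches of the four task types, using scikit-learn's bundled toy data.
# (Hypothetical example for illustration; not from the edX course.)
import numpy as np
from sklearn.datasets import load_breast_cancer, load_diabetes, load_iris
from sklearn.linear_model import LogisticRegression, LinearRegression
from sklearn.cluster import KMeans
from sklearn.model_selection import train_test_split

# Classification: predict a class label (e.g. malignant vs. benign tumour).
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=5000).fit(X_train, y_train)
print("classification accuracy:", clf.score(X_test, y_test))

# Regression: predict a real-valued outcome (e.g. disease progression, house price).
X, y = load_diabetes(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
reg = LinearRegression().fit(X_train, y_train)
print("regression R^2:", reg.score(X_test, y_test))

# Clustering: group similar entities without using any labels (unsupervised).
X, _ = load_iris(return_X_y=True)
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print("cluster sizes:", np.bincount(km.labels_))

# Recommendation (very simplified): score items for a user via item-item
# cosine similarity on a tiny made-up ratings matrix (0 = unrated).
ratings = np.array([[5, 4, 0, 1],
                    [4, 5, 1, 0],
                    [1, 0, 5, 4]], dtype=float)
norms = np.linalg.norm(ratings, axis=0) + 1e-9
item_sim = (ratings.T @ ratings) / np.outer(norms, norms)
print("scores for user 0:", ratings[0] @ item_sim)
```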

Exploring Images using Principal Component Analysis (PCA)

I contemplated using Principal Component Analysis (PCA) for one of my recent projects in Machine Learning (ML) with Python, as we were trying to identify and eliminate some redundant features in our data.

As it turned out, PCA wasn’t useful for what we were trying to do, and we had to use another algorithm for feature elimination. Nevertheless, it allowed me to dig deeper into PCA and figure out exactly how it works.

My key learning from the exercise about PCA: PCA can reduce the number of dimensions, but it does not eliminate individual features. That is, PCA does not drop some dimensions and keep the others from the original data; it transforms the data into a number of new dimensions (principal components) whose values are combinations of the original features and look completely different from them. That is what made us choose another algorithm for our problem, which required us to eliminate a few specific features and then run Logistic Regression (LR) on the rest.
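
To make that concrete, here is a minimal sketch of my own (assuming scikit-learn and its bundled Iris data, neither of which comes from the project above) showing that PCA’s output dimensions mix all of the original features rather than selecting a subset of them:

```python
# Minimal sketch: PCA transforms data into new dimensions (principal components)
# that mix ALL original features, rather than keeping some features and dropping others.
# (Hypothetical example using scikit-learn's bundled Iris data.)
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA

X, _ = load_iris(return_X_y=True)            # 150 samples x 4 original features
pca = PCA(n_components=2)
X_reduced = pca.fit_transform(X)             # 150 samples x 2 new dimensions

print("original shape:", X.shape)            # (150, 4)
print("reduced shape:", X_reduced.shape)     # (150, 2)
# Each row below has a non-zero weight for every original feature,
# so no single original feature has been selected or eliminated.
print("component weights:\n", pca.components_)
print("explained variance ratio:", pca.explained_variance_ratio_)
```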

Remembered all that while reading this post, which shows a beautiful visualization of PCA extracting features from photos. It is wonderful to read about recreating those images from a reduced number of components. This kind of image decomposition is usually demonstrated with faces, but the author has chosen fashion images to illustrate PCA. Brilliant!!! It even contains a link to the code on GitHub. 🙂
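
The “recreate images from fewer components” idea can be sketched in a few lines. This hypothetical example uses scikit-learn’s bundled 8x8 digit images rather than the fashion photos from the linked post:

```python
# Hypothetical sketch of reconstructing images from a handful of principal components,
# using scikit-learn's bundled digit images instead of the post's fashion photos.
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA

images = load_digits().data                        # 1797 images, 64 pixels each
pca = PCA(n_components=10).fit(images)             # keep only 10 principal components
compressed = pca.transform(images)                 # 64 pixels -> 10 numbers per image
reconstructed = pca.inverse_transform(compressed)  # approximate each image from 10 components
print(images.shape, compressed.shape, reconstructed.shape)
```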

Definitely worth reading if you are interested in Principal Component Analysis (PCA), Eigenvalues or Machine Learning (ML)!

Principal Component Analysis and Fashion

http://blog.thehackerati.com/post/126701202241/eigenstyle