Category: visualization

Animated Machine Learning Classifiers

Ryan Holbrook made awesome animated GIFs in R of several classifiers learning a decision boundary between two classes. Basically, what you see is a machine learning model in action, learning how to distinguish data of two classes, say cats and dogs, using some X and Y variables.

These visuals can be great to understand these algorithms, the models, and their learning process a bit better.

Here’s the original tweet, with the logistic regression animation. If you follow it, you will find a whole thread of classifier GIFs, which I have extracted, pasted, and explained below.

Below is the GIF which I extracted using EZgif.com.

What you see are observations from two classes, say cats and dogs, each represented using colored dots. The dots are placed along X and Y axes, which represent variables about the observations. Their tail lengths and their hairiness, for instance.

Now there’s an optimal way to separate these classes, which is the dashed line. That line best separates the cats from the dogs based on these two variables X and Y. As this is an optimal boundary given this data, it is stable: it does not change.

However, there’s also a solid black line, which does change. This line represents the boundary learned by the machine learning model, in this case using logistic regression. As the model is shown more data, it learns, and the boundary is updated. This learned boundary represents the best line with which the model has learned to separate cats from dogs.

Anything above the boundary is predicted to be class 1, a dog. Everything below is predicted to be class 2, a cat. As logistic regression results in a linear model, the separation boundary is very much linear/straight.

Logistic regression gif by Ryan Holbrook
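
If you want to recreate a static version of such a plot yourself, here is a minimal sketch in R (my own illustration, not Ryan’s code) that simulates a toy cats-versus-dogs dataset, fits a logistic regression with glm(), and draws the learned linear boundary. All names and numbers below are made up:

library(ggplot2)

# Hypothetical toy data: two classes described by two variables
set.seed(1)
n <- 200
df <- data.frame(
  x     = c(rnorm(n / 2, -1), rnorm(n / 2, 1)),
  y     = c(rnorm(n / 2, -1), rnorm(n / 2, 1)),
  class = factor(rep(c("cat", "dog"), each = n / 2))
)

# The learned boundary is where the linear predictor b0 + b1*x + b2*y
# equals zero, i.e. where the predicted probability is exactly 0.5
fit <- glm(class ~ x + y, data = df, family = binomial)
b   <- coef(fit)

ggplot(df, aes(x, y, colour = class)) +
  geom_point() +
  geom_abline(intercept = -b[1] / b[3], slope = -b[2] / b[3])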

These animations are great to get a sense of how the models come to their boundaries in the back-end.

For instance, other machine learning models are able to use non-linear boundaries to distinguish classes, such as this quadratic discriminant analysis (QDA). This “learned” boundary is much closer to the optimal boundary:

Quadratic discriminant analysis gif by Ryan Holbrook
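
Here is a comparable sketch for QDA, reusing the toy data frame df from the logistic regression sketch above. MASS::qda() is scored on a fine grid of points so the curved 50/50 boundary can be traced with a contour line (again my own illustration, not Ryan’s code):

library(MASS)    # qda()

fit_qda <- qda(class ~ x + y, data = df)

# Score a grid of (x, y) values and trace the curved 50/50 boundary
grid <- expand.grid(x = seq(-4, 4, length.out = 200),
                    y = seq(-4, 4, length.out = 200))
grid$p_dog <- predict(fit_qda, grid)$posterior[, "dog"]

ggplot(df, aes(x, y)) +
  geom_point(aes(colour = class)) +
  geom_contour(data = grid, aes(z = p_dog), breaks = 0.5, colour = "black")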

Models using multivariate adaptive regression splines (or MARS) seem to result in multiple linear boundaries pasted together:

Multivariate adaptive regression splines gif by Ryan Holbrook
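
A rough sketch of the same idea with the earth package, a common MARS implementation in R, reusing df and grid from the sketches above. A 0/1 helper column is used for the glm-style fit; this is illustrative only, not Ryan’s code:

library(earth)    # MARS implementation

df$is_dog  <- as.numeric(df$class == "dog")   # 0/1 response for the glm-style fit
fit_mars   <- earth(is_dog ~ x + y, data = df, glm = list(family = binomial))
grid$p_dog <- predict(fit_mars, newdata = grid, type = "response")[, 1]

ggplot(df, aes(x, y)) +
  geom_point(aes(colour = class)) +
  geom_contour(data = grid, aes(z = p_dog), breaks = 0.5, colour = "black")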

Next, we have the k-nearest neighbors algorithm, which predicts for each point (animal) the class (cat/dog) based on the “k” points closest to it. As you see, this results in a highly fluctuating, localized boundary.

K-nearest neighbors gif by Ryan Holbrook
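
For k-nearest neighbors there is no real training step: class::knn() classifies each grid point directly from the training observations. A minimal sketch, reusing df and grid from above; the choice of k = 5 is arbitrary:

library(class)    # knn()

grid$pred <- knn(train = df[, c("x", "y")],
                 test  = grid[, c("x", "y")],
                 cl    = df$class, k = 5)

ggplot(df, aes(x, y)) +
  geom_tile(data = grid, aes(fill = pred), alpha = 0.3) +
  geom_point(aes(colour = class))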

Now, Ryan decided to push the challenge, and simulate new data for two classes with a more difficult decision boundary. The new data and optimal boundaries look like this:

The optimal decision boundary.
Via https://mathformachines.com/posts/decision/

On these data, Ryan put a whole range of non-linear models to work.

Like this support-vector machine, which tries to create optimal boundaries built of support vectors around all the cats and all the dogs (this is definitely not a technical, error-free explanation of what’s happening here).

Support vector machine gif by Ryan Holbrook
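
Here is a minimal way to fit such an SVM yourself with the e1071 package (radial kernel), again reusing df and grid from the earlier sketches rather than Ryan’s setup:

library(e1071)    # svm()

fit_svm   <- svm(class ~ x + y, data = df, kernel = "radial")
grid$pred <- predict(fit_svm, grid)

ggplot(df, aes(x, y)) +
  geom_tile(data = grid, aes(fill = pred), alpha = 0.3) +
  geom_point(aes(colour = class))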

Generalized additive models are also cool to see in action. Why Ryan’s versions render so slowly, I don’t know. To learn more about GAMs, I strongly advise this tutorial here.

Generalized additive model gif by Ryan Holbrook
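
If you want to try a GAM yourself, here is a rough sketch with mgcv::gam(), reusing df and grid from the sketches above; the smooth term s(x, y) lets the boundary bend where the data demand it:

library(mgcv)    # gam()

df$is_dog  <- as.numeric(df$class == "dog")   # 0/1 response for the binomial fit
fit_gam    <- gam(is_dog ~ s(x, y), data = df, family = binomial)
grid$p_dog <- predict(fit_gam, grid, type = "response")

ggplot(df, aes(x, y)) +
  geom_point(aes(colour = class)) +
  geom_contour(data = grid, aes(z = p_dog), breaks = 0.5, colour = "black")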

Let’s jump into some tree-based algorithms and the resulting models. A decision tree classifies data based on multiple, sequential, binary splits. Here, Ryan trained a simple decision tree:

Decision tree gif by Ryan Holbrook
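
A minimal decision tree with rpart on the same toy data; the sequential binary splits show up as axis-aligned, rectangular regions (illustration only):

library(rpart)    # classification and regression trees

fit_tree  <- rpart(class ~ x + y, data = df, method = "class")
grid$pred <- predict(fit_tree, grid, type = "class")

ggplot(df, aes(x, y)) +
  geom_tile(data = grid, aes(fill = pred), alpha = 0.3) +
  geom_point(aes(colour = class))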

As well as its big brother, a random forest, which uses hundreds of trees in the back end and thus results in a more flexible boundary:

Random forest gif by Ryan Holbrook
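
And the random forest equivalent with the randomForest package, same toy data, same grid; averaging hundreds of trees is what smooths out the boundary (again just a sketch):

library(randomForest)

fit_rf    <- randomForest(class ~ x + y, data = df, ntree = 500)
grid$pred <- predict(fit_rf, grid)

ggplot(df, aes(x, y)) +
  geom_tile(data = grid, aes(fill = pred), alpha = 0.3) +
  geom_point(aes(colour = class))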

Extreme gradient boosting is also a tree-based algorithm, which sequentially combines many trees through gradient boosting and regularization to optimize the bias-variance tradeoff. Here’s an earlier blog on how to get started with XGBoost in Python or R:

Extreme gradient boosting gif by Ryan Holbrook
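
For completeness, a rough sketch with the xgboost package on the same toy data. Note that xgboost expects a numeric matrix and a 0/1 label rather than a formula and data frame; the number of rounds is arbitrary and this is not Ryan’s code:

library(xgboost)

X <- as.matrix(df[, c("x", "y")])
y <- as.numeric(df$class == "dog")
fit_xgb <- xgboost(data = X, label = y, nrounds = 50,
                   objective = "binary:logistic", verbose = 0)
grid$p_dog <- predict(fit_xgb, as.matrix(grid[, c("x", "y")]))

ggplot(df, aes(x, y)) +
  geom_point(aes(colour = class)) +
  geom_contour(data = grid, aes(z = p_dog), breaks = 0.5, colour = "black")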

Finally, a machine learning project is not complete without an artificial neural network. Learn more about these here:

Artificial neural network gif by Ryan Holbrook
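
And a last sketch with nnet, a simple single-hidden-layer network, reusing df and grid from above; the size and decay values are arbitrary choices, not tuned:

library(nnet)    # single-hidden-layer neural network

fit_nn <- nnet(class ~ x + y, data = df, size = 8, decay = 0.01,
               maxit = 500, trace = FALSE)
grid$p_dog <- predict(fit_nn, grid, type = "raw")[, 1]

ggplot(df, aes(x, y)) +
  geom_point(aes(colour = class)) +
  geom_contour(data = grid, aes(z = p_dog), breaks = 0.5, colour = "black")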

If you want to know more about this project of Ryan Holbrook, do have a look at his accompanying blog here. You can also find Ryan’s code here on github.

Leonardo: Adaptive Color Palettes using Contrast-Ratio

Leonardo is an open source tool for creating adaptive color palettes: a custom color generator that creates colors based on a target contrast ratio. Leonardo is delivered as a JavaScript module (@adobe/leonardo-contrast-colors) with a web interface to aid in creating your color palette configurations, which can easily be shared with both designers and engineers. Simply put, Leonardo is for dynamic accessibility of your products.

Read all about Leonardo in this Medium blog post by its author.

The tool is very easy to use. Even I could create a quick palette! Though it’s probably horrendous (due to my colorblindness : ))

Top-19 articles of 2019

With only one day remaining in 2019, let’s review the year. 2019 was my third year of blogging and it went by even quicker than the previous two!

Personally, it has been a busy year for me: I started a new job, increased my speaking and teaching activities, bought and moved to my new house, and got married on top of that!

Fortunately, I also started working part-time. This way, I could still reserve some time for learning and sharing my learnings. And sharing I did:

I posted 95 blogs in 2019!
That means one new post every 4 days!

paulvanderlaken.com improved its online footprint as well. We received over 100k visitors in 2019! And many of you subscribed and stuck around. Our little community now includes 55 more members than it did last year! And that is not even counting the followers of my new Twitter bot Artificial Stupidity!

Thank you for your continued interest!

Now, I am always curious as to what brings you to my website, so let’s have a look at some 2019 statistics (which I downloaded via my new Python scraper).

Most read articles

There is clearly a power-law distribution in how often my blogs are read.

Some blogs consistently attract dozens of visitors each day. Others receive only a handful of visitors over the course of a year.

These are the 19 articles which were most read in 2019. Hyperlinks are included below the bar chart. It’s a nice combination of R programming, machine learning, HR-related materials, and some entertainment (games & gambling) in between.

Which have and haven’t you read?

  1. R resources
  2. R tips and tricks
  3. New to R?
  4. Books for the modern, data-driven HR professional
  5. The house always wins
  6. Visualization innovations
  7. Simple correlation analysis in R
  8. Beating battleships with algorithms and AI
  9. Regular expressions in R
  10. Simpson’s paradox
  11. Visualizing the k-means clustering algorithm
  12. Survival of the best fit
  13. Datasets to practice and learn data science
  14. Identifying dirty twitter bots
  15. Game of Thrones map
  16. Screeps
  17. Northstar
  18. The difference between DS, ML, and AI visualized
  19. Light GBM vs. XGBoost

Rising stars

Half of these most-read articles were actually published in 2017 or ’18 already. However, of the 95 articles published in 2019, some also demonstrate promising visitor patterns:

The People Analytics books, Visual innovations, and AI Battleships are in the top 19, and several others made it too.

Some of these newer blogs haven’t had the time to mature and earn their place yet though. Regardless, I have high hopes!

Particularly for Neural Synesthesia, which was easily one of my greatest WOW-moments for ML applications in 2019. It’s truly mesmerizing to see a GAN traverse its latent space.

Reading & posting patterns

I have been posting quite regularly throughout the year, apart from a holiday to Thailand at the start of January and the start of my new job in February.

While I write and post most of my blogs during the weekend, I should probably consider postponing publication, as you guys are mostly active on Tuesdays and Wednesdays!

Statistical summary of 2019

What better way to end 2019 than with a statistical summary?

I have posted more and shorter blogs, and you’ve rewarded me with more visits and more likes (also per post). However, we need more discussion!

Statistic          2018    2019     Δ
Views              85614   107388   +25%
Unique visitors    57594   70615    +23%
Posts              61      95       +56%
Words / post       518     371      -40%
Likes              51      111      +118%
Comments           24      16       -33%
As of 29/12/2019

2020 Outlook

It took some time to get started, but halfway through 2017 my blog started attracting an audience. People stayed on during 2018, and visitor numbers continued to increase through 2019.

With an ongoing expansion from R into Python, and an increased focus on sharing resources, applications, and novelties related to data visualization and machine learning, I have a lot more in store for 2020!

I hope you stick around for the ride!

Please like, subscribe, share, and comment, and we’ll make sure 2020 will be at least as interesting and full of (machine) learning as 2019 has been!

Turning the Traveling Salesman problem into Art

Robert Bosch is a professor of Natural Science in the department of Mathematics at Oberlin College and has found a creative way to elevate the travelling salesman problem to an art form.

For those who aren’t familiar with the travelling salesman problem (wiki), it is a classic algorithmic problem in the field of computer science and operations research. Basically, we are looking for the solution that is cheapest, shortest, or fastest for a given problem. Most commonly, it is represented as a graph (network) describing the locations of a set of nodes (elements in that network). Wikipedia has a description I can’t improve on:

The Travelling Salesman Problem describes a salesman who must travel between N cities. The order in which he does so is something he does not care about, as long as he visits each once during his trip, and finishes where he was at first. Each city is connected to other close by cities, or nodes, by airplanes, or by road or railway. Each of those links between the cities has one or more weights (or the cost) attached. The cost describes how “difficult” it is to traverse this edge on the graph, and may be given, for example, by the cost of an airplane ticket or train ticket, or perhaps by the length of the edge, or time required to complete the traversal. The salesman wants to keep both the travel costs, as well as the distance he travels as low as possible.

Wikipedia
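
To make that a bit more tangible, here is a minimal nearest-neighbour heuristic in base R: a greedy approximation that simply hops to the closest unvisited city, which is quick but generally not optimal. The coordinates are simulated for illustration; this is not Bosch’s method:

set.seed(1)
cities <- matrix(runif(2 * 50), ncol = 2)   # 50 random cities in the unit square
d <- as.matrix(dist(cities))                # pairwise distance matrix

tour <- 1                                   # start at city 1
while (length(tour) < nrow(cities)) {
  last      <- tail(tour, 1)
  remaining <- setdiff(seq_len(nrow(cities)), tour)
  tour      <- c(tour, remaining[which.min(d[last, remaining])])
}
tour <- c(tour, tour[1])                    # close the loop: return home

plot(cities, type = "n", xlab = "", ylab = "")
lines(cities[tour, ], col = "steelblue")
points(cities, pch = 19)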

Here’s a visual representation of the problem and some algorithmic approaches to solving it:

Now, Robert Bosch has applied the traveling salesman problem to well-known art pieces, trying to redraw them by connecting a series of points with one continuous line. Robert even turned it into a challenge so people can test how well their travelling salesman algorithms perform on, for instance, the Mona Lisa or a Vincent van Gogh.

Just look at the detail on these awesome Dutch classics:

Read more about this awesome project here: http://www.math.uwaterloo.ca/tsp/data/art/

P.S. Why do Brits and Americans have this spelling feud?! As a non-native, I never know what to pick. Should I write modelling or modeling, travelling or traveling, tomato or tomato? I was taught the U.K. style, but the U.S. style pops up whenever I google stuff, so I am constantly confused! Now I subconsciously intertwine both styles in a single text…

OriginLab’s Graph Gallery: A blast from the past

Continuing my recent line of posts on data visualization resources, I found another repository in my inbox: OriginLab’s GraphGallery!

If I’m being honest, I would personally advise you to look at the dataviz project instead, if you haven’t heard of that one yet.

However, OriginLab might win in terms of sentiment. It has this nostalgic look of the ’90s, and apparently people really used it back then. Despite looking old, the repo seems to be quite extensive, with nearly 400 different types of data visualizations:

Quantity isn’t everything though, as some of the 400 entries are disgustingly horrible:

There’s so much wrong with this graph…

What I do like about this OriginLab repo is that it has an option to sort its contents using a random order. This really facilitates discovery of new pearls:

Thanks to Maarten Lambrechts for sharing this resource on twitter a while back!

Visualizing Sampling Distributions in ggplot2: Adding area under the curve

Thank you ggplot2tutor for solving one of my struggles. Apparently this is all it takes:

library(ggplot2)

# Draw a standard normal density as a line between -3 and 3
ggplot(NULL, aes(x = c(-3, 3))) +
  stat_function(fun = dnorm, geom = "line")

I can’t begin to count how often I have wanted to visualize a (normal) distribution in a plot. For instance to show how my sample differs from expectations, or to highlight the skewness of the scores on a particular variable. I wish I’d known earlier that I could just add one simple geom to my ggplot!

Want a different mean and standard deviation? Just pass a list to the args argument:

ggplot(NULL, aes(x = c(0, 20))) +
  stat_function(fun = dnorm,
                geom = "area",
                args = list(
                  mean = 10,
                  sd = 3
                ))

Need a different distribution? Just pass a different distribution function to stat_function. For instance, an F-distribution, with the df function:

ggplot(NULL, aes(x = c(0, 5))) +
  stat_function(fun = df,
                geom = "area",
                args = list(
                  df1 = 2,
                  df2 = 10
                ))

You can make it as complex as you want. The original ggplot2tutor blog provides this example:

ggplot(NULL, aes(x = c(-3, 5))) +
  stat_function(
    fun = dnorm,
    geom = "area",
    fill = "steelblue",
    alpha = .3
  ) +
  stat_function(
    fun = dnorm,
    geom = "area",
    fill = "steelblue",
    xlim = c(qnorm(.95), 4)
  ) +
  stat_function(
    fun = dnorm,
    geom = "line",
    linetype = 2,
    fill = "steelblue",
    alpha = .5,
    args = list(
      mean = 2
    )
  ) +
  labs(
    title = "Type I Error",
    x = "z-score",
    y = "Density"
  ) +
  scale_x_continuous(limits = c(-3, 5))

Have a look at the original blog here: https://ggplot2tutor.com/sampling_distribution/sampling_distribution/