Author: Paul van der Laken

Visualizing decision tree partition and decision boundaries

Grant McDermott developed this new R package I wish I had thought of: parttree

parttree includes a set of simple functions for visualizing decision tree partitions in R with ggplot2. The package is not yet on CRAN, but can be installed from GitHub using:

# install.packages("remotes")
remotes::install_github("grantmcdermott/parttree")

Using the familiar ggplot2 syntax, we can simply add decision tree boundaries to a plot of our data.

In this example from his GitHub page, Grant trains a decision tree on the famous Titanic data using the parsnip package, and then visualizes the resulting partition / decision boundaries using the simple function geom_parttree().

library(parsnip)
library(parttree) ## For geom_parttree()
library(ggplot2)  ## For the plotting functions
library(titanic)  ## Just for a different data set
set.seed(123) ## For consistent jitter
titanic_train$Survived = as.factor(titanic_train$Survived)
## Build our tree using parsnip (but with rpart as the model engine)
ti_tree =
  decision_tree() %>%
  set_engine("rpart") %>%
  set_mode("classification") %>%
  fit(Survived ~ Pclass + Age, data = titanic_train)
## Plot the data and model partitions
titanic_train %>%
  ggplot(aes(x=Pclass, y=Age)) +
  geom_jitter(aes(col=Survived), alpha=0.7) +
  geom_parttree(data = ti_tree, aes(fill=Survived), alpha = 0.1) +
  theme_minimal()

Super awesome!

This visualization shows precisely where the trained decision tree predicts that Titanic passengers would have survived (blue regions) or not (red regions), based on their age and passenger class (Pclass).

This will be super helpful if you need to explain to yourself, your team, or your stakeholders how your model works. Currently, only rpart decision trees are supported, but I am very much hoping that Grant continues building this functionality!

ML Model Degradation, and why work only just starts when you reach production

The assumption that a Machine Learning (ML) project is done when a trained model is put into production is quite faulty. Nevertheless, according to Alexandre Gonfalonieri — artificial intelligence (AI) strategist at Philips — this assumption is among the most common mistakes of companies taking their AI products to market.

Actually, in the real world, we see pretty much the opposite of this assumption. People like Alexandre therefore strongly recommend that companies keep their best data scientists and engineers on an ML project, especially after it reaches production!

Why?

If you’ve ever productionized a model and really started using it, you know that, over time, your model will start performing worse.

In order to maintain the original accuracy of an ML model that interacts with real-world customers or processes, you will need to continuously monitor and/or tweak it!

In the best case, algorithms are retrained with each new data delivery. This creates a maintenance burden that is not fully automatable. According to Alexandre, tending to machine learning models demands the close scrutiny, critical thinking, and manual effort that only highly trained data scientists can provide.

This means that there’s a higher marginal cost to operating ML products compared to traditional software, whereas the whole reason we implement these products is often to decrease (the) costs (of human labor)!

What causes this?

Your model’s accuracy will often be at its best when it just leaves the training grounds.

Building a model on relevant and available data and coming up with accurate predictions is a great start. However, for how long do you expect those data — which age by the day — to keep producing accurate predictions?

Chances are that each day, the model’s latent performance will go down.

This phenomenon is called concept drift, and is heavily studied in academia but less often considered in business settings. Concept drift means that the statistical properties of the target variable, which the model is trying to predict, change over time in unforeseen ways.

In simpler terms, your model is no longer modelling the outcome that it used to model. This causes problems because the predictions become less accurate as time passes.

Particularly, models of human behavior seem to suffer from this pitfall.

The key is that, unlike a simple calculator, your ML model interacts with the real world, and the data it generates and receives will change over time. A key part of any ML project should therefore be predicting how your data is going to change over time.
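
To make this concrete, here is a small hypothetical R sketch of my own (not from Alexandre's article): a linear model is trained on old data, the underlying relationship then drifts, and the prediction error on fresh data grows accordingly.

set.seed(42)

# "Old" data: the outcome depends strongly on x
old_data = data.frame(x = runif(500))
old_data$y = 2 * old_data$x + rnorm(500, sd = 0.2)

model = lm(y ~ x, data = old_data)

# "New" data: the relationship has drifted in the meantime
new_data = data.frame(x = runif(500))
new_data$y = 0.5 * new_data$x + 1 + rnorm(500, sd = 0.2)

# root mean squared error of the old model on both data sets
rmse = function(model, data) sqrt(mean((predict(model, data) - data$y)^2))
rmse(model, old_data) # close to the noise level of ~0.2
rmse(model, new_data) # much larger: the model no longer describes the process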

Read more about concept drift here.

How do we know when our models fail?

You need to create a monitoring strategy before reaching production!

According to Alexandre, as soon as you feel confident with your project after the proof-of-concept stage, you should start planning a strategy for keeping your models up to date.

How often will you check in?

On the whole model, or just some features?

What features?

In general, sensible model surveillance combined with a well-thought-out schedule of model checks is crucial to keeping a production model accurate. Prioritizing checks on the key variables and setting up warnings for when a change has taken place will ensure that you are never caught by surprise by a change to the environment that robs your model of its efficacy.

Alexandre via
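
As a minimal illustration of what such a check could look like, here is a hypothetical R sketch of my own (not from Alexandre's article): it compares the training-time distribution of one key feature with newly incoming data via a Kolmogorov–Smirnov test and raises a warning when the two differ markedly.

# hypothetical sketch: warn when a key feature's distribution has shifted
check_feature_drift = function(train_values, new_values, alpha = 0.01) {
  test = ks.test(train_values, new_values)
  if (test$p.value < alpha) {
    warning("Possible drift: feature distribution differs (p = ",
            signif(test$p.value, 3), ")")
  }
  invisible(test)
}

set.seed(1)
train_age = rnorm(1000, mean = 35, sd = 8) # the feature at training time
new_age   = rnorm(1000, mean = 42, sd = 8) # the feature months into production

check_feature_drift(train_age, new_age) # triggers the warning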

Your strategy will strongly differ based on your model and your business context.

Moreover, there are many different types of concept drift that can affect your models, so thinking through the right strategy for your specific case should be a key element of your approach!

Different types of model drift (via)

Let’s solve it!

Once you observe degraded model performance, you will need to redesign your model (pipeline).

One solution is referred to as manual learning. Here, we provide the newly gathered data to our model and re-train and re-deploy it just like the first time we built the model. If you think this sounds time-consuming, you are right. Moreover, the tricky part is not refreshing and retraining a model, but rather thinking of new features that might deal with the concept drift.

A second solution could be to weight your data. Some algorithms allow for this very easily; for others you will need to build it in yourself. One recommended weighting schema is to weight observations inversely proportional to their age. This way, more attention is paid to the most recent data (higher weight) and less to the oldest data (lower weight) in your training set. If there is drift, your model will pick it up and correct accordingly.
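
A minimal sketch of such a weighting schema in R could look as follows (my own illustration; the data and variable names are made up). Many modeling functions, such as lm() and glm(), accept a weights argument directly.

set.seed(2)

# made-up data: each observation has an age in days
df = data.frame(
  age_in_days = sample(1:365, 200, replace = TRUE),
  x = rnorm(200)
)
df$y = 1.5 * df$x + rnorm(200)

# weight inversely proportional to age: recent rows count more than old ones
df$weight = 1 / df$age_in_days

weighted_model = lm(y ~ x, data = df, weights = weight)
summary(weighted_model)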

According to Alexandre and many others, the third and best solution is to build your productionized system in such a way that you continuously evaluate and retrain your models. The benefit of such a continuous learning system is that it can be automated to a large extent, thus reducing (the human labor) maintenance costs.

Although Alexandre doesn’t expand on how to do these, he does formulate the three steps below:

Via the original blog
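
In the same spirit, a very rough, hypothetical sketch of such a continuous evaluation-and-retraining loop in R could look like this (all function and variable names below are illustrative, not Alexandre's):

# evaluate the current model on a fresh batch of data
evaluate_rmse = function(model, data) {
  sqrt(mean((predict(model, data) - data$y)^2))
}

# retrain whenever performance on the new batch degrades beyond a threshold
update_model = function(current_model, new_batch, all_data, rmse_threshold = 0.5) {
  if (evaluate_rmse(current_model, new_batch) > rmse_threshold) {
    message("Performance degraded; retraining on all available data")
    current_model = lm(y ~ x, data = rbind(all_data, new_batch))
  }
  current_model
}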

In my personal experience, if you have your model retrained (automatically) every now and then, use a smart weighting schema, and keep monitoring the changes in the parameters and in the predictions for several “unit-test” cases, you will come a long way.

If you’re feeling more adventurous, you could improve on matters by having your model perform some exploration (at random or rule-wise) of potential new relationships in your data (see for instance multi-armed bandits). This will definitely take you a long way!

Solving concept drift (via)

How to standardize group colors in data visualizations in R

One best practice in visualization is to make your color scheme consistent across figures.

For instance, if you’re making multiple plots of the same dataset — say, a group of 5 companies — you want each company to have the same, consistent color across all these plots.

R has some great data visualization capabilities. Particularly the ggplot2 package makes it so easy to spin up a good-looking visualization quickly.

The default in ggplot2 is to look at the number of groups in your data and pick “evenly spaced” colors across the hue color wheel. This looks great straight out of the box:

# install.packages('ggplot2')
library(ggplot2)

theme_set(new = theme_minimal()) # sets a default theme

set.seed(1) # ensure reproducibility

# generate some data
n_companies = 5
df1 = data.frame(
  company = paste('Company', seq_len(n_companies), sep = '_'),
  employees = sample(50:500, n_companies),
  stringsAsFactors = FALSE
)

# make a simple column/bar plot
ggplot(data = df1) + 
  geom_col(aes(x = company, y = employees, fill = company))

However, it can be challenging to keep the coloring consistent across plots.

For instance, suppose we want to visualize a subset of these data points.

index_subset1 = c(1, 3, 4, 5) # specify a subset

# make a plot using the subsetted dataframe
ggplot(data = df1[index_subset1, ]) + 
  geom_col(aes(x = company, y = employees, fill = company))

As you can see, the color scheme has now changed. With one fewer group/company, ggplot2 picks 4 new colors evenly spaced around the color wheel. All but the first differ from the original colors the companies had.

One way to deal with this in R and ggplot2 is to add a scale_* layer to the plot.

Here we manually set hex color values in the scale_fill_manual function. Using scales::hue_pal(), we can look up the default ggplot2 colors for five groups and then drop the value that belonged to the excluded company.

# install.packages('scales')

# the hue_pal function from the scales package looks up a number of evenly spaced colors
# which we can save as a vector of character hex values
default_palette = scales::hue_pal()(5)

# these colors we can then use in a scale_* function to manually override the color schema
ggplot(data = df1[index_subset1, ]) +
  geom_col(aes(x = company, y = employees, fill = company)) +
  scale_fill_manual(values = default_palette[-2]) # we remove the element that belonged to company 2

As you can see, the colors are now aligned with the previous schema. Only Company 2 is dropped, but all other companies retained their color.

However, this was very much hard-coded into our program: we had to specify which company to drop using default_palette[-2].

If the subset changes, which often happens in real life, our solution will break as the values in the palette no longer align with the groups R encounters:

index_subset2 = c(1, 2, 5) # but the subset might change

# and all manually-set colors will immediately misalign
ggplot(data = df1[index_subset2, ]) +
  geom_col(aes(x = company, y = employees, fill = company)) +
  scale_fill_manual(values = default_palette[-2])

Fortunately, R is a smart language, and you can work your way around this!

All we need to do is create what I call a named color palette!

It’s as simple as specifying a vector of hex color values! Alternatively, you can use the grDevices::rainbow() or grDevices::colors() functions, or one of the many functions included in the scales package.

# you can hard-code a palette using color strings
c('red', 'blue', 'green')

# or you can use the rainbow or colors functions of the grDevices package
rainbow(n_companies)
colors()[seq_len(n_companies)]

# or you can use the scales::hue_pal() function
palette1 = scales::hue_pal()(n_companies)
print(palette1)
[1] "#F8766D" "#A3A500" "#00BF7D" "#00B0F6" "#E76BF3"

Now we need to assign names to this vector of hex color values. And these names have to correspond to the labels of the groups that we want to colorize.

You can use the names function for this.

names(palette1) = df1$company
print(palette1)
Company_1 Company_2 Company_3 Company_4 Company_5
"#F8766D" "#A3A500" "#00BF7D" "#00B0F6" "#E76BF3"

But I prefer to use the setNames function so I can do the initialization, assignment, and naming simultaneously. It’s all the same though.

palette1_named = setNames(object = scales::hue_pal()(n_companies), nm = df1$company)
print(palette1_named)
Company_1 Company_2 Company_3 Company_4 Company_5
"#F8766D" "#A3A500" "#00BF7D" "#00B0F6" "#E76BF3"

With this named color vector and the scale_*_manual functions we can now manually override the fill and color schemes in a flexible way. This results in the same plot we had without using the scale_*_manual function:

ggplot(data = df1) + 
  geom_col(aes(x = company, y = employees, fill = company)) +
  scale_fill_manual(values = palette1_named)

However, now it does not matter if the dataframe is subsetted, as we specifically tell R which colors to use for which group labels by means of the named color palette:

# the colors remain the same if some groups are not found
ggplot(data = df1[index_subset1, ]) + 
  geom_col(aes(x = company, y = employees, fill = company)) +
  scale_fill_manual(values = palette1_named)
# and also if other groups are not found
ggplot(data = df1[index_subset2, ]) + 
  geom_col(aes(x = company, y = employees, fill = company)) +
  scale_fill_manual(values = palette1_named)

Once you are aware of these superpowers, you can do so much more with them!

How about highlighting a specific group?

Just set all the other colors to ‘grey’…

# lets create an all grey color palette vector
palette2 = rep('grey', times = n_companies)
palette2_named = setNames(object = palette2, nm = df1$company)
print(palette2_named)
Company_1 Company_2 Company_3 Company_4 Company_5
"grey" "grey" "grey" "grey" "grey"
# this looks terrible in a plot
ggplot(data = df1) + 
  geom_col(aes(x = company, y = employees, fill = company)) +
  scale_fill_manual(values = palette2_named)

… and assign one of the companies a different color:

# override one of the 'grey' elements using an index by name
palette2_named['Company_2'] = 'red'
print(palette2_named)
Company_1 Company_2 Company_3 Company_4 Company_5
"grey" "red" "grey" "grey" "grey"
# and our plot is professionally highlighting a certain group
ggplot(data = df1) + 
  geom_col(aes(x = company, y = employees, fill = company)) +
  scale_fill_manual(values = palette2_named)

We can apply these principles to other types of data and plots.

For instance, let’s generate some time series data…

timepoints = 10
df2 = data.frame(
  company = rep(df1$company, each = timepoints),
  employees = rep(df1$employees, each = timepoints) + round(rnorm(n = nrow(df1) * timepoints, mean = 0, sd = 10)),
  time = rep(seq_len(timepoints), times = n_companies),
  stringsAsFactors = FALSE
)

… and visualize these using a line plot, adding the color palette in the same way as before:

ggplot(data = df2) + 
  geom_line(aes(x = time, y = employees, col = company), size = 2) +
  scale_color_manual(values = palette1_named)

If we miss one of the companies — let’s skip Company 2 — the palette makes sure the others remain colored as specified:

ggplot(data = df2[df2$company %in% df1$company[index_subset1], ]) + 
  geom_line(aes(x = time, y = employees, col = company), size = 2) +
  scale_color_manual(values = palette1_named)

Also, the highlighted color palette we used before will still work like a charm!

ggplot(data = df2) + 
  geom_line(aes(x = time, y = employees, col = company), size = 2) +
  scale_color_manual(values = palette2_named)

Now, let’s scale up the problem! Pretend we have not 5, but 20 companies.

The code will work all the same!

set.seed(1) # ensure reproducibility

# generate new data for more companies
n_companies = 20
df1 = data.frame(
  company = paste('Company', seq_len(n_companies), sep = '_'),
  employees = sample(50:500, n_companies),
  stringsAsFactors = FALSE
)

# lets create an all grey color palette vector
palette2 = rep('grey', times = n_companies)
palette2_named = setNames(object = palette2, nm = df1$company)

# highlight one company in a different color
palette2_named['Company_2'] = 'red'
print(palette2_named)

# make a bar plot
ggplot(data = df1) + 
  geom_col(aes(x = company, y = employees, fill = company)) +
  scale_fill_manual(values = palette2_named) +
  theme(axis.text.x = element_text(angle = 45, hjust = 1, vjust = 1)) # rotate and align the x labels

Also for the time series line plot:

timepoints = 10
df2 = data.frame(
  company = rep(df1$company, each = timepoints),
  employees = rep(df1$employees, each = timepoints) + round(rnorm(n = nrow(df1) * timepoints, mean = 0, sd = 10)),
  time = rep(seq_len(timepoints), times = n_companies),
  stringsAsFactors = FALSE
)

ggplot(data = df2) + 
  geom_line(aes(x = time, y = employees, col = company), size = 2) +
  scale_color_manual(values = palette2_named)

The possibilities are endless; the power is now yours!

Just think of the efficiency gain if you were to make a custom color palette with, for instance, your company’s brand colors!
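
For instance, a hypothetical brand palette (the hex codes and group names below are made up) could be defined once and reused everywhere:

# hypothetical company brand colors, named after the groups they should map to
brand_palette = c(
  Finance   = "#0A66C2",
  Marketing = "#E4002B",
  Sales     = "#FFB81C",
  HR        = "#00843D"
)

# reuse it in any plot that maps fill or color to these groups, e.g.:
# ... + scale_fill_manual(values = brand_palette)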

For more R tricks to up your programming productivity and effectiveness, visit the R tips and tricks page!

paletteer: Hundreds of color palettes in R

Looking for just the right colors for your data visualization?

I often cover tools to pick color palettes on my website (e.g. here, here, or here) and also host a comprehensive list of color packages in my R programming resources overview.

However, paletteer is by far my favorite package for customizing your colors in R!

The paletteer package offers direct access to 1759 color palettes, from 50 different packages!

After installing and loading the package, paletteer works as easily as adding one additional line of code to your ggplot:

install.packages("paletteer")
library(paletteer)

install.packages("ggplot2")
library(ggplot2)

ggplot(iris, aes(Sepal.Length, Sepal.Width, color = Species)) +
  geom_point() +
  scale_color_paletteer_d("nord::aurora")

paletteer offers a combined collection of hundreds of color palettes from across the R ecosystem, so you are sure to find a palette that you like! Here’s the list copied below, but the GitHub repo provides more detailed information about the package contents.

Name | GitHub | CRAN
awtools | awhstin/awtools – 0.2.1 | –
basetheme | KKPMW/basetheme – 0.1.2 | 0.1.2
calecopal | an-bui/calecopal – 0.1.0 | –
cartography | riatelab/cartography – 2.2.1.1 | 2.2.1
colorblindr | clauswilke/colorblindr – 0.1.0 | –
colRoz | jacintak/colRoz – 0.2.2 | –
dichromat | – | 2.0-0
DresdenColor | katiesaund/DresdenColor – 0.0.0.9000 | –
dutchmasters | EdwinTh/dutchmasters – 0.1.0 | –
fishualize | nschiett/fishualize – 0.2.999 | 0.1.0
gameofthrones | aljrico/gameofthrones – 1.0.1 | 1.0.0
ggpomological | gadenbuie/ggpomological – 0.1.2 | –
ggsci | road2stat/ggsci – 2.9 | 2.9
ggthemes | jrnold/ggthemes – 4.2.0 | 4.2.0
ggthemr | cttobin/ggthemr – 1.1.0 | –
ghibli | ewenme/ghibli – 0.3.0.9000 | 0.3.0
grDevices | – | 2.0-14
harrypotter | aljrico/harrypotter – 2.1.0 | 2.1.0
IslamicArt | lambdamoses/IslamicArt – 0.1.0 | –
jcolors | jaredhuling/jcolors – 0.0.4 | 0.0.4
LaCroixColoR | johannesbjork/LaCroixColoR – 0.1.0 | –
lisa | tyluRp/lisa – 0.1.1.9000 | 0.1.1
MapPalettes | disarm-platform/MapPalettes – 0.0.2 | –
miscpalettes | EmilHvitfeldt/miscpalettes – 0.0.0.9000 | –
nationalparkcolors | katiejolly/nationalparkcolors – 0.1.0 | –
NineteenEightyR | m-clark/NineteenEightyR – 0.1.0 | –
nord | jkaupp/nord – 1.0.0 | 1.0.0
ochRe | ropenscilabs/ochRe – 1.0.0 | –
oompaBase | – | 3.2.9
palettesForR | frareb/palettesForR – 0.1.2 | 0.1.2
palettetown | timcdlucas/palettetown – 0.1.1.90000 | 0.1.1
palr | AustralianAntarcticDivision/palr – 0.1.0 | 0.1.0
pals | kwstat/pals – 1.6 | 1.6
PNWColors | jakelawlor/PNWColors – 0.1.0 | –
Polychrome | – | 1.2.3
rcartocolor | Nowosad/rcartocolor – 2.0.0 | 2.0.0
RColorBrewer | – | 1.1-2
Redmonder | – | 0.2.0
RSkittleBrewer | alyssafrazee/RSkittleBrewer – 1.1 | –
scico | thomasp85/scico – 1.1.0 | 1.1.0
tidyquant | business-science/tidyquant – 0.5.8 | 0.5.8
trekcolors | leonawicz/trekcolors – 0.1.2 | 0.1.1
tvthemes | Ryo-N7/tvthemes – 1.1.0 | 1.1.0
unikn | hneth/unikn – 0.2.0.9003 | 0.2.0
vapeplot | seasmith/vapeplot – 0.1.0 | –
vapoRwave | moldach/vapoRwave – 0.0.0.9000 | –
viridis | sjmgarnier/viridis – 0.5.1 | 0.5.1
visibly | m-clark/visibly – 0.2.6 | –
werpals | sciencificity/werpals – 0.1.0 | –
wesanderson | karthik/wesanderson – 0.3.6.9000 | 0.3.6
yarrr | ndphillips/yarrr – 0.1.6 | 0.1.5
Via the paletteer GitHub page
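
If you prefer to browse this collection from within R, paletteer also bundles overview tables of all its palettes. As far as I know (check the package documentation for the exact object names), you can inspect them like this:

library(paletteer)

# overview tables of the bundled palettes (discrete and continuous)
head(palettes_d_names)
head(palettes_c_names)

# retrieve a palette as a vector of hex colors
paletteer_d("nord::aurora")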

Let me know what you like about the package and do share any beautiful data visualizations you create with it!

Simulating Corona Virus Outbreaks – with and without social distancing

I don’t want to participate in the general debate on COVID-19, as there are enough, much more knowledgeable experts doing so already.

However, I did want to share something that sparked my interest: this great article by the Washington Post, in which they show the importance of social distancing during viral outbreaks using four simple simulations:

  1. Regular viral outbreak
  2. Viral outbreak with forced (temporary) quarantine
  3. Viral outbreak with moderate social distancing
  4. Viral outbreak with extensive social distancing

While these are obviously much oversimplified models of reality, the results convey a powerful and very visual message showing the importance of our social behavior in such a crisis.
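
The Washington Post models are interactive animations, but the core idea can be hinted at with a crude SIR-style sketch of my own in R (this is emphatically not their model): lowering the contact rate, i.e. social distancing, flattens the curve of simultaneously infected people.

# very rough, hypothetical SIR-style simulation (not the Washington Post's model)
simulate_outbreak = function(contact_rate, recovery_rate = 0.1,
                             population = 1000, days = 120) {
  S = population - 1; I = 1; R = 0
  infected = numeric(days)
  for (day in seq_len(days)) {
    new_infections = contact_rate * S * I / population
    new_recoveries = recovery_rate * I
    S = S - new_infections
    I = I + new_infections - new_recoveries
    R = R + new_recoveries
    infected[day] = I
  }
  infected
}

plot(simulate_outbreak(contact_rate = 0.4), type = "l", col = "red",
     xlab = "Day", ylab = "Currently infected")             # no distancing
lines(simulate_outbreak(contact_rate = 0.15), col = "blue") # with distancing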

1. Simulation of a regular viral outbreak.
2. Simulation with a temporary quarantine opening up.

As these simulations are randomized, you will get your own personalized results when you read the article! Try it out yourself: washingtonpost.com/graphics/2020/world/corona-simulator/?tid=

A comparison of my results:

AutoML-Zero: Evolving Machine Learning Algorithms From Scratch

Google Brain researchers published this amazing paper, with an accompanying GIF, in which they show the true power of AutoML.

AutoML stands for automated machine learning, and basically refers to an algorithm autonomously building the best machine learning model for a given problem.

This task of selecting the best ML model is difficult as it is. There are many different ML algorithms to choose from, and each of these has many different settings ([hyper]parameters) you can change to optimize the model’s predictions.

For instance, let’s look at one specific ML algorithm: the neural network. Not only can we try out millions of different neural network architectures (ways in which the nodes and layers of a network are connected), but we can also test each of these with different loss functions, learning rates, dropout rates, et cetera. And this is only one algorithm!
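
To get a feel for how quickly that search space explodes, here is a toy R illustration of my own (the grid values are arbitrary): even a small grid over a handful of neural network settings already yields hundreds of configurations, each of which would still need to be trained and evaluated.

# toy hyperparameter grid for a single neural network architecture family
grid = expand.grid(
  hidden_layers   = 1:4,
  units_per_layer = c(8, 16, 32, 64, 128),
  learning_rate   = c(0.0001, 0.001, 0.01, 0.1),
  dropout_rate    = c(0, 0.2, 0.5),
  loss            = c("cross-entropy", "mean squared error"),
  stringsAsFactors = FALSE
)
nrow(grid) # 480 configurations, and this is only one algorithm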

In their new paper, the Google Brain scholars show how they managed to automatically discover complete machine learning algorithms using just basic mathematical operations as building blocks. Using evolutionary principles, they have developed an AutoML framework that tailors its own algorithms and architectures to best fit the data and problem at hand.

This is AI research at its finest, and the results are truly remarkable!

GIF for the interpretation of the best evolved algorithm

You can read the full paper open access here: https://arxiv.org/abs/2003.03384 (quick download link)

The original code is posted here on github: github.com/google-research/google-research/tree/master/automl_zero#automl-zero

GIF for the experiment progress

Solutions to working with small sample sizes

Both in science and business, we often experience difficulties collecting enough data to test our hypotheses, either because target groups are small or hard to access, or because data collection entails prohibitive costs.

Such obstacles may result in data sets that are too small for the complexity of the statistical model needed to answer the questions we’re really interested in.

Several scholars teamed up and wrote this open access book: Small Sample Size Solutions.

This unique book provides guidelines and tools for implementing solutions to issues that arise in small sample studies. Each chapter illustrates statistical methods that allow researchers and analysts to apply the optimal statistical model for their research question when the sample is too small.

This book will enable anyone working with data to test their hypotheses even when the statistical model required for answering their questions is too complex for the sample sizes they can collect. The covered statistical models range from the estimation of a population mean to models with latent variables and nested observations, and solutions include both classical and Bayesian methods. All proposed solutions are described in steps researchers can implement with their own data and are accompanied by annotated syntax in R.

You can access the book for free here!