Category: tools

calmcode.io > video tutorials for open source tools

calmcode.io is an e-learning platform that I really really really recommend to programmers and data scientists:

  • It is free.
  • It involves open source tools.
  • It uses bite-sized tutorial videos.
  • It explains tools clearly.
  • It explains everything calmly.

There’s tons of content about computer programming, data science, and personal productivity.

On top of this all, it’s by Vincent Warmerdam, and everything he touches seems to turn to gold.

Check it out here: https://calmcode.io/

You can subscribe to their newsletter here, to receive new content fresh from the presses, straight in your inbox!

There are just so many tutorials on so many different topics! Here are some quick glances at some topics and tools:

Blockchain 101: A visual demo

Good teachers are rare, so when you find one, cherish him or her.

Anders Brownworth is an exemplary teacher. I found this tutorial of his on what constitutes a blockchain, and it is by far the best explanation of the concept(s) I have seen thus far.

Anders breaks down the material for total newbies, explaining one concept at a time. You are taken from a hash, to a block, to a blockchain, to distribution, tokens, and a coinbase.

Great work, Anders, and too good a resource not to share!

Now, hold on to your hat, as you can access this blockchain application yourself and play around with the concepts like Anders does in the video.

Via https://andersbrownworth.com/blockchain/hash
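
To get a feel for the core trick, that each block's hash covers its data plus the previous block's hash, so tampering with any block breaks everything after it, here is a toy sketch in R using the digest package (the function names and details are mine, not Anders'):

library(digest)

# A block stores its data, the previous block's hash, and its own hash.
make_block <- function(data, prev_hash) {
  list(data = data,
       prev_hash = prev_hash,
       hash = digest(paste0(prev_hash, data), algo = "sha256"))
}

genesis <- make_block("hello", prev_hash = "0")
block2  <- make_block("world", prev_hash = genesis$hash)

# Tamper with the first block and its hash no longer matches what block2 points to:
tampered <- make_block("hell0", prev_hash = "0")
identical(tampered$hash, block2$prev_hash)  # FALSE: the chain is broken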

Computerphile on Cyber Security

Computerphile is a YouTube sister channel of Numberphile. Where Numberphile's videos are about the magic behind math and numbers, Computerphile's videos are all about computers and computer stuff. I recommend both channels in general, and have already watched many of their videos.

Yet, over the past weeks I specifically enjoyed what seems to be several series of videos on Cyber Security related topics.

What makes a good password?

One series is all about passwords.

Which passwords are strong and which are weak? How can hackers crack yours? And how do websites secure user passwords?

The videos below are roughly in the right order and they make for an interesting insight into the world of password management. They give you advice on how to pick your password, and even point to a nice tool to check whether your password has ever been leaked.

You will probably want to change your password afterwards!
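
One recurring idea in these videos is that websites should never store plain-text passwords, but rather salted hashes (ideally via a dedicated slow hashing scheme such as bcrypt or Argon2). A minimal sketch of the salted-hash idea in R, using the digest package; the helper names and salt format are mine, purely for illustration:

library(digest)

# Store a salt plus the hash of salt + password, never the password itself.
hash_password <- function(password) {
  salt <- paste(sample(c(letters, 0:9), 16, replace = TRUE), collapse = "")
  list(salt = salt, hash = digest(paste0(salt, password), algo = "sha256"))
}

verify_password <- function(password, stored) {
  identical(digest(paste0(stored$salt, password), algo = "sha256"), stored$hash)
}

stored <- hash_password("correct horse battery staple")
verify_password("correct horse battery staple", stored)  # TRUE
verify_password("letmein", stored)                        # FALSE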

Hacking and attacking

If you are up to no good, please do not watch this second series, which revolves around hacks and computer attacks.

How do people get access to a website's database? How can we prevent it? How can we recognize security dangers?

You might know of SQL injections, but do you know what a slow loris attack is? Or how ransomware works? Or what exploitX is?
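
To make the SQL injection point concrete, here is a minimal sketch in R using DBI and RSQLite (my own example, not from the videos): pasting user input straight into a query string is exactly what an injection exploits, whereas a parameterized query keeps the input as data.

library(DBI)

con <- dbConnect(RSQLite::SQLite(), ":memory:")
dbWriteTable(con, "users", data.frame(name = c("alice", "bob"), role = c("admin", "user")))

user_input <- "alice'; DROP TABLE users; --"  # a classic injection attempt

# Unsafe: building the query by pasting the input into the SQL string
# dbGetQuery(con, paste0("SELECT * FROM users WHERE name = '", user_input, "'"))

# Safer: let the driver bind the value as a parameter
dbGetQuery(con, "SELECT * FROM users WHERE name = ?", params = list(user_input))

dbDisconnect(con)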

These videos nicely continue the line of a previous post on Try Hack Me's Cyber Security Challenges, where you can learn how computers work and where their vulnerabilities lie.

Awesome R Shiny Resources & Extensions

Rob Gilmore curates a github repo listing resources for working with Shiny, the R web framework and dashboarding tool.

Nan Xiao curates a second repository, listing awesome R packages that offer extensions to Shiny, like extended UI or server components.

They should be your go-to resources when looking for anything Shiny!
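
If you have never seen Shiny, a minimal app is just a UI definition plus a server function. Here is a generic sketch (not taken from either repository):

library(shiny)

# A slider that controls the number of observations in a histogram.
ui <- fluidPage(
  sliderInput("n", "Number of observations", min = 10, max = 500, value = 100),
  plotOutput("hist")
)

server <- function(input, output) {
  output$hist <- renderPlot(hist(rnorm(input$n)))
}

shinyApp(ui, server)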

Shiny Resources

Extensions

Color curves: Choose a color palette with gradient

Jan-Willem Tulp pointed out this amazing tool to choose a color palette: https://colorcurves.app

You can choose between a continuous palette or a discrete palette (i.e., one with groups).

Here’s an example of an exponential color curve for a continuous palette using colorcurves.app:

There are numerous functions you can use to make your "gradient color curve".

Similarly, you can specify the lightness of the different colors along your curve.

Here's another example, of a color arc for a categorical / discrete palette using colorcurves.app:
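
If you want to approximate such a gradient in R itself, base R's colorRampPalette() interpolates between anchor colors. A rough sketch of the idea (the hex codes are arbitrary examples, and this only mimics what colorcurves.app does interactively):

# Build a palette function that interpolates between two anchor colours.
pal <- colorRampPalette(c("#1b9e77", "#d95f02"))

# A plain linear gradient of five colours:
pal(5)

# Mimic an "exponential" colour curve by sampling the gradient at non-linear positions:
positions <- seq(0, 1, length.out = 5)^2
pal(100)[round(positions * 99) + 1]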

ppsr: An R implementation of the Predictive Power Score

Update March, 2021: My R package for the predictive power score (ppsr) is live on CRAN!
Try install.packages("ppsr") in your R terminal to get the latest version.

A few months ago, I wrote about the Predictive Power Score (PPS): a handy metric to quickly explore and quantify the relationships in a dataset.

As a social scientist, I was taught to use a correlation matrix to describe the relationships in a dataset. Yet, in my opinion, the PPS provides three handy advantages:

  1. PPS works for any type of data, including nominal/categorical variables
  2. PPS quantifies non-linear relationships between variables
  3. PPS acknowledges the asymmetry of those relationships

Florian Wetschoreck came up with the PPS idea, wrote the original blog, and programmed a Python implementation of it (called ppscore).

Yet, I work mostly in R and I was very keen on incorporating this power tool into my general data science workflow.

So, over the holiday period, I did something I have never done before: I wrote an R package!

It’s called ppsr and you can find the code here on github.

Installation

# You can get the official version from CRAN:
install.packages("ppsr")

## Or you can get the development version from GitHub:
# install.packages('devtools')
# devtools::install_github('https://github.com/paulvanderlaken/ppsr')

Usage

The ppsr package has three main functions that compute PPS:

  • score() – which computes an x-y PPS
  • score_predictors() – which computes X-y PPS
  • score_matrix() – which computes X-Y PPS
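
In practice, the calls look roughly like this (a sketch based on the package README; see the help files for the exact arguments):

library(ppsr)

score(iris, x = 'Sepal.Length', y = 'Petal.Length')  # single x-y PPS
score_predictors(df = iris, y = 'Species')           # PPS of every predictor for one target
score_matrix(df = iris)                              # PPS of every variable on every other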

Visualizing PPS

Subsequently, there are two main functions that wrap around these computational functions to help you visualize your PPS using ggplot2:

  • visualize_predictors() – producing a barplot of all X-y PPS
  • visualize_matrix() – producing a heatmap of all X-Y PPS
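
For instance, the PPS matrix for iris shown below comes from calls along these lines (a sketch; check the package documentation for the exact arguments):

visualize_matrix(df = iris)                     # heatmap of all X-Y PPS
visualize_predictors(df = iris, y = 'Species')  # barplot of all X-y PPS for one target
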
PPS matrix for iris

Note that Species is a nominal/categorical variable, with three character/text options.

A correlation matrix would not be able to show us that the type of iris Species can be predicted extremely well by the petal length and width, and somewhat by the sepal length and width. Yet, particularly sepal width is not easily predicted by the type of species.

Correlation matrix for iris

Exploring mtcars

It takes about 10 seconds to run 121 decision trees with visualize_matrix(mtcars). Yet, the output is much more informative than the correlation matrix:

  • cyl can be much better predicted by mpg than the other way around
  • the classification of vs can be done well using nearly all variables as predictors, except for am
  • yet, it’s hard to predict anything based on the vs classification
  • a car's am can't be predicted at all using these variables
PPS matrix for mtcars

The correlation matrix does provide insights that the PPS matrix does not: most importantly, the sign and strength of any linear relationship that may exist. For instance, we can deduce that mpg relates strongly and negatively with cyl.

Yet, even though half of the matrix provides no additional information (due to its symmetry), I still find it hard to derive the most important relations and insights at first glance.

Moreover, the rows and columns for vs and am are not very informative in this correlation matrix, as it contains Pearson correlation coefficients by default, whereas vs and am are binary variables. The same goes for cyl, gear and carb, which contain ordinal categories / integer data, so the value of the coefficients depicted here is debatable.

Correlation matrix for mtcars

Exploring trees

In R, there are many datasets built in via the datasets package. Let’s explore some using the ppsr::visualize_matrix() function.

datasets::trees has data on 31 trees’ girth, height and volume.

visualize_matrix(datasets::trees) shows that girth and volume each predict the other quite well, but not perfectly.

Let’s have a look at the correlation matrix.

The scores here seem higher in general, with a near-perfect correlation between volume and girth.

Is it near perfect though? Let’s have a look at the underlying data and fit a linear model to it.

You will still be pretty far off the real values when you use a linear model based on Girth to predict Volume. This is what the original PPS of 0.65 tried to convey.

Actually, I've run the math for this linear model and the RMSE is still 4.11. Using just the mean Volume as a prediction of Volume results in an RMSE of 16.17. If we map these RMSE values onto a linear scale from 0 to 1 (i.e., 1 - 4.11 / 16.17), we get the PPS of our linear model, which is about 0.75.
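
For completeness, here's roughly that back-of-the-envelope computation in R (my own sketch of the arithmetic above, not code from the ppsr package, which may normalize its scores differently):

# Fit the linear model and compute its RMSE versus a mean-only baseline.
fit <- lm(Volume ~ Girth, data = datasets::trees)

rmse_model <- sqrt(mean(residuals(fit)^2))                        # ~ 4.11
rmse_naive <- sqrt(mean((trees$Volume - mean(trees$Volume))^2))   # ~ 16.17

pps_linear <- 1 - rmse_model / rmse_naive                         # ~ 0.75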

So, actually, the linear model is a better predictor than the decision tree that ppsr uses by default, and which was used to generate the PPS matrix above.

Yet, the linear model definitely does not provide a perfect prediction, even though the correlation may be near perfect.

Conclusion

In sum, I feel the general idea behind the PPS can be very useful for data exploration.

Particularly in more data science / machine learning types of projects, the PPS can provide a quick survey of which targets can be predicted using which features, potentially capturing patterns more complex than just linear ones.

Yet, the old-school correlation matrix still provides unique and valuable insights that the PPS matrix does not. So I do not consider the PPS an alternative so much as a complement in the toolkit of the data scientist & researcher.

Enjoy the R package, or the Python module for that matter, and let me know if you see any improvements!