Tag: statistics

Is R-squared Useless?

Coming from a social sciences background, I learned to use R-squared as a way to assess model performance and goodness of fit for regression models.

Yet, in my current day job, I hardly ever use the metric anymore. I tend to focus on predictive power, with metrics such as MAE, MSE, or RMSE. These make much more sense to me when comparing models and their business value, and as an added bonus, they are easier to explain to stakeholders.
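To make that concrete, here is a rough sketch (my own, assuming scikit-learn and numpy; it is not taken from any of the linked posts) of how these error metrics can be computed next to R-squared for a simple regression model:

```python
# Rough sketch (not from any linked post): fit a simple model and report the
# error metrics I'd use for model comparison, next to R-squared.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))
y = X @ np.array([2.0, -1.0, 0.5]) + rng.normal(scale=2.0, size=500)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)
pred = LinearRegression().fit(X_train, y_train).predict(X_test)

print("MAE :", mean_absolute_error(y_test, pred))      # error in the outcome's own units
print("RMSE:", np.sqrt(mean_squared_error(y_test, pred)))
print("R2  :", r2_score(y_test, pred))                  # harder to tie to business value
```

MAE and RMSE are expressed in the units of the outcome itself, which is exactly what makes them easier to translate into business value.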

I recently wrote about the predictive power score as an alternative to correlation analysis.

Are there similar alternatives that render R-squared useless? And why?

Here’s an interesting blog explaining the standpoints of Cosma Shalizi of Carnegie Mellon University:

  • R-squared does not measure goodness of fit (see the sketch at the end of this section).
  • R-squared does not measure predictive error.
  • R-squared does not allow you to compare models using transformed responses.
  • R-squared does not measure how one variable explains another.

I have never found a situation where R-squared helped at all.

Professor Cosma Shalizi (according to Clay Ford)
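To make the first of those points tangible, here is a quick sketch of my own (not taken from the linked blog): the same correctly specified model, with identical noise, can produce a wildly different R-squared purely because the predictor happens to be spread out more or less widely.

```python
# My own illustration (not from the linked blog): the same correctly specified
# model, with identical noise, gives a very different R-squared depending only
# on how widely the predictor is spread.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

rng = np.random.default_rng(7)

def r2_for_spread(spread):
    x = rng.uniform(-spread, spread, size=1000).reshape(-1, 1)
    y = 3 * x.ravel() + rng.normal(scale=5.0, size=1000)  # same true model, same noise
    model = LinearRegression().fit(x, y)
    return r2_score(y, model.predict(x))

print(round(r2_for_spread(1), 2))   # narrow predictor range: low R-squared
print(round(r2_for_spread(10), 2))  # wide predictor range: high R-squared, same model quality
```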

Bayesian Statistics using R, Python, and Stan

This course on Bayesian statistics has been on my to-do list for a year now. So, without further ado, I decided to just go ahead and share it with you.

Richard McElreath is an evolutionary ecologist who is famous in the stats community for his work on Bayesian statistics.

At the Max Planck Institute for Evolutionary Anthropology, Richard teaches Bayesian statistics, and he was kind enough to make his whole course, Statistical Rethinking: Bayesian statistics using R & Stan, openly accessible online.

You can find the video lectures on YouTube here, and the slides are linked here:

Richard also wrote a book that accompanies this course:

For more information about the book, click here.

For the Python version of the code examples, click here.

How most statistical tests are linear models

Jonas Kristoffer Lindeløv wrote a great visual explanation of how the most common statistical tests (t-test, ANOVA, ANCOVA, etc.) are all linear models under the hood.

Jonas’ original blog uses R programming to visually show how the tests work, what the linear models look like, and how different approaches result in the same statistics.

George Ho later created a Python version of the same visual explanation.

If I had been taught statistics and methodology this way, I surely would have struggled less! Have a look yourself: https://lindeloev.github.io/tests-as-linear/
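To get a feel for the idea, here is a minimal sketch (my own, assuming scipy and statsmodels are available) showing that an independent-samples t-test and an OLS regression on a group dummy yield the same t statistic and p-value:

```python
# Minimal sketch (assuming scipy and statsmodels): an independent-samples
# t-test and an OLS regression on a group dummy give the same t and p.
import numpy as np
from scipy import stats
import statsmodels.api as sm

rng = np.random.default_rng(42)
control = rng.normal(0.0, 1.0, size=50)
treatment = rng.normal(0.5, 1.0, size=50)

# Classic two-sample t-test (equal variances assumed)
t, p = stats.ttest_ind(treatment, control)

# The same test as a linear model: y = b0 + b1 * group
y = np.concatenate([control, treatment])
group = np.concatenate([np.zeros(50), np.ones(50)])
fit = sm.OLS(y, sm.add_constant(group)).fit()

print(t, p)                            # t-test result
print(fit.tvalues[1], fit.pvalues[1])  # identical slope t statistic and p-value
```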

Data Science vs. Data Alchemy – by Lucas Vermeer

How do scurvy, astronomy, alchemy and data science relate to each other?

In this GOTO conference presentation, Lucas Vermeer — Director of Experimentation at Booking.com — uses some amazing storytelling to demonstrate how the value of data (science) is largely determined by an organization’s capability to gather the right data: the data they actually need.

It’s definitely recommended viewing for data scientists and data science leaders out there.

Here are the slides, and they contain some great oneliners:

@lucasvermeer

Visualizing and interpreting Cohen’s d effect sizes

Cohen’s d (wiki) is a statistic used to indicate the standardised difference between two means. Researchers often use it to compare the averages of two groups, for instance, to determine whether outcome values are higher in an experimental group than in a control group.

Researchers often use general guidelines to determine the size of an effect. Looking at Cohen’s d, psychologists often consider effects to be small when Cohen’s d is between 0.2 and 0.3, medium effects (whatever that may mean) are assumed for values around 0.5, and values of Cohen’s d larger than 0.8 are considered large effects (e.g., University of Bath).

The two groups’ distributions for small, medium, and large effects, visualized
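For reference, Cohen’s d is simply the difference between the two group means divided by a pooled standard deviation. Here is a minimal sketch of the common pooled-SD formulation (my own, using numpy):

```python
# Minimal sketch (my own) of Cohen's d with a pooled standard deviation.
import numpy as np

def cohens_d(x, y):
    """Cohen's d: difference in means divided by the pooled standard deviation."""
    nx, ny = len(x), len(y)
    pooled_var = ((nx - 1) * np.var(x, ddof=1) + (ny - 1) * np.var(y, ddof=1)) / (nx + ny - 2)
    return (np.mean(x) - np.mean(y)) / np.sqrt(pooled_var)

rng = np.random.default_rng(0)
control = rng.normal(100, 15, size=200)
treatment = rng.normal(107.5, 15, size=200)  # true d = 0.5, a "medium" effect

print(round(cohens_d(treatment, control), 2))
```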

Kristoffer Magnusson hosts this Cohen’s d effect size comparison tool on his website, the R Psychologist, and recently updated the visualization and its interactivity. The tool looks better than ever:

Moreover, Kristoffer adds some nice explanations of the numbers and their interpretation in real-life situations:

If you find the tool useful, please consider buying Kristoffer a coffee or buying one of his beautiful posters, like the one above, or below:

Kristoffer’s poster on frequentist significance testing

By the way, Kristoffer hosts many other interesting visualization tools on statistics and statistical phenomena (most made with JavaScript’s D3 library) on his website, so have a look!

Determine optimal sample sizes for business value in A/B testing, by Chris Said

A/B testing is a method of comparing two versions of something against each other to determine which performs better. A/B tests are often mentioned in e-commerce contexts, where the things being compared are web pages.

via optimizely.com/nl/optimization-glossary/ab-testing/

Business leaders and data scientists alike face a difficult trade-off when running A/B tests: how big should the A/B test be? Or, in other words: after collecting how many data points, or after running for how many days, should we decide whether A or B is the best way to go?

This is a trade-off because the sample size of an A/B test determines its statistical power. Statistical power, in simple terms, is the probability that an A/B test detects an effect when there really is one. In general, the more data you collect, the higher the odds of finding the real effect and making the right decision.

By default, researchers often aim for 80% power, with a 5% significance cutoff. But is this general guideline really optimal for the tradeoff between costs and benefits in your specific business context? Chris thinks not.
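For context, here is an illustrative sketch (my own, using statsmodels; the conversion rates are made up and not Chris’s) of the sample size that the conventional 80% power / 5% significance guideline implies for a simple conversion-rate test:

```python
# Illustrative sketch (my own, assuming statsmodels; the conversion rates are
# hypothetical): sample size implied by 80% power at a 5% significance cutoff.
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

baseline, variant = 0.10, 0.11  # assumed conversion rates (a 10% relative lift)
effect = proportion_effectsize(variant, baseline)

n_per_group = NormalIndPower().solve_power(
    effect_size=effect, alpha=0.05, power=0.80, alternative="two-sided"
)
print(round(n_per_group))  # users needed per variant for this assumed lift
```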

Chris Said wrote a great three-part blog series in which he explains how you can mathematically determine the optimal duration of an A/B test in your own company setting:

Part I: General Overview. Starts with a mostly non-technical overview and ends with a section called “Three lessons for practitioners”.

Part II: Expected lift. A more technical section that quantifies the benefits of experimentation as a function of sample size.

Part III: Aggregate time-discounted lift. A more technical section that quantifies the costs of experimentation as a function of sample size. It then combines costs and benefits into a closed-form expression that can be optimized. Ends with an FAQ.

Chris Said (via)

Moreover, Chris provides three pieces of practical advice that underline that 80% statistical power is not always the best option:

  1. You should run “underpowered” experiments if you have a very high discount rate
  2. You should run “underpowered” experiments if you have a small user base
  3. Nevertheless, it’s far better to run your experiment too long than too short
Simulations show that, for Chris’ hypothetical company and A/B test, 38 days would be the optimal period of time to gather data
via chris-said.io/2020/01/10/optimizing-sample-sizes-in-ab-testing-part-I/

Chris ran all his simulations in Python and shared the notebooks.