A/B Testing a New Look

A WordPress blogger I came across (let's call him "John") has a rather peculiar way of testing out his looks. Using dating apps like Tinder,
John ran A/B tests to find out whether people would prefer him romantically with or without a beard.

With a proper experimental setup, John found that bearded John received far more attention in the form of Tinder matches. The exception was women John characterized as Asian; that group seemed to prefer clean-shaven John.

While the sample sizes were modest (N_bearded = 500; N_shaven = 500) and the numbers of matches far smaller (bearded: 64; shaven: 30), this seems like a fun way to make your look more data-driven!
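For the curious, here is a quick back-of-the-envelope check (mine, not John's, and assuming the counts reported above) of whether that difference in match rates is bigger than chance, using Fisher's exact test from SciPy:

from scipy.stats import fisher_exact

# Counts as reported in the blog post: 64 matches out of 500 bearded profiles,
# 30 matches out of 500 shaven profiles.
table = [[64, 500 - 64],   # bearded: matches, non-matches
         [30, 500 - 30]]   # shaven:  matches, non-matches

odds_ratio, p_value = fisher_exact(table)
print(f"Bearded match rate: {64 / 500:.1%}")
print(f"Shaven match rate:  {30 / 500:.1%}")
print(f"Fisher's exact test: odds ratio = {odds_ratio:.2f}, p = {p_value:.4f}")

With these counts the gap comes out well beyond what chance alone would explain, which fits John's conclusion.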

Read more on "John"'s original blog below:

https://appsciencing.wordpress.com/2018/11/19/beard-studies/

12 Guidelines for Effective A/B Testing

I wrote about Emily Robinson and her A/B testing work at Etsy before, but now she's back with a great new blog post full of practical advice. Emily provides 12 guidelines for A/B testing that help you set up effective experiments and avoid data-driven but erroneous conclusions:

  1. Have one key metric for your experiment.
  2. Use that key metric to do a power calculation (a sketch follows this list).
  3. Run your experiment for the length you’ve planned on.
  4. Pay more attention to confidence intervals than p-values (see the sketch after the list).
  5. Don’t run tons of variants.
  6. Don’t try to look for differences for every possible segment.
  7. Check that there's no bucketing skew (see the sketch after the list).
  8. Don’t overcomplicate your methods.
  9. Be careful of launching things because they “don’t hurt”.
  10. Have a data scientist/analyst involved in the whole process.
  11. Only include people in your analysis who could have been affected by the change.
  12. Focus on smaller, incremental tests that change one thing at a time.
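Emily describes the guidelines in prose, but for guideline 2 it may help to see what such a power calculation can look like in practice. Below is a minimal sketch using statsmodels; the baseline rate and minimum detectable effect are made-up numbers for illustration, not values from her post.

from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

baseline_rate = 0.06   # assumed current rate of the key metric
mde = 0.012            # smallest lift worth detecting (+1.2 percentage points)

effect_size = proportion_effectsize(baseline_rate + mde, baseline_rate)
n_per_variant = NormalIndPower().solve_power(
    effect_size=effect_size,
    alpha=0.05,          # significance level
    power=0.8,           # probability of detecting the effect if it exists
    alternative="two-sided",
)
print(f"Required sample size per variant: {n_per_variant:.0f}")

Running this before the experiment tells you how long to collect data, which in turn makes guideline 3 (run for the planned length) possible to follow.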
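In the same spirit, guideline 4: rather than reporting only a p-value, you can put a confidence interval around the lift itself. Again a sketch with invented counts, using statsmodels:

from statsmodels.stats.proportion import confint_proportions_2indep

# Invented example: 702 conversions out of 10,000 visitors in the treatment,
# 640 out of 10,000 in the control.
ci_low, ci_high = confint_proportions_2indep(
    count1=702, nobs1=10_000,
    count2=640, nobs2=10_000,
)
print(f"95% CI for the difference in conversion rates: [{ci_low:.4f}, {ci_high:.4f}]")
# The width of this interval shows how precisely the lift is estimated,
# which a bare p-value does not.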
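And for guideline 7, one common way to check for bucketing skew (sample ratio mismatch) is a chi-squared goodness-of-fit test of the observed assignment counts against the intended split. The counts below are again invented:

from scipy.stats import chisquare

observed = [50_432, 49_217]     # assumed assignment counts per variant
expected_split = [0.5, 0.5]     # the intended 50/50 allocation
expected = [p * sum(observed) for p in expected_split]

stat, p_value = chisquare(f_obs=observed, f_exp=expected)
print(f"Sample ratio mismatch check: chi2 = {stat:.2f}, p = {p_value:.4f}")
# A very small p-value suggests the bucketing itself is skewed, in which case
# the results should not be trusted until the cause is found.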

You can read more details on each guideline in Emily's original blog post.

In her post, Emily also refers to a great article by Stephen Holiday discussing five online experiments that (almost) went wrong, and a presentation by Dan McKinley on continuous experimentation.