Survival of the Best Fit is a webgame that simulates what happens when companies automate their recruitment and selection processes.

You, playing as the CEO of a tech startup, are asked to select your favorite candidates from a line-up based on their résumés.
As your simulated company grows, the time pressure increases, and you are forced to automate the selection process.
Fortunately, some smart techies working for your company propose training a computer to hire just as you did.
They don’t need anything but the data you just generated and some good old supervised machine learning!
To avoid spoilers, try the game yourself and see what happens!
The game only takes a few minutes, and is best played on mobile.
Survival of the Best Fit was built by Gabor Csapo, Jihyun Kim, Miha Klasinc, and Alia ElKattan. They are software engineers, designers and technologists, advocating for better software that allows members of the public to question its impact on society.
You don’t need to be an engineer to question how technology is affecting our lives. The goal is not for everyone to be a data scientist or machine learning engineer, though the field can certainly use more diversity, but to have enough awareness to join the conversation and ask important questions.
With Survival of the Best Fit, we want to reach an audience that may not be the makers of the very technology that impacts them every day. We want to help them better understand how AI works and how it may affect them, so that they can better demand transparency and accountability in systems that make more and more decisions for us.
survivalofthebestfit.com
I found that the game provides a great intuitive explanation of how (human) bias can slip into A.I. or machine learning applications in recruitment, selection, or other human resource management practices and processes.
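The mechanism the game illustrates is easy to reproduce. Here is a minimal sketch, with entirely made-up toy data, of how a supervised model trained on biased historical hiring decisions learns to reproduce that bias: the "Blueville"/"Orangeville" groups and the biased decision rule are my own illustrative assumptions, not the game's actual implementation.

```python
import math
import random

random.seed(0)

# Toy data (hypothetical): each candidate has a skill score and a group flag
# (1 = "Blueville", 0 = "Orangeville"). The historical human decisions are
# biased: Blueville candidates were hired at a lower skill threshold.
def make_candidate():
    group = random.randint(0, 1)
    skill = random.random()
    threshold = 0.4 if group == 1 else 0.6   # biased human decision rule
    hired = 1 if skill > threshold else 0
    return skill, group, hired

data = [make_candidate() for _ in range(1000)]

# Fit a tiny logistic regression (skill, group -> hired) by gradient descent.
w_skill, w_group, b = 0.0, 0.0, 0.0
lr = 0.5
for _ in range(500):
    gs = gg = gb = 0.0
    for skill, group, hired in data:
        p = 1 / (1 + math.exp(-(w_skill * skill + w_group * group + b)))
        err = p - hired
        gs += err * skill
        gg += err * group
        gb += err
    n = len(data)
    w_skill -= lr * gs / n
    w_group -= lr * gg / n
    b -= lr * gb / n

def p_hire(skill, group):
    """Predicted hiring probability for a candidate."""
    return 1 / (1 + math.exp(-(w_skill * skill + w_group * group + b)))

# Two candidates with identical skill: the model assigns the Blueville
# candidate a higher hiring probability, because group membership was
# predictive of the (biased) historical labels.
print(p_hire(0.5, 1), p_hire(0.5, 0))
```

Note that nobody told the model to discriminate: the group flag simply helps it predict the historical labels, so the bias is copied straight from the training data. The same happens even if the group flag is removed but correlated proxy features (address, school) remain.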
If you want to read more about people analytics and machine learning in HR, I wrote my dissertation on the topic and can strongly recommend several great books.
Finally, here’s a nice Medium post about the game.

Note, as Joaquin replied below, that the game apparently does not learn from user input, but is programmed to always end in bias against Blueville candidates.
I had kind of hoped that there was actually an algorithm “learning” in the backend. While the developers could argue that the bias arises from the added external training data (you picked either Google, Apple, or Amazon to learn from), it feels like a bit of a disappointment that there is no real interactivity here.
Here’s a fun fact: the game is complete BS. Regardless of the hiring decisions you make, the game has the exact same outcome. Play through and hire ONLY “Blueville” residents, and watch as it then tells you that you’re discriminating against “Blueville” residents because the AI is hiring more “Orangeville” residents based on your prior hiring practices.
This isn’t a game, because games have different outcomes based on the player’s input. This is propaganda at best.
Hi Joaquin, I wasn’t aware of this. Now that I have replayed it, selecting only blues, I indeed get the same result. That’s too bad – I thought they had really used the user inputs.
Still, I like the game (or interactive webpage, if you prefer not to call it a game) for its educational value in providing insights into how a general machine learning cycle operates, and what can go wrong. I think, for many, this webpage may be an eye-opener, and a welcome voice to counter the consultants praising AI/ML as a solution to everything.
Sorry to hear you don’t feel the same way. Let’s agree to disagree, though feel free to elaborate and try and convince me 🙂
It’s true: no matter what you select, the result is always the same. So it’s not a game but a demo, a presentation showing that automation based only on historical data, without learning from experience, can amplify bias and quickly lead to an unexpected end of the story… ML is not like LEGO.
Hi Joaquin!
I’m one of the people who worked on the game. It does interact with users’ input, but it is also partly hardcoded to reach the same educational message at the end if users select all (or a majority of) blue applicants. We publicly launched just last week, and have been in conversation since to figure out better ways to communicate the message while not disappointing users and being more interactive – but for a start, we figured it was more important for a new user to understand the point than for us to respond to their specific hires. As Paul said, the bias in this case ends up coming from the larger historical data set selected (which, in the back-end, is similarly a very large and biased data set that the program uses). However, we realize we need to adjust the ending to better clarify where the bias comes from, and to reflect the user’s decisions from the first part in a more satisfying way.
We’re still working on this, so if you have any feedback, please feel free to add an issue on our Github page: https://github.com/survivalofthebestfit/survivalofthebestfit/issues or send any feedback/suggestions to survivalofthebestfit@gmail.com! 🙂
Hey, just came across this – thanks so much for the write-up! We really appreciate you sharing it 🙂
Hi, I think your site might be having browser compatibility issues. When I look at your website in Firefox, it looks fine, but when opening it in Internet Explorer, it has some overlapping. I just wanted to give you a quick heads-up! Other than that, great blog!
Thanks!