Tag: management

People Analytics vs. HR Analytics Google trends


A few years back I completed my dissertation on data-driven Human Resource Management.

This specialized field is often dubbed HR analytics, as it is basically the application of analytics to the topic of human resources.

Yet, as always in a specialized and hyped field, different names started to emerge. The term People analytics arose, as did Workforce analytics, Talent analytics, and many others.

I addressed this topic in the introduction to my Ph.D. thesis and because I love data visualization, I decided to make a visual to go along with it.

So I gathered some Google Trends data, added a nice locally smoothed curve through it, and there you have it. The original visual was so well received that it was even cited in this great handbook on HR analytics. With almost three years having passed now, I decided it was time for an update. So here’s the 2021 version.

If you compare this to the previous version, the trends look quite different. There, People Analytics had already been the dominant term since 2011.

Unfortunately, that’s not something I can do much about. Google indexes these search interest ratings behind the scenes, and every year or so, they change how they are calculated.

If you want to get such data yourself, have a look at the Google Trends project.
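
If you’d rather script it, here is a minimal sketch of how you could pull the interest scores and draw a locally smoothed curve through them yourself. It assumes the unofficial pytrends package and statsmodels; the search terms, timeframe, and smoothing fraction are just examples, not what I used for the original visual.

    # Hedged sketch: pull Google Trends interest scores via the unofficial
    # pytrends package and smooth them with a LOWESS curve (statsmodels).
    import matplotlib.pyplot as plt
    from pytrends.request import TrendReq
    from statsmodels.nonparametric.smoothers_lowess import lowess

    terms = ["people analytics", "hr analytics"]           # example search terms
    pytrends = TrendReq(hl="en-US")
    pytrends.build_payload(kw_list=terms, timeframe="2010-01-01 2021-01-01")
    trends = pytrends.interest_over_time()                  # interest scores, 0-100

    for term in terms:
        x = range(len(trends))
        smoothed = lowess(trends[term], x, frac=0.2, return_sorted=False)
        plt.plot(trends.index, trends[term], alpha=0.3)     # raw interest
        plt.plot(trends.index, smoothed, label=term)        # smoothed curve
    plt.legend()
    plt.show()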


In my dissertation, I wrote the following on the topic:

This process of internally examining the impact of HRM activities goes by many different labels. Contemporary popular labels include people analytics (e.g., Green, 2017; Kane, 2015), HR analytics (e.g., Lawler, Levenson, & Boudreau, 2004; Levenson, 2005; Rasmussen & Ulrich, 2015; Paauwe & Farndale, 2017), workforce analytics (e.g., Carlson & Kavanagh, 2018; Hota & Ghosh, 2013; Simón & Ferreiro, 2017), talent analytics (e.g., Bersin, 2012; Davenport, Harris, & Shapiro, 2010), and human capital analytics (e.g., Andersen, 2017; Minbaeva, 2017a, 2017b; Levenson & Fink, 2017; Schiemann, Seibert, & Blankenship, 2017). Other variations including metrics or reporting are also common (Falletta, 2014) but there is consensus that these differ from the analytics-labels (Cascio & Boudreau, 2010; Lawler, Levenson, & Boudreau, 2004). While HR metrics would refer to descriptive statistics on a single construct, analytics involves exploring and quantifying relationships between multiple constructs.

Yet, even within analytics, a large variety of labels is used interchangeably. For instance, the label people analytics is favored in most countries globally, except for mainland Europe and India where HR analytics is used most (Google Trends, 2018). While human capital analytics seems to refer to the exact same concept, it is used almost exclusively in scientific discourse. Some argue that the lack of clear terminology is because of the emerging nature of the field (Marler & Boudreau, 2017). Others argue that differences beyond semantics exist, for instance, in terms of the accountabilities the labels suggest, and the connotations they invoke (Van den Heuvel & Bondarouk, 2017). In practice, HR, human capital, and people analytics are frequently used to refer to analytical projects covering the entire range of HRM themes, whereas workforce and talent analytics are commonly used with more narrow scopes in mind: respectively, (strategic) workforce planning initiatives and analytical projects in recruitment, selection, and development. Throughout this dissertation, I will stick to the label people analytics, as this is the leading label globally and in US tech companies, and thus the most likely label to which I expect the general field to converge.

publicatie-online.nl/uploaded/flipbook/15810-v-d-laken/12/

Want to learn more about people analytics? Have a look at this reading list I compiled.

Artificial Stupidity – by Vincent Warmerdam @PyData 2019 London


PyData is famous for its great talks on machine learning topics. At this 2019 London edition, Vincent Warmerdam again managed to give a super inspiring presentation. This year he covered what he dubs Artificial Stupidity™. You should definitely watch the talk, which includes some great visual aids, but here are my main takeaways:

Vincent speaks of Artificial Stupidity: machine learning gone HorriblyWrong™ — an example of which is shown below — for which he elaborates on three potential fixes:

Example of a model that goes HorriblyWrong™, according to Vincent’s talk.

1. Predict Less, but Carefully

Vincent argues you shouldn’t extrapolate your predictions outside of your observed sampling space. Even better: “Not predicting given uncertainty is a great idea.” As an alternative, we could design a fallback mechanism, for instance by including an outlier detection model as the first step of your machine learning pipeline and only predicting for non-outliers.
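
As an illustration only (my own minimal sketch assuming scikit-learn, not code from the talk), such a fallback could look as follows: an outlier detector learns the observed sampling space, and the actual model only predicts for observations that fall inside it.

    # Minimal sketch of a fallback mechanism, assuming scikit-learn:
    # only predict for observations that resemble the training data.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import IsolationForest
    from sklearn.linear_model import LogisticRegression

    # Toy data standing in for your real training set.
    X_train, y_train = make_classification(n_samples=500, n_features=4, random_state=42)

    detector = IsolationForest(random_state=42).fit(X_train)  # learns the sampling space
    model = LogisticRegression().fit(X_train, y_train)

    def predict_with_fallback(X_new):
        """Predict only for inliers; return None where we'd rather not decide."""
        inlier = detector.predict(X_new) == 1    # IsolationForest marks outliers as -1
        preds = np.full(len(X_new), None, dtype=object)
        preds[inlier] = model.predict(X_new[inlier])
        return preds

    # A point far outside the observed space gets no automated decision.
    print(predict_with_fallback(np.vstack([X_train[:2], [[100, 100, 100, 100]]])))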

I definitely recommend you watch this specific section of Vincent’s talk because he gives some very visual and intuitive explanations of how extrapolation may go HorriblyWrong™.

Be careful! One thing we should maybe start talking about to our bosses: Algorithms merely automate, approximate, and interpolate. It’s the extrapolation that is actually kind of dangerous.

Vincent Warmerdam @ Pydata 2019 London

Basically, we can choose to not make automated decisions sometimes.

2. Constrain thy Features

What we feed to our models really matters. […] You should probably do something to the data going into your model if you want your model to have any sort of fairness guarantees.

Vincent Warmerdam @ Pydata 2019 London

Often, simply removing biased features from your data does not reduce bias to the extent we may have hoped. Fortunately, Vincent demonstrates how to remove biased information from your variables by applying some cool math tricks.
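
One simple flavour of such a trick (my own numpy sketch, not necessarily the exact method from the talk) is to regress every feature on the sensitive attribute and keep only the residuals, so that what goes into the model is orthogonal to that attribute:

    # Hedged sketch: linearly remove a sensitive attribute's information from features.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 1000
    sensitive = rng.integers(0, 2, n).astype(float)          # e.g. a protected attribute
    X = rng.normal(size=(n, 3)) + 0.8 * sensitive[:, None]   # features that leak it

    # Regress each feature on (intercept, sensitive) and keep only the residuals,
    # i.e. the part of X that is orthogonal to the sensitive attribute.
    Z = np.column_stack([np.ones(n), sensitive])
    beta, *_ = np.linalg.lstsq(Z, X, rcond=None)
    X_fair = X - Z @ beta

    # The (linear) correlation with the sensitive attribute is gone.
    print(np.corrcoef(sensitive, X[:, 0])[0, 1])       # clearly non-zero
    print(np.corrcoef(sensitive, X_fair[:, 0])[0, 1])  # approximately zero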

Unfortunately, doing so will often result in lower predictive accuracy. That is unsurprising, though, as you are no longer closely fitting the biased data. What makes matters more problematic, Vincent rightfully mentions, is that corporate incentives often do not really align here. It might feel like you need to pick: either more accuracy or more fairness.

However, there’s a nice solution that builds on point 1. We can take the highly accurate model and the highly fair model, make predictions with both, and when these predictions differ, that’s a very good proxy for where you potentially don’t want to make a prediction. Hence, there may be observations where we are comfortable making a fair prediction, whereas in other situations we may say “right, this prediction seems unfair, we need a fallback mechanism, a human being should look at this and we should not automate this decision”.
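
The idea boils down to something like the sketch below, assuming two hypothetical fitted models named accurate_model and fair_model (these names are mine, not from the talk):

    # Sketch: automate only where the accurate and the fair model agree.
    import numpy as np

    def predict_or_defer(accurate_model, fair_model, X_new):
        """Return predictions where both models agree, and None where a human should decide."""
        pred_accurate = accurate_model.predict(X_new)
        pred_fair = fair_model.predict(X_new)
        agree = pred_accurate == pred_fair
        decisions = np.where(agree, pred_fair, None)  # None = route to a human reviewer
        return decisions, ~agree                      # ~agree flags the cases to escalate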

Vincent does note that this is only one trick to constrain your model for fairness, and that fairness may often only be fair in the eyes of the beholder. Moreover, in order to correct for these biases and unfairness, you need to know about these unfair biases in the first place. Although outside of the scope of this specific topic, Vincent proposes that this in itself introduces new ethical issues.

Basically, we can choose to put our models on a controlled diet.

3. Constrain thy Model

Vincent argues that we should include constraints (based on domain knowledge, or common sense) into our models. In his presentation, he names a few. For instance, monotonicity, which implies that the relationship between X and Y should always be either entirely non-increasing, or entirely non-decreasing. Incorporating the previously discussed fairness principles would be a second example, and there are many more.

If we ever come up with a model where more smoking leads to better health, that’s bad. I have enough domain knowledge to say that that should never happen. So maybe I should just make a system where I can say “look, this one column’s relationship to Y should always be strictly negative”.

Vincent Warmerdam @ Pydata 2019 London

Basically, we can integrate domain knowledge or preferences into our models.
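
Such constraints have become easy to add with standard tooling. As a small sketch (the feature meaning and direction below are made up), recent scikit-learn versions let you pass a monotonic_cst argument to the histogram-based gradient boosting models:

    # Sketch: enforce a non-increasing relationship between feature 0 and the prediction.
    from sklearn.datasets import make_regression
    from sklearn.ensemble import HistGradientBoostingRegressor

    # Toy data; pretend feature 0 is "cigarettes per day" and y is a health score.
    X, y = make_regression(n_samples=500, n_features=3, random_state=42)

    # -1 = non-increasing, 0 = unconstrained, 1 = non-decreasing (one entry per feature).
    model = HistGradientBoostingRegressor(monotonic_cst=[-1, 0, 0], random_state=42)
    model.fit(X, y)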

Conclusion: Watch the talk!

People Analytics: Is nudging good employment practice, or unethical?


The article itself is in Dutch only. In short:

For Privacyweb, I recently wrote about people analytics and the nudging of employees it may result in: small adjustments or pushes that should steer people in the right direction. Tempting employees into good behavior, as it were. But who then decides what is good, and when should employers be allowed, or even required, to nudge?

Read the full article here.

Books for the modern, data-driven HR professional (incl. People Analytics)


With great pleasure I’ve studied and worked in the field of people analytics, where we seek to leverage employee, management, and business information to better organize and manage our personnel. Here, data has proven itself indispensable for the organization of the future.

Data and analytics have traditionally not been high on the priority list of HR professionals. Fortunately, there is increased awareness that the 21st-century (HR) manager has to be data-savvy. But where to start learning? The plethora of available resources can be daunting…

Have a look at these 100+ amazing books for (starting) people analytics specialists. My personal recommendations are included as pictures, but feel free to ask for more detailed suggestions!


Categories (clickable)

  • Behavioural Psychology: focus on behavioural psychology and economics, including decision-making and the biases therein.
  • Technology: focus on the implications of new technology….
    • Ethics: … on society and humanity, and what can go wrong.
    • Digital & Data-driven HR: … for the future of work, workforce, and organization. Includes people analytics case studies.
  • Management: focus on industrial and organizational psychology, HR, leadership, and business strategy.
  • Statistics: focus on the technical books explaining statistical concepts and applied data analysis.
    • People analytics: …. more technical books on how to conduct people analytics studies step-by-step in (statistical) software.
    • Programming: … technical books specifically aimed at (statistical) programming and data analysis.
  • Communication: focus on information exchange, presentation, and data visualization.

Disclaimer: This page contains one or more links to Amazon.
Any purchases made through those links provide us with a small commission that helps to host this blog.

Behavioural Psychology books


Technology books


Ethics in Data & Machine Learning


Digital & Data-driven HR


Management books


Statistics books

Applied People Analytics

Programming

You can find an overview of 20+ free programming books here.


Data Visualization books



A note of thanks

I want to thank the active people analytics community, which publishes not only in management journals but also on social media. I knew Littral Shemer Haim already hosted a people analytics reading list, and so did Analytics in HR (Erik van Vulpen) and Workplaceif (Manoj Kumar). After Jared Valdron called for book recommendations on people analytics on LinkedIn, and nearly 60 people replied, I thought: let’s merge these overviews.

Hence, a big thank you and acknowledgement to all those who’ve contributed directly or indirectly. I hope this comprehensive merged overview is helpful.

Univers Interview: “Algorithms haven’t replaced the HR manager yet”


Univers, the magazine of Tilburg University, recently interviewed me about my PhD research on People Analytics and data-driven Human Resource Management. You can find the write-up by interviewer Ron Vaessen here, though it is unfortunately available in Dutch only.

The full text of my dissertation can be accessed in a flipbook here or downloaded directly via this link.

I have also dedicated several blog posts to more background information. I posted a small extract on the ethics of people analytics and machine learning in HR here. Those interested in visualizing survival curves like I did can see this post. Curious about the cover design? Read this post.

Checklist to Optimize Training Transfer in Organizations


Ashley Hughes, Stephanie Zajac, Jacqueline Spencer, and Eduardo Salas wrote a recent research note for the International Journal of Training and Development. The research note is built around an evidence-based checklist of actionable insights that help practitioners enhance the effectiveness of training interventions. These insights are meant to prevent the ‘transfer problem’: trained skills not being used on the job.


Screenshot of the first page of the published research note, containing the abstract

Unfortunately, these published academic papers are often behind a paywall, but you may request a PDF from the authors here on ResearchGate.

Screenshot of the appendix of the research note containing the checklist for practitioners.

For the full details and scientific evidence behind each suggested action, I suggest you access the research note. Nevertheless, here’s my summary of their main advice on improving training transfer before, during, and after training implementation:

Before training

  • Conduct a training needs analysis to align the training’s content and participants with the organizational objectives
  • Involved stakeholders should be aware of the training, understand its importance, and, obviously, be prepared for the training program. The scholars provide seven specific actions here, including the setting of personal training goals and aligning resources and rewards with the training.
  • Training attendance should be framed as an opportunity, and the training’s anticipated benefits could be emphasized (e.g. improvement of work processes or on-the-job performance).
  • A climate which encourages learning should be created, with dedicated time (and opportunities) for post‐training learning and a sense of accountability for using trained knowledge, skills, and abilities.

During training

  • Piloting the training with a single department or a subset of trainees is highly encouraged, as it greatly helps to assess whether the training design is appropriate in terms of content and delivery.
  • Error‐encouragement framing can influence a trainee’s learning orientation and thus errors made during training should be framed as growth opportunities.

After training

  • Use of the trained skills should be supported and planned. For instance, participants could be given a small workload reduction to provide opportunities to apply the learned knowledge and skills once they return to their position. 
  • Management and training participants should be held accountable for their use of skills on the job.
  • Think about using just‐in‐time or refresher training and coaching, if needed.
  • Assess training effectiveness criteria including training transfer using metrics and analytics. Specifically, the scholars propose that the criteria measured in the training evaluation should correspond to the training needs identified through the training needs analysis that was conducted before the training. 
  • Training evaluation criteria should consider the scope and timeframe of the training. Take into account that distal outcomes such as ROI may take longer to realize.