As HR professionals, when we ask questions about employee performance in our organisations, we inevitably find ourselves comparing performance between groups of people.
The answers to these questions can become the justification, or validation, of an investment or intervention your organisation has made in its human resource practices.
Getting quantitative about performance: Let's consider an example. Your organisation wants to determine whether workforce engagement has increased since the rollout of an employee wellness program. You have engagement data from surveys run prior to the wellness initiative, and you use the intelliHR Automation Suite to pulse out a new engagement survey for comparison. You receive a good response rate, but, as you would expect, there is variation in the results: some employees are incredibly engaged, others aren't, and the rest are somewhere in between. How do you know whether you have seen a real increase in engagement, or whether it can just be put down to 'dumb luck'? Statistics yields the answer by revealing the likelihood that the results you observed came about by random chance, or whether the groups are indeed significantly different. The effective application of statistical analysis helps you to quantify previously 'finger in the wind' HR decisions.

Hypothesis Testing in HR: When addressing questions such as those above, we essentially want to find out whether the performance results observed, despite some variation, came from the same population, meaning there is no real difference. Alternatively, if the difference between the two groups is extreme enough, there is evidence that it is very unlikely they were sampled from the same population. In statistics, this is called a hypothesis test.

Hypothesis tests (remember Z-tests, t-tests, ANOVA etc.) boil down to the principle of a signal-to-noise ratio. Just as in an old television, where a high signal-to-noise ratio produces a crisp, quality image on your screen, a strong signal-to-noise ratio in statistics gives a clear picture that there is a significant difference between the two groups you are testing. When comparing groups, the 'signal' corresponds to the difference between them and is quantified by the difference between the two means. The 'noise' is the variation within each group, that is, how far each result sits from its own group's average. In statistics, the 'noise' is measured by the standard error. Have a look at the picture below. How confident are you that there is a real difference between these two groups?

Let's revisit the earlier example. You ran an engagement survey in March, where from 77 employee responses you determined an average engagement score of 5.14 on a 10-point happiness scale. Following the implementation of your wellness program, you use intelliHR to pulse out a follow-up engagement survey to the same employees, and from their responses you calculate an average engagement score of 7.59. You can analyse the happiness ratings before and after visually, using the filters in the Form Data analytics tool for the survey form.
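The signal-to-noise idea above can be sketched in a few lines of Python. The scores below are made-up illustrative numbers, not the article's actual survey data:

```python
# A minimal sketch of the signal-to-noise ratio behind hypothesis tests.
# The engagement scores here are invented for illustration only.
import statistics

before = [4.0, 5.5, 4.8, 6.1, 5.0, 4.6, 5.9, 5.2]
after = [6.8, 7.9, 7.1, 8.2, 7.4, 6.9, 8.0, 7.5]

# The 'signal' is the difference between the two group means.
signal = statistics.mean(after) - statistics.mean(before)

# The 'noise' is the variation within each group, summarised by the
# standard error of the difference (shown here for two samples).
se_before = statistics.stdev(before) / len(before) ** 0.5
se_after = statistics.stdev(after) / len(after) ** 0.5
noise = (se_before ** 2 + se_after ** 2) ** 0.5

# A signal that is large relative to the noise suggests a real difference.
signal_to_noise = signal / noise
```

The larger this ratio, the harder it becomes to explain the gap between the two groups as random variation.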
This looks like a fairly positive result, but if you are considering whether this wellness program is worth continued investment, how confident are you that the intervention caused an actual improvement? Maybe the difference is simply down to random chance. Did you just catch your sample on a good day the second time around?

What are the chances? Statistical tests return the probability that the results you observe could have arisen from natural variation. In this example, human happiness is variable, even among the same people, so you can expect variation when you take measurements at different times. What are the odds that we would see the improvement we did if the difference could only be explained by random variation? We consider two scenarios:

H0: There is no real difference between the observed results.
Ha: There is a difference between the two results.

To find the answer, you can export the data to Excel using the intelliHR Export feature. Once in a program like Excel, you are able to run various analyses to determine the strength of your results.

The eye is a great statistician! You don't always have to be a data scientist to effectively analyse performance in your business. Combined with your intuition as an HR professional, good data visualisation can be an extremely powerful analytical tool. Let's visualise the engagement data from our example: using Excel's chart function, you can display your data in a box-and-whisker plot, which provides a lot of insight into the signal and noise discussed above.

Reading a box-and-whisker plot: each division represents a quartile of your data. That is, 25% of your data lies in each of the top and bottom whiskers, and 50% lies within the box. The middle line marks the 50th percentile, or median, and the 'x' marks the mean, or average. The dots outside the plot are outliers: points sufficiently far from the rest of the data. It follows that a plot that is more spread out, with outliers, represents 'noisier' data, which makes significant differences harder to detect. Presenting the data this way allows you to visualise the signal-to-noise ratio that is fundamental to statistical hypothesis testing.
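The quantities a box-and-whisker plot displays can also be computed directly. The sketch below uses invented scores and the common 1.5 × IQR convention for flagging outliers (the same kind of rule charting tools typically apply):

```python
# The quantities behind a box-and-whisker plot, on illustrative data.
import statistics

scores = [3.5, 4.2, 4.8, 5.0, 5.1, 5.3, 5.6, 6.0, 6.4, 7.1, 9.8]

# The box spans the 25th to 75th percentiles; the middle line is the median.
q1, median, q3 = statistics.quantiles(scores, n=4)

# The 'x' on the plot marks the mean.
mean = statistics.mean(scores)

# A common convention flags points beyond 1.5x the interquartile
# range from the box edges as outliers.
iqr = q3 - q1
outliers = [s for s in scores if s < q1 - 1.5 * iqr or s > q3 + 1.5 * iqr]
```

On this sample, the single high score falls beyond the upper fence and would be drawn as a dot outside the whisker.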
By inspection, the above graph would give you strong evidence that there is a meaningful difference in this group's happiness across the period tested.

Using a t-test: Depending on the type of data you have and the way it was collected, there are different analytical tests you can use to determine the significance of your results, and Excel has a range of them in its Analysis ToolPak. In this case we use a paired t-test, as our samples are related: we tested the same employees before and after the intervention. A t-test examines two samples and returns the probability of observing a difference as large as the one you measured if the two samples had in fact been drawn from the same population, that is, if there were no significant difference between them.

When we conduct a paired t-test on the data from this engagement study, it returns a p-value of less than 0.01. This means there is less than a 1% chance that the difference between the results before the intervention and the results after came about simply through sampling variation. Our null hypothesis, that there is no real difference between the observed results, is therefore extremely unlikely. It follows that the results have, in fact, improved from March to October, and your wellness program has had a positive impact on engagement. You can now pitch, with confidence, a strong business case to continue this investment into the future.
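To make the mechanics concrete, here is a hand-rolled sketch of the paired t statistic, using illustrative before/after scores for five hypothetical employees (not the article's 77 responses). The p-value itself comes from the t distribution, which tools like Excel's T.TEST function or scipy.stats.ttest_rel report directly:

```python
# Sketch of the paired t-test statistic on invented data.
import statistics

before = [5.1, 4.8, 5.6, 5.0, 4.9]
after = [7.4, 7.0, 8.1, 7.6, 7.2]

# A paired test works on the per-employee differences, since each
# 'after' score is matched to the same person's 'before' score.
diffs = [a - b for a, b in zip(after, before)]

mean_diff = statistics.mean(diffs)
se_diff = statistics.stdev(diffs) / len(diffs) ** 0.5

# The t statistic is exactly the signal-to-noise ratio discussed
# above: the mean difference divided by its standard error.
t_stat = mean_diff / se_diff

# Looking t_stat up in the t distribution with len(diffs) - 1 degrees
# of freedom yields the p-value; a large |t| means a small p-value.
```

Because every employee improved by a similar amount here, the differences have little spread, the standard error is tiny, and the resulting t statistic is very large.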


Alert: Statistical tests, such as the paired t-test mentioned in this article, are powerful tools for introducing empirical evidence to human resources decision-making. However, the incorrect application of a statistical test can produce extremely misleading results. Each test carries certain assumptions about the data and the way it was collected, and if these assumptions are violated, the results of the test can be incorrect and may compromise subsequent decisions.
