We know that recruitment and selection processes are often the focus of bias awareness training. Assessment demands a high volume of decisions, and there are all sorts of pressures on those judgements.
But what do we know about Performance Management?
There is a general assumption that existing relationships may be less prone to the immediate inferential biases that arise when we assess, but is this so? What does the research suggest? And what does our practice with clients tell us?
In a meta-analysis (Javidmehr et al., 2015) looking at the nature and impact of biases in performance management, the authors identified the most common biases within the performance management process.
Outlined below is an illustration of the different biases that may be influencing your managers and employees during the performance management process.
Halo and horns effect
This is the tendency to rate an employee uniformly high if that employee is particularly strong in one quality, or consistently low if the employee is particularly weak in one quality. This was the most frequent type of bias reported in the literature on performance management and was also reported as the most difficult to correct.
Where does it stem from?
Managers and team leaders are encouraged to build strong relationships with their team members. Managers report using halo ratings most often when they are concerned about disappointing their team members. In other words, they will find it more difficult to deliver critical (or negative) appraisals when there are some things that are going extremely well.
What we know from research on in-group and out-group dynamics, and on homophily, is that managers and leaders are more likely to favour someone who is like themselves over someone who is not, whether by gender, ethnicity or any other characteristics we feel are important.
So not only do we have skewed feedback or ratings; the bias already inherent in the relationship is made worse by the performance review process.
Leniency bias
Organisations introduce ratings into the performance management system to provide a more data-driven approach to what is otherwise a very human activity. In most performance management processes there will be a required rating of performance, whether that is a rating of specific behaviours or a single overall rating.
It is natural to assume that a rating scale increases objectivity and reduces bias. It’s factual. It’s got a number. It can’t be biased.
Response biases occur when respondents complete rating scales in ways that do not accurately reflect their true responses.
Our own training in Performance Management frequently highlights that ratings are directly affected by a respondent’s wish to provide a response that is socially desirable. In different contexts, this will cause a rater to make ratings toward the positive end of rating scales, known as leniency or acquiescence.
In some cases, individuals only use the lowest scores (severity) or tend toward the middle of the scale (moderation). These habits are common in many of us.
Leniency in performance management rating scales is a perennial problem for anyone trying to normalise performance management data. It is always great to hear when an organisation has 90% of its employees rated above average, but a distribution in which nine out of ten employees sit above average says more about the raters than about performance. Many of the organisations we work with then have to redistribute ratings, bringing into question the value of the original data.
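For readers curious what ‘redistributing’ ratings can look like in practice, below is a minimal, purely illustrative sketch of one common approach: standardising each manager’s ratings to z-scores so that a lenient rater and a severe rater become comparable. The managers, the numbers and the choice of Python are assumptions made for illustration only; they are not drawn from the research cited above and do not describe any particular organisation’s process.

```python
from statistics import mean, stdev

# Purely illustrative ratings on a 1-5 scale; the managers and scores below
# are invented for this sketch, not taken from the article or any client data.
ratings_by_manager = {
    "manager_a": [4, 5, 5, 4, 5],  # a lenient rater: everyone looks strong
    "manager_b": [2, 3, 3, 4, 2],  # a more severe rater using the same scale
}

def standardise(ratings):
    """Convert one rater's scores to z-scores, so each rater's distribution
    ends up with a mean of 0 and a standard deviation of 1."""
    m, s = mean(ratings), stdev(ratings)
    return [round((r - m) / s, 2) if s else 0.0 for r in ratings]

for manager, ratings in ratings_by_manager.items():
    print(manager, "raw:", ratings, "standardised:", standardise(ratings))
```

Once ratings have to be re-centred like this after the event, it is fair to ask how much the original numbers were really telling us, which is exactly the concern raised above.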
Obviously, the introduction of 360-degree feedback is designed to counter the impact of halo bias. It pulls together a variety of different views so that an employee receives a wider range of perspectives. But we know, because we train people to use rating scales in assessment and development centres, that some people are naturally far more lenient in their ratings.
This may be for a number of reasons. Having a ‘development mindset’ and a desire to see people grow into the role, rather than be put off by critical feedback, is the most frequently recorded cause.
Interestingly, leniency bias increases when performance reviews are linked to pay and reward, so there may also be something about helping other employees and being seen to be a supportive, kind person, rather than tough or critical.
In addition, it’s important to be aware that leniency and severity increase where there is no requirement for a written supporting statement, a requirement that some organisations are starting to remove.
Contrast bias
One of the most widely used images for illustrating the contrast effect is the famous Ebbinghaus illusion.
This simple illusion shows that, even when we know the two central circles are exactly the same size, our brains can’t help but see each circle in the context of the circles that surround it.
As a result, we typically underestimate the size of the central circle when it is surrounded by larger circles and overestimate it when it is surrounded by smaller circles. And that’s when we know they are exactly the same size.
In the research on performance reviews, contrast bias is frequently cited as a significant challenge in accurate and fair reviews.
The introduction of behavioural criteria is an important step in reducing bias, but we know from working on assessment and development centres that, even in controlled environments with clear behavioural indicators, it is very easy to resort to comparing people with each other rather than against the scale.
There are two other effects worth highlighting here.
The first relates to the number of team members a manager is reviewing. Contrast bias effects increase significantly as the size of the manager’s team increases. Perhaps not surprisingly, the more ratings I make as a manager, the more I use other team members as my point of comparison.
The second is an order effect. Individuals who are evaluated first tend to be rated higher than those evaluated last. And the larger the time gap between the evaluations, the larger the effect.
So if you’re in a big team, you are more likely to be rated positively if you are first in the cycle. And the obvious risk? If you’re in a big team and not particularly close with your manager, you may be further down the list of review appointments – or have your meeting moved more often – thus possibly increasing the odds of lower ratings based on order.
Priming
We know the influence of priming is extremely strong in the recruitment process. When somebody says, “They just wrote the best analysis report I have ever read”, we know that, try as we might, it is very difficult not to look for the positives in that same employee when we work with them ourselves. We like to find explanations and to confirm what we believe we already know.
In the team context, any manager will frequently hear about the members of their team from other colleagues. These may be subjective, passing comments, but they will still be treated as ‘feedback’ that, even if it is never shared with the employee, shapes the manager’s judgement and the tone of any conversation.
Self-rater bias
There are of course many other ways in which bias distorts the performance review process, whether in the accuracy of the ratings or the quality of the conversation. I have focused here mainly on the role of the rater, but let’s not overlook the way that employees rate themselves and the impact that this has.
In an effort to increase perceptions of fairness and the engagement of employees in the appraisal process, many employers ask their employees to rate their own performance and to discuss their self-assessment with their manager.
However, similar to manager assessments of performance, subjective self-assessments allow for the possibility of bias influencing ratings. Subjective self-assessments of performance can be higher or lower than objective measures of performance and can have an anchoring effect on manager assessments.
Members of traditionally underrepresented or marginalised groups can also come to internalise bias and stereotypes, and this can lead to lower self-assessments of performance. Studies repeatedly show that women are more likely to underrate themselves on self-assessments, while men are more likely to overrate.
Also, whereas women tend to credit their achievements to the efforts of others (such as their workgroup) or to luck, and their failures to intrinsic flaws, men tend to credit their achievements to their intrinsic strengths and their failures to external circumstances. There are also cultural differences in self-ratings, reflecting the different norms that cultures hold around self-promotion.
Underrating one’s performance is more common among members of cultural groups that value humility, such as collectivist cultures. Other individuals have a strong personal aversion to self-promotion, whether through socialisation or personality. Introverted personality types tend to be more self-aware and introspective and may rate themselves lower than extroverts.
Individuals high in perfectionism, or with a fixed mindset orientation, might rate themselves lower than individuals with less perfectionism or a growth mindset.
There are many biases that will influence judgements and decision-making during a performance review cycle. We often believe that having competencies and shared standards of performance should be enough to guarantee fairness and consistency, but wherever human beings are involved, decisions can still be subjective, biased and, as a result, unfair.
Looking to improve your performance management process? Learn more about our inclusive performance reviews or contact us via info@pearnkandola.com