How good do you think you are at performance ratings?
Can you be objective and keep politics, competition and money out of your score?
If you have ever seen “Come Dine with Me”, the popular British Channel 4 TV programme, you’ll know that scoring is not always fair, effective or realistic. The show pits four amateur chefs against each other for a £1,000 cash prize. Each hosts a dinner party for the other contestants, who then rate the host’s performance with a score from 1-10. A dry and “bitingly sarcastic” narration is added by comedian Dave Lamb.
It is fun and fascinating to watch the diners rate each other. One will hold up a 6 and say they thought the host did a fantastic job. Another will hold up a 7 and say they feel the host’s main course let them down. The third will also hold up a 7 and say they really like the host as a person, but their food was not up to standard. Some give high scores, while others play a highly political game and score everyone from a really low base – reasoning that the lower their competitors’ scores, the higher their own chance of winning the £1,000.
In the vast majority of organisations we act as if, with enough training and time, anyone can produce reliable performance ratings of other people. Even worse, we assume that our people “should” know how to do this, even though we haven’t given them any real training. Unfortunately, the research shows that none of us is a reliable rater of anyone. This means that almost all of our people data is defective. Our belief in ourselves as reliable raters leads us to take these ratings (of performance, potential and competencies) and use them to decide who gets promoted, who gets trained on which skill and who gets paid a bonus. All of these decisions rest on the belief that the ratings we give actually reflect the people being rated.
Significant research over the last 15 years has demonstrated that each of us is frighteningly unreliable at rating other people’s performance. This inability to rate others (the “Idiosyncratic Rater Effect”) is rooted in our own individual context and idiosyncrasies. For example, how we rate “potential” depends on how we define the concept, how much of it we think we have, our intent when rating (to grow the person, or to correct behaviour) and our relationship with them (do we like or dislike them?). The impact of this rating effect is persistent: no matter how much we train, over 60% of our rating is a reflection of how we experience the world – not of our experience of the person we are rating. This means that when I rate you, on anything, my rating reveals far more about me than it does about you. Despite the heaps of data on the Idiosyncratic Rater Effect in academic journals, business tends to remain unaware of it or, even worse, to ignore it.
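The effect can be made concrete with a small simulation – a sketch with invented numbers, not the research itself. If each rater carries a personal lenient/harsh offset that is larger than the true differences between the people being rated, the resulting scores end up saying more about the raters than about the ratees:

```python
import random
from statistics import pvariance

random.seed(42)

# Illustrative simulation (all numbers invented): 20 raters score 10 people.
# True performance differences are small, but each rater carries a strong
# personal lenient/harsh offset.
NUM_RATERS, NUM_RATEES = 20, 10
true_performance = [random.gauss(5.0, 0.5) for _ in range(NUM_RATEES)]
rater_bias = [random.gauss(0.0, 1.5) for _ in range(NUM_RATERS)]

ratings = [[true_performance[p] + rater_bias[r] + random.gauss(0.0, 0.3)
            for p in range(NUM_RATEES)]
           for r in range(NUM_RATERS)]

# Average score given by each rater, and received by each ratee.
rater_means = [sum(row) / NUM_RATEES for row in ratings]
ratee_means = [sum(ratings[r][p] for r in range(NUM_RATERS)) / NUM_RATERS
               for p in range(NUM_RATEES)]

# Most of the spread in the scores comes from who is rating,
# not from who is being rated.
print(f"spread caused by raters: {pvariance(rater_means):.2f}")
print(f"spread caused by ratees: {pvariance(ratee_means):.2f}")
```

In this toy setup the variance contributed by the raters dwarfs the variance contributed by the people being rated – which is exactly the pattern the Idiosyncratic Rater Effect describes.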
The perils of poor performance ratings:
- It is not easy to rate performance under pressure (performance appraisals are due today)
- It can be difficult to remember behaviour over a 6-month (or longer) period – it is often hard to remember even last month’s performance
- The latest behaviour/project tends to be top of mind (good or bad)
- We often remember (or pay more attention to) “issues” over successes (the squeaky wheels get more attention)
- Rating performance objectively is hard. 4 out of 5 – what does that mean?
- How does my 4/5 compare to yours? What is the standard?
- Our emotions and perspective colour our scores. We tend to rate those we like higher than those we don’t
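One common statistical mitigation for the “how does my 4/5 compare to yours?” problem above is to re-express each rater’s scores relative to their own average and spread before comparing them. This is a standard z-score technique, not a method proposed here, and the raters and scores below are invented for illustration:

```python
from statistics import mean, stdev

# Hypothetical scores: two raters using different personal scales
# to score the same three employees (one harsh, one lenient).
scores = {
    "harsh_rater":   {"ana": 2, "ben": 3, "carla": 4},
    "lenient_rater": {"ana": 4, "ben": 5, "carla": 5},
}

def normalise(rater_scores):
    """Re-express one rater's scores relative to their own mean and
    spread, so 'a 4 from me' and 'a 4 from you' become comparable."""
    values = list(rater_scores.values())
    mu, sigma = mean(values), stdev(values)
    return {who: (s - mu) / sigma for who, s in rater_scores.items()}

for rater, raw in scores.items():
    print(rater, {who: round(z, 2) for who, z in normalise(raw).items()})
```

After normalisation both raters agree that Carla sits above their personal average and Ana below it, even though their raw numbers disagree. Note that this only corrects for a rater’s overall harshness or leniency – it cannot remove the deeper idiosyncrasies in what each person thinks they are measuring.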
Poor performance ratings stem from our belief that we can rate others effectively.