How can group vs individual decision making be so different? Don’t think this is the case? Maybe you’ve heard something like this:
“We’re all underperforming, … except for me — and my team.”
I hear this almost as many times as I ask the question, “How’s performance?”
You know the story: A leader takes a stand declaring the obvious, “we’re underperforming…”, while shielding themselves and their compatriots: “except for me and my team.”
This is probably the most pervasive and frustrating psychological bias I come across in the work environment: evaluating and/or treating individuals differently from groups. It happens ALL the time.
But you can use this bias to influence an entire organization.
Time and again people make sweeping (impersonal) generalizations about groups, but assess close acquaintances (individuals) in a very different light. Although you would logically expect them to be at least somewhat similar processes, there’s a big difference between group vs individual decision making.
How about this one: A majority of individuals in an organization are evaluated as meeting or exceeding their objectives, yet the company as a whole is falling further behind every day – possibly even looking at folding. How does THIS happen? Is it possible for all individuals to win but the team to lose?
I may not be able to explain why – I’m not sure anyone can, in fact. But it’s well documented that the closer we get to individual assessment, the more likely we are to change our ‘tune’ from a previously stated group assessment, even when the individual is a member of that very group. Some of this can be explained by self-serving interest (another bias), but not all of it.
One potential reason for the difference between group vs individual decision making is that judges can distance themselves from the implications of group assessment by invoking selective examples (“they lost our biggest customer”) or by simply ‘hiding’ behind an average assessment (of no one in particular). Whatever the reason, the group vs. individual assessment bias can have big implications.
So what can we do with this?
Understanding this highly predictable bias can be extremely powerful at work. Here’s an example of how ‘tweaking’ the levers of this bias can dramatically improve the accuracy of a very common process in organizations: the performance review.
The typical scene: Talent assessments generally ‘roll up’. Line supervisors make and review their team members’ assessments with next-level-up managers, and so on to the executive level. Most of the time, these assessments (because they start with individual assessments and ‘roll up’ to the group picture) wind up revealing the “Lake Wobegon Effect” – everyone’s above average, despite the statistical impossibility of this and the organization’s performance that CLEARLY proves otherwise. An exasperated CEO then decides to ‘curve’ the ratings to bring them back in line with the “truth”. Admittedly, this is one way to compensate for the bias, but it tends to breed disenchantment in the ranks, who were just told by their supervisors that they were doing fine – “above average”, in fact.
A Better Approach: Applying insight from the group vs. individual assessment bias BEFORE individual ratings leads to a very different journey and outcome.
Top executives typically meet just before the ‘talent review.’ There’s some new process, scale, or software that needs to be introduced for the performance appraisal or succession planning process (or both), and they have to set objectives for the coming year anyway.
Forget the tools and objectives for a minute.
Very publicly, but innocently, ask EACH executive team member to answer this question: “What percentage of your organization do you expect to be evaluated ABOVE expectations, and what percentage do you expect to be rated BELOW expectations?” The only noise you’ll hear until the CEO repeats the question is the sound of eyes turning from face to face, as everyone knows the CEO will be very interested to hear each team member’s response. I guarantee the responses will be close to, if not lower than, the organization’s known results. Be sure to be helpful by recording these numbers from each leader.
As evaluations proceed in the workforce, check in to see what the distributions of performance look like. Fully anticipating this bias to be in play, one need only point out to the rater(s) that the leader of the organization, “your boss”, and the CEO, “their boss”, don’t expect the distribution to look like this. But be respectful: “Of course, I’ll take it to them if you want.”
I guarantee they’ll thank you but decline your offer until a better consensus is reached.
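Mechanically, that check-in is just a comparison between each leader’s pre-committed distribution and the ratings actually rolling up. Here’s a minimal sketch in Python of what the comparison looks like – the org names, percentages, and the 10-point tolerance are all hypothetical placeholders, not anything prescribed above:

```python
from collections import Counter

# Hypothetical expectations recorded at the executive meeting:
# the fraction of each leader's organization they committed to seeing
# rated 'above' or 'below' expectations.
expected = {
    "sales": {"above": 0.20, "below": 0.15},
    "engineering": {"above": 0.25, "below": 0.10},
}

def rating_distribution(ratings):
    """Fraction of ratings in each bucket ('above', 'meets', 'below')."""
    counts = Counter(ratings)
    total = len(ratings)
    return {bucket: counts.get(bucket, 0) / total
            for bucket in ("above", "meets", "below")}

def flag_gaps(org, ratings, tolerance=0.10):
    """Return buckets where the actual roll-up exceeds the leader's own
    pre-committed number by more than `tolerance` (as committed, actual)."""
    actual = rating_distribution(ratings)
    return {
        bucket: (expected[org][bucket], actual[bucket])
        for bucket in expected[org]
        if actual[bucket] - expected[org][bucket] > tolerance
    }

# Example: sales rates 8 of 10 people 'above' after committing to 20%.
sales_ratings = ["above"] * 8 + ["meets"] * 1 + ["below"] * 1
print(flag_gaps("sales", sales_ratings))
# flags the 'above' bucket: committed 0.20 vs. actual 0.80
```

Nothing about the conversation requires software, of course; the point is only that the leaders’ own numbers, captured before the ratings, become the yardstick.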
This is just one example where a simple understanding of this pervasive bias can be used to your advantage in controlling human behavior with psychology. If you know when and where it works, you can debunk it — or even use it to your advantage.
And it doesn’t even need to be covert.
The organization’s performance (judged as a group) is simply used to govern or frame the assessment of individuals’ performance.
You can do this every time a similar review comes around – the impact is real, powerful, and almost certainly more objective. Best of all, there’s no ‘bad guy’ as a result of your action. Nobody’s being forced to do anything.
Just another day at work — you’ve got a soccer game to attend.
Psychways is owned and produced by Talentlift, LLC.