Measuring up – on evaluating performance

I have seen numerous performance evaluation systems throughout my career. Few, if any, of them have struck me as very effective. Most seemed to lack basic fairness or a meaningful connection to anything outside of the evaluation system itself. More often than not, they were motions we went through because someone said we were supposed to.

Old school #measure

In business school, you learn that the way most companies do performance evals is completely bonkers. There are many reasons why. A big one is that most humans just aren’t very good at evaluating others’ performance, particularly on intangible tasks (which, of course, describes many knowledge workers’ jobs). It also turns out that most people are extremely uncomfortable evaluating others at all – let alone in person, and least of all when it comes to people with whom they frequently interact. The psychology literature is replete with replicable examples of how tiny, random environmental influences – “priming” – can dramatically change people’s perceptions and moods. (Protip: have an annual review coming up? Arrange it somewhere bright and yellow.) We simply cannot escape our own psychology, which makes us notoriously unreliable evaluators.

Yet despite all this, performance evaluations exist, as they should. Some employees do contribute more than others. Companies need a way to identify and reward top performers and help lower-performing ones improve (or leave). Inevitably, compensation and promotions are involved as well, making fine distinctions about who contributes what, and how much, highly desirable. And of course, you can’t forget the role performance evaluations play as legal CYA when it comes to disgruntled employees.

Figuring out how to accurately measure, forecast and improve employee performance in a complex organization is incredibly difficult. Virtually no one has solved it – certainly not me. Yet drawing the right conclusions about performance is the difference between rationally managing your company’s precious resources and relying on emotional whim and political wrangling.

Collaboration is a headache for measurement

Back on the assembly line, evaluating employee performance was easy. Employees were interchangeable cogs who could be easily (and accurately) compared by the number of widgets they assembled per hour. But most knowledge workers aren’t like assembly line workers. Quantitative output is explicitly not the goal – rather, it’s about quality, creativity, innovation, effective collaboration, strategic understanding, communication, execution… the list goes on. Not only are these things more complicated than a simple numeric quantification, but they’re also exactly the kind of intangibles that humans are demonstrably awful at evaluating accurately or fairly.

Teams or business units as a whole are often evaluated (by necessity) on some kind of hard, often financial, metric: conversions, ACV, user engagement, new pipeline created, etc. Progress towards these goals is usually driven by individuals working in collaboration within networks – and yet, except in edge cases of obvious incompetence, individually attributable, incremental contributions towards those goals are often extremely difficult to unwind.

We all know people who try to overcome this by engaging in “conspicuous productivity” – calling lots of status meetings, over-sharing progress updates, aggressively CCing email, humblebragging about how busy they are, etc. This can give the impression of being more productive than is actually the case because it increases an individual’s visibility.

People do this when they don’t know how else their work will be appreciated and valued: if you fix an important feature, but your boss doesn’t notice, did it ever happen? This is the kind of question that drives managers, not to mention executives, nuts, because they just want the job done. But for the rank and file, visibility and profile matter an awful lot, because they’re often what gets you noticed and promoted (to the level, ironically, where you have the luxury of being annoyed by your direct reports’ concerns about visibility).

The alternatives

All of this puts managers in a difficult position. Managers of teams whose work is intangible may develop a good sense for who’s contributing more and who isn’t – or they might not. The reality is that the traditional career track typically leads to people-managing responsibility whether or not the person promoted is skilled (let alone trained) in performance evaluation. It’s one of those things everyone thinks they know how to do – and which psychologists consistently show most of us don’t.

360-degree feedback systems are a better, but still imperfect, approach. In collaborative teams, 360-degree feedback can give a fuller, more holistic sense of which team members are contributing, and in what ways. Online systems that offer anonymity and specific, understandable evaluation criteria are critical to making such an approach work without baked-in bias. Companies like Microsoft and IBM are racing to develop workplace collaboration tools that can generate social graph reports for specific teams, showing who is communicating with whom, how often, and at what times. It might seem a little invasive, but that kind of data can offer valuable insight into how (or whether) those teams collaborate effectively towards their goals.
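
To make this concrete, here is a minimal sketch of how such a report might be derived – not any vendor’s actual product; the message log, names and fields are all hypothetical – in Python:

    from collections import Counter
    from datetime import datetime

    # Hypothetical message metadata a collaboration tool might log:
    # (sender, recipient, timestamp). Note that message *content* is
    # never needed -- only the pattern of who talks to whom, and when.
    messages = [
        ("alice", "bob",   datetime(2016, 3, 1, 9, 15)),
        ("bob",   "alice", datetime(2016, 3, 1, 9, 40)),
        ("alice", "carol", datetime(2016, 3, 1, 14, 5)),
        ("bob",   "carol", datetime(2016, 3, 2, 11, 20)),
        ("alice", "bob",   datetime(2016, 3, 2, 16, 55)),
    ]

    # Who communicates with whom, and how often: count unordered pairs.
    pair_counts = Counter(frozenset((s, r)) for s, r, _ in messages)

    # At what times: bucket traffic by hour of day.
    hour_counts = Counter(t.hour for _, _, t in messages)

    for pair, n in pair_counts.most_common():
        print(" <-> ".join(sorted(pair)), ":", n, "messages")
    print("Busiest hours:", hour_counts.most_common(2))

Even this toy version surfaces the report’s basic shape – which pairs interact, how often, and when – without reading any message content.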

It bears mentioning, however, that the old stack-ranking performance and incentive system is pretty much inimical to all of this. When leadership decides ad hoc that only X% of employees will be rated high-performing, Y% average and Z% below-average – irrespective of the actual performance distribution in the organization – employees are incentivized to compete with one another for those scarce high-performance scores. Strategic gaming inevitably ensues, particularly in bigger organizations where competitors are asked to rate one another. It’s a recipe for a toxic environment.
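
To see the distortion concretely, here is a minimal sketch in Python – the scores and quota percentages are invented for illustration – of what a forced curve does to a tightly clustered team:

    # Hypothetical scores for a team that is, in fact, uniformly strong:
    # everyone lands within nine points of everyone else.
    scores = {"ana": 91, "ben": 90, "cho": 89, "dev": 88, "eli": 87,
              "fay": 86, "gus": 85, "hana": 84, "ivan": 83, "jo": 82}

    # Forced curve: top 20% "high", middle 70% "average", bottom 10% "low",
    # no matter how tightly clustered the actual scores are.
    ranked = sorted(scores, key=scores.get, reverse=True)
    n = len(ranked)
    high_cut, avg_cut = round(0.2 * n), round(0.9 * n)

    buckets = {}
    for i, name in enumerate(ranked):
        buckets[name] = "high" if i < high_cut else ("average" if i < avg_cut else "low")

    for name in ranked:
        print(f"{name:5s} score={scores[name]} -> {buckets[name]}")
    # "jo" is labeled low-performing despite scoring 82 against the top's 91:
    # the curve, not the performance, produced the label.

The arithmetic isn’t the point; the point is that nearly indistinguishable performers receive categorically different labels – exactly the outcome people are then incentivized to game.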

How to think about performance

I don’t have a different system for performance management to offer. But understanding the limits of what we’re able to measure in the first place, and the biases inherent in human psychology that consistently and predictably warp our evaluations, is critical to understanding how companies can best use the talent under their roof.

It’s also important because a huge swath of the business literature out there radically exaggerates the scope and meaning of corporate performance evaluations – and thus comes to all the wrong conclusions. Anytime you see a study referring to “top performers” at a given company, cue immediate skepticism: how were they designated top performers? Was it by their colleagues, by executives, or by some sort of objective standard? The methodology varies by firm, which makes comparisons between firms, in my opinion, highly suspect.

Legacy performance management systems are among the many holdovers from old-fashioned analog business culture that still persist, but their end is coming. “Work” is becoming more collaborative and digital every year, and the tools we have to measure its impact are quickly evolving (though not nearly fast enough). New systems for managing employee performance will be as much about taking advantage of these tools as about cultural change itself – the hardest rock of all to budge. Fortunately, nothing forces culture change like a crisis.
