Performance standards - are we on the same page?
Part 2 of our three-parter on "performance management"
Dear all,
I’m back with the second part of this short series about “performance management” and my specific philosophies around it. It’s essentially me resharing an older post of mine in more manageable bites, with some minor changes, in the hopes that it will resonate with as many people as it did last time, or maybe even more.
In the first part we looked at why receiving major feedback as a complete surprise blows. This time, I’ll delve a little into what even constitutes “good performance”, and how to make sure everyone agrees on that really important bit.
Rule #2: “Performance management” requires extensive foundations
It follows fairly organically from my previous point that we can only measure performance against standards that the employee and the manager agreed upon beforehand. This requires a fair amount of preparation on both sides.
First (but not really), we need tangible answers to the question: how can I measure that my work is done well?
For instance, if my work is to ensure the quality of the product on the user end, then I know that what ultimately reflects my job being done well is happier and more trusting users. This can potentially be measured in fewer bug reports coming in, or maybe fewer of the same kind. You get the idea.
In order to do that, though, first (for real this time), I have to define what my responsibility even is. Maybe Quality Assurance is no longer one of my actual areas of responsibility due to technical changes or restructuring, and nobody ever explicitly addressed that with me. Maybe my new role is updating our theoretical and research database with relevant knowledge. So then why am I keeping count of these tickets again?
Surprisingly often, people are not clear on what their role actually is. (I wanted to insert a citation here, but it was impossible to choose just one or two of the million articles that come up when you google “unclear work role in organisation”.) And a lack of clarity in the role itself leads to arbitrary performance measures that may or may not be met. Setting “performance” standards without clarity in the role risks measuring the wrong things. It might look like an employee is doing everything right yet things aren’t moving forward the way the department expects, or it might look like the employee is missing their goals while in the big picture everything is going more or less okay. Both of these scenarios point to a fundamental misalignment between expectations and understanding. It’s really hard to do a good job against that backdrop…
These are surefire signs that the performance standards are disconnected from the employee’s actual job. Performance standards need to make sense, or else they’re neither truly objective nor transparent. So: invest time in defining people’s roles together with them, make this a useful document, define performance standards on top of it, bam.
At the end of the day, these standards need to be robust enough for anyone to look at them and be able to judge whether the job is being done well.
To define roles, key result areas and performance standards in writing, I lifted the basic idea from this episode of Coaching for Leaders and made some adjustments to fit our specific context.
But be careful. A missed metric, no matter how well defined, is not by itself an objective sign that the employee is doing a bad job. That’s where assuming the best of employees comes in, as well as shifting our focus from managing performance to managing resources.
I’ll tackle that one in the next instalment, so stay tuned!
And until then: Dare to do things differently. More collaboration, less exploitation. From there, we can only win. Together.
Cheers,
Emil

