Saturday 18 May 2013

Using measures to improve performance

An effect I've noticed managing teams both in industry and education...

To get something to improve, just find a way to measure it and make the results visible to the team responsible.

I've found that you rarely have to make a big deal of the measurements - just make sure the team knows they exist and make it possible for the team to access and compare them for themselves. Out of professional pride and natural competitiveness, most team members will use the available data to make comparisons within the team without being asked. If they see that their performance is below that of others, they will usually take action to improve, again usually without being prompted.

Over time this effect can raise the performance of the whole team without any specific input at a management level other than celebrating successes.

Sounds too easy?
Actually it's a little more complicated, because the most important thing is choosing what to measure... and that is far from straightforward.

Vitally, you need to measure exactly what you really want to improve, and that is not necessarily aligned with what is easy to measure!

Using the wrong measure will give you the wrong improvement
For example, in a bid to improve standards, schools were told that the percentage of students achieving 5 GCSEs (or equivalents) at grades A*-C was important. A large number of schools therefore started using a range of equivalent qualifications that were perceived as easier to deliver than a traditional GCSE, as these helped them to improve their scores on this measure. The schools were then criticised for "cheating" the measure. However, the real problem was that the chosen measure was not closely enough linked to the improvement that was wanted. If the measure allows a "shortcut" then someone will find it and take it.

Similarly, even when students leave school with a fantastic set of grades, we still get employers or higher education institutions complaining that those students don't have the right skills once they arrive. In this case it is clear that the exam system that assessed them as high achievers hasn't assessed the skills that the next institution wants to see. Is it the teacher's fault for not teaching/fostering these apparently missing skills, or is it the fault of the curriculum and assessment structure for not detecting and highlighting the lack of them, first to the teacher and later to the next institution?

Processes naturally tend towards optimisation
Over time all systems become optimised to deliver the outputs they are assessed against with the minimum effort. This is true in every sector and industry where humans are able to tweak the process. We naturally, and often unconsciously, adjust a system to make it as easy as possible to achieve the required output. It is true when an operator skips a step to save time on a production line, it is true when we cut across the grass rather than walk around the path, and it is true in a classroom.

If an exam structure remains in place for some time it is natural for grades to rise as teachers become more used to preparing students for it. It doesn't mean that students have got cleverer, or that grades are being inflated, just that the system delivering them to the exam has become more efficient/effective.

It's not even really about teaching to the test (I don't think there are large numbers of teachers choosing to teach only what is examined and nothing more). However, when curriculum time is limited and you have a choice between doing something that directly contributes to exam success (which you and the school are measured on) and something that doesn't (which you're not measured on), most people will gravitate towards the former simply because the results are more tangible later on.

Performance pay
I'm not looking to debate government policy in this blog; however, if you make a link between exam results and salary levels/progression, all you will do is increase the likelihood of more people teaching explicitly to the test. I'm not going to say any more on this, but an interesting perspective on drive and payment can be found in this video.

Implications for managing a department
Back to my real reason for writing this - how can this principle of measurement be used to improve a maths department? I think there are 6 basic steps...

1) Decide what you want to improve
2) Find a way to measure it (which may mean collecting new data or processing data in a different way) - this is the most important part - don't rush it!
3) Make the analysis of the measurement data easily available to those responsible for it - talk about headline figures, but not specifics at this stage.
4) Give the team time to draw their own conclusions on it - in my experience the majority of professionals will do the analysis and take action themselves, and the results will improve.
5) Keep measuring over time and celebrate all improvements as publicly as possible.
6) If (and only if) nothing changes over a period of time then use the data to challenge under-performance (you'll need to use judgement on what time period to assign to this - it depends on what you're measuring and how quickly you NEED to see a change)

From my perspective the key benefit of this approach is that it brings about improvements that are organic and sustainable over the longer term; the drive for improvement comes from within the team as the result of self-reflection and analysis. It also builds professional pride, as team members can claim ownership of the improvements.

By contrast, improvements made as a result of specific management intervention (as per point 6) are imposed externally and may therefore lapse once the intervention is removed. Clearly this type of action does need to happen from time to time, but it shouldn't be the day-to-day approach (i.e. it's a tactic, not a strategy).

Examples:
Want KS3 test results to improve?
Ensure everyone is doing comparable and equivalent tests and collect the data centrally (we do the same tests at the same time across our KS3 year groups). Make comparisons of results vs student targets automatically available on the central spreadsheet (they need to be clearly visible for easy comparison by class and by teacher) - a minimal sketch of this kind of summary follows below. Just make sure the department knows the data is there - you don't need to point out who has the highest or lowest scores. Then watch as the results begin to improve over time.
[Potentially this could have even more impact if you can make the data visible to the students as well - they are responsible for the results as well as the teacher - like we did with the KS4 results in point 1 of this earlier post - a next step for us is to make this kind of information visible to KS3 as well]
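As a rough illustration of the kind of summary I mean, the sketch below reads the central spreadsheet (exported as a CSV) and prints the average gap between result and target by class and by teacher. The column names (teacher, class, result, target) and the file name ks3_results.csv are assumptions for illustration only, not a prescribed format.

# Minimal sketch: summarise KS3 results vs targets by class and by teacher.
# Assumes a CSV export with hypothetical columns: teacher, class, result, target.
import csv
from collections import defaultdict

def summarise(path):
    by_class = defaultdict(list)
    by_teacher = defaultdict(list)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            gap = float(row["result"]) - float(row["target"])
            by_class[row["class"]].append(gap)
            by_teacher[row["teacher"]].append(gap)
    # Average gap: positive means results are ahead of targets on average.
    for label, groups in (("Class", by_class), ("Teacher", by_teacher)):
        print(f"\n{label} summary (mean result - target):")
        for key, gaps in sorted(groups.items()):
            print(f"  {key}: {sum(gaps) / len(gaps):+.2f} over {len(gaps)} students")

if __name__ == "__main__":
    summarise("ks3_results.csv")

The point is that the output shows the same comparison for every class and teacher side by side, so the team can draw their own conclusions without anyone being singled out.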

Want to increase the percentage of exercise books marked with feedback to a particular standard?
Make sure the whole team knows what is needed to conform to that standard. Collect in samples of books and simply count how many met the standard. Report the findings at a department level - just highlight how many met the standard and how many didn't. Don't target individuals - if you've been clear enough about the standard then they will know which category they fall into. Do it again after a few weeks and you should see an improvement.
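To make the department-level reporting concrete, a sketch like the one below is all that's needed - it takes a list of pass/fail judgements from the book sample (the sample data here is made up for illustration) and reports the counts and percentage, with no individuals named.

# Minimal sketch: report how many sampled books met the marking standard,
# at department level only (no individual teachers identified).
def report_marking_sample(judgements):
    """judgements: list of booleans, one per sampled book (True = met standard)."""
    met = sum(judgements)
    total = len(judgements)
    print(f"Books sampled: {total}")
    print(f"Met the standard: {met} ({met / total:.0%})")
    print(f"Did not meet the standard: {total - met}")

# Hypothetical sample of 12 books from across the department.
report_marking_sample([True, True, False, True, True, True,
                       False, True, True, False, True, True])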

Fundamentally "If you can not measure it, you can not improve it" (Lord Kelvin)
Also remember that if you stop measuring once an improvement has happened, you should expect a decline over time. Regardless of how professional and dedicated your team is, their focus will naturally drift to what is being measured.

Thoughts welcome
As ever I'm keen to know any thoughts on this - do you agree or disagree? Do you have examples where it wouldn't work? Could you apply it elsewhere? Do you need to share ideas about what to measure for a particular improvement? Leave a comment or find me on Twitter: @ListerKev
