
Monday, 11 January 2021

Department for Education - Requires improvement or Inadequate?

Like many other sectors, the English exam system has been thrown into turmoil by the impact of the Covid-19 pandemic. For the first time ever, exams were cancelled in the summer of 2020, and now they have been cancelled again for 2021. All of the changes made as part of these cancellations have been made under the direction and leadership of the Department for Education, with Gavin Williamson, the Secretary of State for Education, having overall responsibility.

With the cancellation of the 2021 summer exam series for A-levels and GCSEs, the schools and colleges responsible for exams are once again in a state of confusion, and another cohort of students is currently uncertain about how their qualifications will be awarded. The exam system has now been in turmoil and uncertainty for 10 months, and that is set to continue at least until GCSE and A-level results are released in August 2021.

Of course there have been impacts in many sectors, and the perhaps overused but appropriate labelling of 2020 and 2021 as “unprecedented times” may excuse some aspects of this uncertainty. However, when the changes to exams for 2020 and 2021 are looked at as a whole, a pattern emerges of missed opportunities and warnings that have been ignored. Let’s start in March 2020…

Closure of schools in March 2020

When Gavin Williamson announced on 18th March that schools would close to the majority of pupils from 20th March, it instantly raised the question of what would happen for those students with exams scheduled at the end of the academic year. (announcement)

What the DfE could have done is place exams “under review”, at least for a few weeks, while the impact of school closures was evaluated and consultations could be progressed on possible routes for providing grades. Keeping exams under review would have given students in exam years a clear reason to stay engaged with schools and complete their courses. Through review and consultation with schools and colleges, alongside Ofqual and the exam boards, it may have been possible to establish some form of structured, standardised, moderated assessment that could be conducted later in the academic year.

What the DfE actually did was immediately state on 18th March that all exams were cancelled for summer 2020. With schools closed and no exams, it became clear that large numbers of Year 11 and 13 students were disengaging from their school and college courses altogether.

In April 2020 Ofqual issued guidance making it clear that any work done by students after 20th March must be treated with extreme caution (see here). This instantly made any further work set for, or done by, examined students pointless, and therefore most schools and colleges stopped setting any at all for exam years.

The impact of these decisions was that many Year 11 and 13 students disengaged from school or college altogether, and as a result many of them will not actually have completed the courses in which they now hold GCSE or A-level qualifications.

Keeping exams under review in the early days of school closures would certainly have increased pressure on schools, which would have had to include provision for exam years as part of their hurried moves to remote learning. However, ruling out exams altogether at such an early stage certainly harmed the integrity of the qualifications that those students now hold.

Method of awarding grades in 2020

What the DfE could have done is use Centre Assessed Grades (CAGs) alongside statistical models, which is in fact what they said they were going to do (see below). This should have ensured there was a process to compare CAGs with the grades expected from statistical models, building in allowable variances to recognise that professional judgements of the 2020 cohort may not conform perfectly to historical statistics. Where CAGs were clearly out of line with statistical models, supporting justifications could have been sought from schools and colleges (in fact the Ofqual guidance issued in April 2020 [here] stated that centres should retain records of supporting evidence “… in case exam boards have queries about the data”). If justifications were accepted then the CAGs could stand; where justifications were insufficient the statistical model could have overruled, or perhaps, where there were large differences, there could have been an element of splitting the difference.

This would of course have required discussion of contentious grades between exam boards and exam centres before results days; however, there was plenty of time to plan the processes for this. Alternatively, if the results could not have been discussed in advance, a fuller appeals process could have been established that would allow schools and students to submit convincing evidence where an individual was badly served by the statistical model.

Included in all of this could have been the general principle that the statistical model would not raise grades awarded by centres. While it is logical, natural even, for teachers to err on the side of optimism when awarding grades, if a professional has stated that a student should receive a B grade at A-level it is likely because they feel the student genuinely isn’t of sufficient ability to secure an A grade.

What the DfE actually did was design an algorithm that only applied statistics to the rank orders provided by centres and allowed no substantive scope for appeal.

While the Ofqual documentation stated that CAGs would be considered, the algorithm only considered the rank order of students, with the CAGs completely ignored, even when there were multiple grades’ difference between the CAG and the statistical expectation.

Despite stating that schools should keep suitable evidence to support grading decisions, there were no processes in place to use this evidence as part of any appeal, and no proactive action was taken to query discrepancies between CAGs and the algorithm. Where the algorithm assigned a grade that was different to the grade given by teachers, the algorithm was taken as correct without question.

The algorithm artificially raised some grades when it calculated that the school was entitled to more high grades, even if the teachers knew for a fact that the students did not deserve those grades. It also lowered other grades from the CAGs when the statistics didn’t fit the individuals, completely ignoring the grades that teachers had assigned, with no scope for hard evidence to be submitted in challenge.

The process even forced schools to rank students differently when their performance was academically identical on all school measures. When the algorithm was applied, students on the same CAG but a different rank were treated very differently, meaning students on the same CAG were awarded different grades, sometimes 2 or 3 grades apart.

Early warnings of the risks of an algorithmic approach emerged with the issue of results in Scotland, where the devolved Scottish government rapidly U-turned and reverted to awarding CAGs. This was ignored and dismissed by the DfE for England, and A-level results were issued via the algorithm on 13th August regardless.

The appeals process established was based only on challenging factual errors in the application of the algorithm, or on large-scale statistical differences that the algorithm failed to account for. There was no scope to appeal small-scale local departures from the statistics, or clearly unfair outcomes at an individual level.

On 17th August, shortly after results were issued, the decision in England was overturned, reverting to CAGs while also retaining any grade uplifts from the algorithm. This meant that not only was there an element of grade inflation from the CAGs, but this was actually increased via the algorithmically uplifted grades.

On 27th August 2020 Boris Johnson placed the blame for the resulting fiasco on a “mutant algorithm” (here). As an A-level Further Mathematics teacher who teaches a unit on “Algorithms” I can assure you that algorithms do not mutate. The key property of an algorithm is that it does specifically and precisely what it is designed to do and nothing else. The fact that the outcome did not deliver the desired results is entirely the responsibility of those who designed, tested and quality-controlled the algorithm.

The fundamental flaw was that the algorithm and all the related quality assurance processes were focused only at a headline statistical level. Nationally the algorithm worked, in that it produced broadly the right pass rates, and the same can even be said at whole-school level. However, because the algorithm paid absolutely no notice to the CAGs as judged by professionals, and ignored the impact at an individual student level, it completely failed to build in year-to-year or student-to-student variation.
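As a toy sketch (emphatically not the real Ofqual model – the function, names and numbers here are all invented for illustration), this is the kind of behaviour a rank-order-only approach produces: a historical “entitlement” to each grade is dealt out down the rank order, and the CAGs never enter the calculation, so students on identical CAGs can land either side of a cut-off.

```python
# Toy illustration only (NOT the actual Ofqual algorithm): the centre's
# historical distribution "allows" a fixed number of each grade, which is
# then dealt out down the rank order. The CAGs play no part at all.

def rank_only_award(ranked_students, cags, allowed_counts):
    """ranked_students: names in centre rank order (best first).
    cags: name -> Centre Assessed Grade (ignored by this model!).
    allowed_counts: list of (grade, count) from historical statistics."""
    awarded = {}
    position = 0
    for grade, count in allowed_counts:
        for name in ranked_students[position:position + count]:
            awarded[name] = grade
        position += count
    return awarded

students = ["Asha", "Ben", "Cara", "Dev"]            # centre rank order
cags = {"Asha": "A", "Ben": "A", "Cara": "A", "Dev": "B"}
history = [("A", 1), ("B", 2), ("C", 1)]             # historical "entitlement"

result = rank_only_award(students, cags, history)
# Asha keeps an A, but Ben and Cara - on exactly the same CAG - drop
# to B, and Dev drops to C, purely because of where the cut-offs fall.
```

Because the forced strict ranking decides everything, two students the school judged identical are guaranteed different outcomes whenever a grade boundary falls between them.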

The impact of these decisions was massive grade inflation across GCSEs and A-levels for 2020, combined with public uproar and a loss of faith in the exam system. Delays to the decision meant that A-level students whose grades changed missed out on university places in 2020 because courses had already been filled by other students.

Start of the school year in September 2020

What the DfE could have done is state that the intention was to run exams in the summer of 2021 but ensure that there was a parallel system in place to guide schools in building a strong evidence base to support potential CAG use in case of disruption.

Possible examples of a parallel system include clear national standards for the timing and nature of mock exams, perhaps even specifying common exam papers to use nationally so that there is a central reference point across schools. Alongside this, or alternatively, the exam boards could have established some core common assessments for each subject, to be done in schools at key points during the year, giving a common base of comparison.

Schools could have been given clear and early direction on “Special Consideration” and how it applies to disruption due to the ongoing Covid-19 pandemic (special consideration is the process through which schools make requests to exam boards to account for pupil illness on the day of an exam, or other significant disruption to their exam or preparation). This guidance might, for example, have laid out how many days of isolation or remote learning would qualify for special consideration. Perhaps it could have clarified whether it matters when in the year isolation or remote learning occurs, or whether specialist teacher absence should or could be accounted for. If a student is made unwell by Covid-19 at any point in the year, should consideration be made? If a student is affected by Covid-19-related bereavements of family, close friends, or teachers, should special consideration be given? Should account be made for household impacts such as illness, loss of income, or the furlough of parents?

The above guidance could have been put in place at any point during the autumn term, or at least plans established to put it in place, with a clear process of consultation and a defined point at which a decision would be made. Particularly as Covid-19 cases rose nationally from September through December, and attendance at schools was being substantially impacted, there should have been some response along these lines.

What the DfE actually did was repeatedly state the intention to run exams.

On 12th October Gavin Williamson announced a delay to the exam season to allow students more time to prepare (here). As part of this he committed that the exams would be “underpinned by contingencies for all possible scenarios”, with a promise to consult further with stakeholders and publish more detail later in the autumn term.

On 3rd December Gavin Williamson announced some further measures to support exams in 2021 (here). These extra measures included a suggestion that students would be told some exam topics in advance, and more help in terms of reference materials in exams; however, no actual detail was given at this point, so schools and students could not respond in any way other than preparing for exams as normal. He also announced that grading would be more generous than normal, which simply means that the grade inflation of 2020 would be carried across into 2021. While he announced an expert group to monitor the variation in learning across the country, he gave no actual detail that schools could work with and plan on. There was no confirmation of any form of contingency planning.

When Scotland cancelled their National 5 exams in October (here) and later cancelled their Higher exams in December (here), and the other home nations announced moves away from exams, the DfE resisted all calls to consider alternatives and put contingencies in place for England. This position continued to be publicly reinforced, including adherence to holding exams right up to the end of December (here). On 30th December Gavin Williamson emphasised that exam years would be at the “head of the queue” for rapid Covid testing as part of schools reopening in January (here). This position remained the case right up to the point when schools were closed as part of the January 2021 lockdown (here).

The January 4th lockdown announcement (here) stated that “it is not possible for exams in the summer to go ahead as planned.” This also stated that the DfE would “be working with Ofqual to consult rapidly to put in place alternative arrangements that will allow students to progress fairly.”

On 6th January Gavin Williamson changed direction further by announcing that all exams “will not go ahead this summer” (here). This is despite schools being expected to reopen and welcome exam years back well before the end of the academic year. Stating in January that all summer exams are cancelled closes the door on formal standardised testing in any subject, instantly reducing the potential for reliable comparisons across exam centres. The statement included the claim that “The department and Ofqual had already worked up a range of contingency options”, yet it gave proper detail on none of them, despite the DfE apparently having worked on all possible contingencies since 12th October. The only fragment of detail given was to confirm that he wishes “to use a form of teacher-assessed grades, with training and support provided to ensure these are awarded fairly and consistently.” So on the face of it the plan for this year is to do the same as last year, overseen by exactly the same people and regulators as last year. The only difference, apparently, is that this year they intend to train teachers better, to make sure they don’t make the mistakes the algorithm designers made last year. In 2020 they made a similar statement that teachers and schools would be given guidance on how to award CAGs fairly… We know how that went.

Even with lockdown in place the government also allowed the January vocational exams to go ahead. When questioned on this they advised schools that they could run the exams “where they judge it right to do so” (here), but gave no guidance at all on how to judge whether it was right or not. There are no definitions of what would constitute “safe” or “unsafe” conditions for running an exam in a lockdown, and the DfE have resisted all calls to provide them.

Throughout all this, no guidance at all has been given on how “special consideration” might relate to Covid, and no timeline given for when guidance might be shared. In fact there has been no indication that any account will be taken of the November series of exams, the January vocational exams that do run, or any other assessments that have happened or will still happen this year.

The impact of these actions is that, despite statements that contingency plans were being made, the exams have been cancelled again at short notice with no detail at all on what those contingencies will be. At every stage the DfE, Ofqual and the exam boards have been completely unprepared for disruption to the year, leaving schools without guidance from these vital organisations. While it has been obvious since the first school had to isolate its first child in September that there would be some form of disruption to the 2021 exams, the lack of available detail, or of action to safeguard comparable assessments during the year, shows a complete lack of foresight.

Schools, colleges, and students in exam years have once again been left with no guidance on what is to become of their qualifications. There are now genuine fears that we will have a repeat of the grades debacle of last time, as exactly the same people are at the helm and absolutely nothing has been changed in the meantime to provide more robust preparations for CAGs. Schools are again left unclear on the guidance to give to parents and students.

Overall then….

At every decision point through the pandemic there was a route that would have given a more strategic response, clearer guidance and better outcomes for students and the wider education sector. These more strategic responses do not require hindsight, they simply require foresight and understanding of the education and exams sector. If staff at the DfE and at Ofqual do not have this foresight then they need to acknowledge this and consult those that do as a matter of routine, ideally educating themselves to the point that they can provide this too.

Despite there being opportunities to find a better path at every stage, the DfE has chosen to take actions that actively reduce guidance, limit the opportunities for robust assessment and harm outcomes for students. Every decision has been taken too late, and right up to the point of each U-turn the DfE and wider government have continued to actively brief and reinforce the previous line, sometimes even up to the point at which a different decision is announced. This is so consistently the case that it could almost be viewed as a systematic attack on the education and examination system, with last-minute changes enacted specifically to destabilise the entire sector.

To make one or two mistakes or errors of judgement during the unprecedented times of Covid I could understand. To make mistakes and errors of judgement at EVERY stage, however, requires active incompetence. To ignore the likelihood of disruption to the 2020-21 academic year and make no contingency plans at all is a sign of complete negligence and a lack of leadership. From the point of cancellation of exams in 2020 there should have been plans in place to make sure we had better information to work with for 2021, and clear parallel processes established.

Let us remember here, this is Gavin Williamson’s job, alongside all of those who work for him in the Department for Education and Ofqual (Ofqual has an annual budget of the order of £17.5 million). Williamson is paid a ministerial salary of £67,500 per year (in addition to his basic MP’s salary of £74,962) to do this role. Given the lack of action and the negligence evident above, I have to ask what he, the DfE and Ofqual have actually been doing to improve the exam and qualification system since March 2020 (or since he assumed office in July 2019, for that matter)?

They say “to err is human”… To repeatedly and systematically ignore advice and fail to make adequate contingency plans is at best incompetent, certainly negligent, and at worst deliberately undermining. As a country we simply cannot continue treating students in exam years, their parents, and their schools like this. The Department for Education has now been complicit in the undermining of qualifications two years in a row; in Ofsted parlance they wouldn’t just “require improvement”, they would be “inadequate”. A school this badly run would have been placed in special measures and its leadership team replaced.

Despite this catalogue of failings (and that doesn’t even scratch the surface of the turmoil in the Primary, Early Years and University sectors for which he is also responsible), Gavin Williamson then has the temerity to suggest that parents should complain to Ofsted if their remote learning isn’t up to scratch, just two days after the government made the U-turn to close schools (here).

This isn’t party political – it’s about leadership, or more specifically the lack of leadership of the DfE, and that is true regardless of the political affiliations of those involved.

Reflecting on all of the above, I make you this offer – pay me £67,500; it doesn’t even need to be in addition to my teaching salary… in fact just pay me a basic teacher’s salary and I guarantee that I will make substantially more impact on improving education in my first 3 months than Gavin Williamson has in the 17 months he has been in post…

We cannot continue with an inadequate DfE...


Tuesday, 13 October 2015

School's own data in the Ofsted inspection data dashboard

Hopefully, if you are involved in secondary school data, you will already be aware of the publication of the Ofsted "Inspection dashboards", which are available via the RAISEonline website.

However, there is an issue with the data shown in these dashboards: for many schools in 2014 there is a difference between best-entry and first-entry results, meaning that the dashboards may distort the picture. For some schools this may also be the case for 2015.

Furthermore, the dashboards published so far do not allow comparison with the latest data - potentially imminent in its release - so it's useful to know what to expect.

To try to overcome some of these issues while still presenting the data in a format very similar to the official dashboard I have thrown together a spreadsheet that emulates the Ofsted layout as much as I can (given Excel's limitations).

The file can be downloaded here: http://bit.ly/Inspdashboard.


The file as shared contains dummy data (mostly derived from the anonymised version of the dashboard available in the RAISEonline library), which you can overtype with data from your school. Obviously you'll need to do the actual calculations yourselves, or with whatever package your school uses to process/estimate the stats; the sheet simply presents the data in a way that is similar to Ofsted's official version.

Note that this was made so that I could present data within my school, so I'm afraid some of the sheets are a bit fiddly to use - I've not had the chance to make it slick, but it does work if you take the time to enter your data! On most of the sheets it's fairly obvious where the data entry goes - I'm sure you'll figure it out with a bit of trial & error.

My preferred way to print the file is to get Excel to print the entire workbook. It does come out with a blank page at P6 when done this way - I've not been able to fix that, so either ignore it or don't print that page.

Hope this is useful!

Saturday, 11 April 2015

The Pareto principle and the great divide

I just scared myself looking at my last post...it was at the end of January! So far this academic year has seen just 5 posts to my blog - in previous years I averaged 1 post per week during term time. In fact I've just passed the 2 year anniversary of starting this blog and I've now written less since September 2014 than I have at any time since I started it. I think it's about time I explained why...

SLT role
Good grief, it's busy on SLT! I wrote a post (here) back in October about my first month on SLT and the multitude of unexpected pressures that suck away your time. In all honesty I've had to stop posting to my blog so regularly because I've simply been so busy that I couldn't justify any more time sat in front of a computer writing a blog post each week. I've wanted to post; in fact I've started a decent number of posts and then saved them as drafts because I wasn't able to finish them to a point where I'd be happy to share them publicly. Indeed, half of this post was written months ago, but it seems to fit with what I wanted to say today.

This isn't intended to be a whinge about workload or that kind of thing so don't worry about digging out the violins, however I do have some observations about the tensions caused by the SLT role.

Is there a great divide?
As I got closer to, and then started, my new role as assistant head, I became increasingly aware of the perceived divide between SLT and middle leaders/mainscale teachers. I know it's not just at my school - I've seen it at every school I've been in - but it becomes really clear as you get up close to it and then jump across into SLT. I think it's particularly visible to me because I stayed at the same school.

It started with the jibes from my fellow Friday-afternoon staff footballers, who were counting down the number of games I was likely to manage before I became "too important" to play with them. I'm pleased to say this has stopped, as I've carried on playing!

It continues with discussions about timetables: as an assistant head I teach less than half the number of lessons I did as a head of department, leading to jokey conversations about me having plenty of time to think up jobs & initiatives to keep the middle leaders and mainscale teachers busy.

The thing is, the structure of a week for a member of SLT is so massively different to that of a middle leader that they look like completely different jobs. Compare SLT to a classroom teacher and it's even more different - I actually teach only slightly more lessons than a main scale teacher gets for PPA.

I'm not necessarily saying it's wrong, but it changes your perspective on the job massively. A marking policy that is completely manageable on an SLT timetable can become tough to manage as a middle leader, and completely impossible as a main scale teacher. Teaching a difficult group when you have loads of time to prepare and recover, plus a level of seniority to trade with, is very different to having a similar group amidst full days of teaching.

Of course much of the time not teaching as SLT is spent doing management & leadership functions, so the "loads of time" is a fallacy, but the perception is there from those outside of SLT even if it's not the reality.

I like to think I'm fairly approachable; I try to make sure that I spend time talking to colleagues at all levels of the organisation, and I have great working relationships across the school. A great part of that, along with the fact that I wasn't always SLT at this school, means that I think some people are more open and honest with me than they might be with other members of the leadership team. I know for a fact that some will say things to me that they wouldn't dream of saying to others in our SLT. This gives an interesting insight at times...

No time for perfection
I think the biggest actual disconnect I've discovered as part of all of this is the perception from many staff that all actions from SLT are deliberate, and that SLT have complete control over everything that happens in the school. Now, I'm not suggesting that we go around blundering into things and have no control over what goes on, but we are a team of humans, and with that come limitations. Similarly, we work in a setting that has a massive number of stakeholders with a vast array of needs and agendas, and we are subject to legislative, judgemental, procedural and time constraints that limit or shape things in all manner of ways.

Sometimes the size of a team means that not everyone can be consulted properly in advance. Sometimes the speed at which a decision needs to be made means that only the most central consequences can be considered (see the comments about Pareto below). Sometimes an element of planning falls between areas of responsibility, meaning it gets missed. Sometimes a task is simply not done quite as perfectly as it might have been, because a human did it or they ran out of time. All of these are issues to be guarded against, and even planned for and mitigated, but none would be done deliberately. However, I know from experience that the consequences of what I know to be a small human error at SLT level can be seen as a direct decision to undermine or cause issues for other staff.

I've seen a situation where a member of SLT had been working as hard as they could on something, but due to the realities of life in schools it ended up being delivered to a wider audience slightly before it was completely finished. The reception from others in the school was frosty: "it's a half-baked idea", "they've not considered the impact on...", "why couldn't they have told us that weeks ago?" and so on. The expectation from the school as a body is that the SLT have all the answers and have the time to plan everything out fully. The reality is that there is so much going on that there is too often no time to complete any task fully; sometimes it just has to be good enough to get the job done.

Pareto principle for leadership
If you've not heard of the Pareto principle, it stems from the observation by the Italian economist Vilfredo Pareto that 20% of Italians owned 80% of the land. This has been extended to business in various ways, with assertions that 80% of profit comes from 20% of your clients, or that 80% of sales come from 20% of the sales team. It is also used in health and safety, where 80% of injuries come from the 20% most common incidents, and so on.

In my experience it can fairly safely be said that about 80% of the behaviour incidents in a school come from just 20% of the students (you know which ones!). Similarly, about 80% of the high grades come from 20% of the students, and 80% of absences come from about 20% of staff - I could go on...
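If you fancy checking the 80/20 pattern against your own school's figures, the calculation is trivial - rank the counts, take the top fifth, and see what share of the total they account for. Here's a quick sketch (the function name and the incident counts are made up for the example):

```python
# Rough 80/20 check on some invented behaviour data: what share of
# incidents comes from the top 20% of students?

def top_share(counts, fraction=0.2):
    """Share of the total accounted for by the top `fraction` of entries."""
    ranked = sorted(counts, reverse=True)          # biggest offenders first
    top_n = max(1, round(len(ranked) * fraction))  # size of the top slice
    return sum(ranked[:top_n]) / sum(ranked)

# Hypothetical incident counts for 10 students
incidents = [40, 35, 6, 5, 4, 3, 3, 2, 1, 1]
print(f"Top 20% of students account for {top_share(incidents):.0%} of incidents")
```

It's only a rule of thumb, of course - real data rarely lands on exactly 80/20 - but running the numbers like this shows how lopsided the distribution usually is.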

Another aspect of Pareto in terms of leadership is that if you consult 20% of the stakeholders in a given decision, you'll probably expose 80% of the potential issues. Due to time pressures and general day-to-day management constraints, it is common for leaders to have to revert to this 80/20 rule in all sorts of situations in order to see the wood for the trees. Many leaders do this unknowingly, some knowingly, but rarely is it applied in a cold, deliberate way; nevertheless the Pareto principle can be used to prioritise all sorts of things in all sorts of ways. Yes, it's a rule of thumb, but it actually fits reasonably well in many situations, and it does force a level of perspective that can help get the job done.

Of course I'm not saying it is a good thing to be forced to prioritise like this, and if you're one of the 80% of stakeholders not consulted who then raises one of the 20% of issues, I completely understand that you'd feel aggrieved - but that's not really what this is about. What I'm trying to say is that SLT sometimes set themselves up a little as infallible, and are often expected to perform that way by the wider staff. Often, however, what they are doing is their best given the situation they are in and the conflicting demands on their time. Often this means prioritising, and that then leads to some people feeling overlooked.

For SLT, the key thing is to acknowledge that we're doing this and to communicate more clearly with those involved - to be willing to acknowledge that at the bottom of every decision is a human who has done their best, but may not have done it perfectly. For those outside SLT looking in, perhaps consider whether it was physically possible to do it better, or whether the compromise you need to make is actually the right one for overall progress (are you one of the 80% or the 20%?).

Yes, SLT are paid more in order to make decisions; yes, SLT have more time in the week in which to make them - but that doesn't change the laws of physics, and it certainly doesn't make anyone perfect or infallible. Time is finite for all of us. We all have constraints to manage, and differing perspectives.

I think the biggest thing I've learnt as a member of SLT is that I can't always take the time to be a perfectionist, but I can do the best I can given the time and resources I have available.

No, I'm wrong... the BIGGEST thing I've learnt is that once I've had to commit to something that I know might not be perfect I need to avoid beating myself up over it and constantly revisiting it. I've spent too long doing that at various points during the year and it's done me no favours. You don't need to be perfect in order to get it right the vast majority of the time; the important thing is to make sure that even if something isn't quite right it is still one of the least bad options!

This is just me rambling again - sharing the thoughts bouncing round my head - comments always welcome!

Saturday, 4 October 2014

First month as SLT

A few reflections on my first full month as part of SLT since I started my Assistant Headteacher role at the start of September...

If I had to give a single word summary it would be "busy", and perhaps most telling as part of this is that I originally titled this as "First week as SLT", but never even got close to finishing it; writing the title and pressing save was as far as I got! Anyway, these are some of the thoughts that crossed my mind during the past few weeks...

Am I still a teacher?
The first thing that hit me is that I'm now not teaching very much. It's almost a third of a main scale teacher's timetable, and less than half of what I was teaching last year as a head of department. There are whole days when I don't have a lesson at all, and I no longer have a tutor group.

As a result, as term started I struggled a bit with the fact that I'm spending so little time in front of classes - the job balance is massively different and teaching is now the minority of my week. It almost causes me to wonder if I'm still a teacher. At the core I know I am, and the other things I'm now doing can have a wider impact on more students than I had before, even as a head of department. I'm loving the new pastoral side of my role: getting an overview of the college team I now lead, dealing with our students and seeing the progress they're making is brilliant.

Of course this light timetable is one of the things that can be quite divisive in schools: the majority of the teaching staff see SLT apparently swanning about on a light timetable, where being in front of a class becomes the exception rather than the rule.

In many jobs there is the visible bit that outside observers see, and the hidden bit that is only really visible to the person doing the job. All teachers have the visible bit when we're stood in front of a class teaching a lesson, but the invisible bit is planning and marking - hence the popular misconceptions about teacher working hours and holidays amongst the general public. The further the emphasis of a role moves towards leadership the more activities move away from visible "work" and more towards strategic activities that may be completed invisibly.

Perhaps naively I entered the world of SLT with the view that I was already really busy as a head of department, and that one of the things that caused me to be busy was the fact I still had a substantial timetable. I expected that my SLT workload could not possibly be bigger than my HoD workload; my mind argued that while I'll have more management work to do I'd also have more time to do it because I'd have more non-contact time. Don't get me wrong, I wasn't ever expecting SLT to be an easy time, I was not expecting to put my feet up in my office during non-contact times. I always will work hard, but I was fully expecting to be able to manage the workload within a similar pattern to that established as a head of department.

What I've discovered during these first few weeks is that the number of varied ways for the invisible or less visible side of the SLT role to burn up non-contact time is incredible. As a result my workload has massively increased, as I often get far fewer of the management activities done in the time I have available.

Burning time
I might well start a day with just one lesson to teach, but it's not time to kick back and drink coffee all day; there are a multitude of things that will burn off that time and make you feel a bit frantic...

E-mails - I thought I received quite a few as head of maths; it has doubled since joining SLT. Many of them don't need a response as I get copied in on all sorts, but I still need to read most of them to decide that. I have always been quite efficient with e-mails in terms of response times and keeping track of it all, but the recent increase in volume does threaten this a bit.

Meetings - wow, there are lots of them on SLT! Between direct line management meetings, SLT meetings, and meetings with parents, governors and other groups relating to your area of responsibility, it's easy to fill up a large proportion of a week. Of course some are not that efficient, and maybe some aren't needed at all, but as yet I've not figured out which ones...

Being the expert - heads of departments, classroom teachers, admin staff, all appear to expect SLT to have the answer to almost any question relating to the school, and can be visibly disappointed if you don't. In some ways I'm fortunate that I was promoted to AHT at the same school, meaning I do already know about the majority of the systems. However there are still a few changes or aspects new to me or new to the school this year that aren't part of my direct responsibility or past experience that have me scratching my head a bit. For those SLT who are entirely new to a school it must be doubly difficult.

Naughty students - I did a reasonable amount of this as a head of department, but when things escalate further and reach SLT you have to support the wider school staff as and when they need it. When this happens it's always going to interrupt time you'd planned to spend marking, planning, sorting e-mails, making plans for the core area of your responsibility, etc. There is no point arriving at a classroom to lend a hand if the student has already gone to the next lesson - you have to respond when you're needed, regardless of the impact on your workload.

Even when the initial incidents are over there is often time to be spent following up. This might be investigating an incident, finding a challenging student, talking with them, making plans for them with the pastoral teams, contacting parents.

In my second week I was required to write a report for our governors about the summer's exam results. While on lunch duty on one of the days I had planned to get this report completed, I had to deal with a fight between two students and then lost the entire afternoon investigating it and finding the right response for the students involved. The right thing to do was deal with the students, but it blew my plans for the week to bits.

Maintaining teaching quality
In amongst all of this I'm still teaching, and with distractions and interruptions to time intended to be spent planning, marking, etc it can actually be a genuine challenge to keep on top of it and maintain the overall quality of teaching.

I have never bought into the idea that all SLT have to be outstanding teachers. They just need to be 100% reliably good teachers, and be able to bring out the best teaching in others (whether that is branded as good, outstanding or whatever). They need to follow all school classroom policies and model the behaviours expected in others.

As a result, while I'm confident in my teaching I felt some pressure when planning and delivering my observed lesson this week. It's too easy to become lazy with planning if you only have one or two lessons in a day - other things float up the priority list and you arrive at a lesson only partially planned. This is compounded a little when you're teaching in a multitude of different rooms and don't have a fixed/known set of resources to draw upon, as SLT rarely get their own base classroom.

This all sounds fairly downbeat...
As I'm writing this it seems like I'm highlighting all the challenges of the job and you might think I'm regretting the move... That's not the case in the slightest. I'm really enjoying the job, it's just such a big step from where I was last year to where I am now. I've gone from feeling completely in control as a head of department to just about maintaining control as an assistant head, which brings with it a level of stress that isn't entirely comfortable at the moment. I like to feel that I know what I'm doing and how to do it - currently that balance isn't quite right but it's getting there. I've hit the ground running but the ground was already moving quickly! As time goes on I'm adjusting how I approach each week to ensure that I maintain control and can get further and further on top of things. An indication of this is that I've found time to write this post this week!

I've no idea if this post will be interesting to anyone other than me - frankly that's not the point of it. I'll try to update on my progress as AHT as we continue through the year, mainly to remind myself that I'm making progress! If you have any thoughts or comments I'd be keen to hear them.

Saturday, 12 July 2014

Managing with colours - SLTeachmeet presentation

These are the slides I presented at #SLTeachmeet earlier today. Click here



The info shared in the presentation picks up on aspects covered in these posts:
Using measures to improve performance

Using seating plans with student data

RAG123 basics

As always, feedback is welcome...




Teachmeet Stratford, build it and they'll come

I went to my first ever teachmeet last year at #TMSolihull, then #LeadmeetCov, followed by #TMCov. I thought they were brilliant, but I was aware that I was one of only 2 people at my school that had even heard of teachmeets, let alone been to one. We were missing out on this fantastic free CPD... So with willing offers of help and general encouragement from my fellow teachmeet attendee Rob Williams (@robewilliams79) we decided to organise one...

#TMStratford was born!
Having quickly cleared it with our head (he basically greeted my suggestion with a bemused expression and "sounds intriguing, are they really mainly organised via twitter? hmm..., ok Kev we'll give it a try") I booked the venue and dubbed it #TMStratford for the first time on twitter....

Then came the self doubt...
Hang on... it dawned on me:

  • We'll need to invite a load of people, many of whom I have never actually met in person... 
  • We're hosting it at a school in which only a few people have even heard of a teachmeet, and only one person has ever presented at one.
  • We've never arranged this kind of event before - where/how do we get sponsors, etc?
  • All the people I've seen arranging this kind of thing are SLT, but I'm a HoD - can I get this off the ground and do I have time to do it?
Fundamentally: Will anyone come? Will anyone from our school come? If people come will anyone other than the two of us be willing to present? Will we end up costing the school a load of money for a real flop?

In the face of growing self doubt and uncertainty we decided to press on regardless... "how could it possibly fail!"

We built it and they came!
Just a few months later I found myself stood with a microphone in front of about 75 people, kicking off the first ever teachmeet to be held at our school. We had prizes, flash looking IT provision, nice food laid on, even a cuddly toy to lob at those that overran their presentation length... A couple of hours after that it was all over and Rob and I were being congratulated by the head, other SLT, and attendees both from inside our school and others that had travelled further to be there. I could also flip through the #TMStratford Twitter feed and see loads of positive comments....

People had turned up!
What's more 25 staff from our school had turned up!
We had 16 presentations, including several from staff at our school taking the leap to present at their very first teachmeet!

All of the presentations given are available here: http://bit.ly/TMstratford2014
(About 35 mins of the event was recorded on video too, until the battery ran out! This will be published once I finish tidying it up...)

For a first ever event I was over the moon with it, and still am!

Key things...
I think a few things helped the event to be successful.... 
Firstly incessant publicity. I think I tweeted links to the signup page at least 100 times in the months before the event. I targeted people that I knew had been to other local teachmeets, I sought retweets from big hitter tweachers to increase visibility beyond my reach. We also sent information to other local schools and raised it over and over again in staff briefings.

For speakers whenever someone signed up I asked and encouraged them to present - remarkably lots agreed! I am massively grateful to all of those who took the time firstly to prepare something to say, but then to actually deliver it on the night - the quality of their input really made the event the success it was.

For sponsors I did contact one or two, but I was surprised how many others just got in contact once we got the publicity out there. Perhaps I was just lucky but it became really quite easy to put together prizes and freebies once this kind of thing was offered. Again I am grateful to all of the sponsors that contributed - you can see who they were on the pbworks page here: http://bit.ly/TMStratford

Finally it was the others in the school who came together behind the scenes to make it what it was. The marketing team who developed graphics and flyers, signage on the night, etc; the IT team who dealt admirably with all the tech aspects, including a last minute projector replacement literally finished just 15 minutes before the event started; the catering team who put together some nice food to keep us going while networking in the interval. A heartfelt thanks to these teams who really helped make the event run smoothly.

Definitely doing it again
It was only afterwards that I realised how much had been pulled together to make the event work, and to some extent how stressful it had been. Regardless of the stress it was absolutely worth it, and I'm already thinking about when in the calendar to place the next one, as part of a "programme of teachmeets" that the school is now looking to run both internally and externally.

If you've never been to a teachmeet - find one near you and get along, it's some of the best CPD you'll get, and it's free!

If your school has never hosted one then why not be the person that arranges the first one? If not you, who? If not this year, when?

(Sorry if this post was a bit self-congratulatory, it's more intended to be an illustration that you don't have to wait for someone else to organise something - just go and do it yourself, you'll be amazed at what's possible!)

Feedback welcome as always...

Saturday, 14 June 2014

Powerful percentages

Numbers are powerful, statistics are powerful, but they must be used correctly and responsibly. Leaders need to use data to help take decisions and measure progress, but leaders also need to make sure that they know where limitations creep into data, particularly when it's processed into summary figures.

This links quite closely to this post by David Didau (@Learningspy) where he discusses availability bias - i.e. being biased because you're using the data that is available rather than thinking about it more deeply.

As part of this there is an important misuse of percentages that as a maths teacher I feel the need to highlight... basically when you turn raw numbers into percentages it can add weight to them, but sometimes this weight is undeserved...

Percentages can end up being discrete measures dressed up as continuous
Quick reminder of GCSE data types - Discrete data is in chunks, it can't take values between particular points. Classic examples might be shoe sizes where there is no measure between size 9 or size 10, or favourite flavours of crisps where there is no mid point between Cheese & Onion or Smoky Bacon.

Continuous data can have sub divisions inserted between them, for example a measure of height could be in metres, centimetres, millimetres and so on - it can keep on being divided.

The problem with percentages is that they look continuous - you can quote 27%, 34.5%, 93.2453%. However the data used to calculate the percentage actually imposes discrete limits to the possible outcome. A sample of 1 can only have a result of 0% or 100%, a sample of 2 can only result in 0%, 50% or 100%, 3 can only give 0%, 33.3%, 66.7% or 100%, and so on. Even with 200 data points you can only have 201 separate percentage value outputs - it's not really continuous unless you get to massive samples.

It LOOKS continuous and is talked about like a continuous measure, but it is actually often discrete and determined by the sample that you are working with.
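To make the discreteness concrete, here's a quick sketch in Python (my own illustration, not from any school data system) that enumerates every percentage a sample of size n can actually produce:

```python
# For a sample of size n, the only achievable percentages are 100*k/n
# for k = 0..n: a discrete set, even though percentages look continuous.
def possible_percentages(n):
    return [round(100 * k / n, 1) for k in range(n + 1)]

for n in (1, 2, 3):
    print(n, possible_percentages(n))
# 1 [0.0, 100.0]
# 2 [0.0, 50.0, 100.0]
# 3 [0.0, 33.3, 66.7, 100.0]
```

Even a sample of 200 only ever yields 201 distinct values - nothing in between is possible.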

Percentages as discrete data makes setting targets difficult for small groups
Picture a school that sets an overall target that at least 80% of students in a particular category (in receipt of pupil premium, with SEN needs, whatever else) are expected to meet or exceed expected progress.

In this hypothetical school there are three equivalent classes, let's call them A, B and C. In class A we can calculate that 50% of these students are making expected progress; in class B it's 100%, and in class C it's 0%. On face value Class A is 30% behind target, B is 20% ahead and C is 80% behind, but that's completely misleading...

Class A has two students in this category: one is making expected progress, the other isn't. As such it's impossible to meet the 80% target in this class - the only options are 0%, 50% or 100%. If the whole-school target of 80% accepts that some students may not reach expected progress then by definition you have to accept that 50% might be on target for this specific class. You might argue that 80% is closer to 100% so that should be the target for this class, but that means this teacher has to achieve 100% where the whole school is only aiming at 80%! The school has room for error but this class doesn't! To suggest that this teacher is underperforming because they haven't hit 100% is unfair. Here the percentage has completely confused the issue, when what's really important is whether these two individuals are learning as well as they can.

Class B and C might each have only one student in this category. But it doesn't mean that the teacher of class B is better than that of class C. In class B the student's category happens to have no significant impact on their learning in that subject, they progress alongside the rest of the class with no issues, with no specific extra input from the teacher. In class C the student is also a young carer and misses extended periods from school; when present they work well but there are gaps in their knowledge due to absences that even the best teacher will struggle to fill. To suggest that either teacher is more successful than the other on the basis of this data is completely misleading as the detailed status of individual students is far more significant.

What this is intended to illustrate is that taking a target for a large population of students and applying it to much smaller subsets can cause real issues. Maybe the 80% works at a whole school level, but surely it makes much more sense at a class level to talk about the individual students rather than reducing them to a misleading percentage?

Percentage amplifies small populations into large ones
Simply because percent means "per hundred" we start to picture large numbers. When we state that 67% of books reviewed have been marked in the last two weeks it conjures up images of 67 books out of 100. However that statistic could have been arrived at having only reviewed 3 books, 2 of which had been marked recently. The percentage gives no indication of the true sample size, and therefore 67% could hide the fact that the next step up could be 100%!

If the following month the same measure is quoted as having jumped to 75% it looks like a big improvement, but it could simply be 9 out of 12 this time, compared to 8 out of 12 the previous month.  Arithmetically the percentages are correct (given rounding), but the apparent step change from 67% to 75% is actually far less impressive when described as 8/12 vs 9/12. As a percentage it suggests a big move in the population; as a fraction it means only one more meeting the measure.

You can get a similar issue if a school is grading lessons/teaching and reports 72% good or better in one round of reviews, and then sees 84% in the next. (Many schools are still doing this type of grading and summary; I'm not going to debate the rights and wrongs here - there are other places for that.) However the 72% is the result of 18 good or better out of 25 seen, and the 84% is the result of 21 out of 25. So the 12 percentage point jump is due to just 3 teachers flipping from one grade to the next.

Basically when your population is below 100 an individual piece of data is worth more than 1% and it's vital not to forget this. Quoting a small population as a percentage amplifies any apparent changes, and this effect increases as the population size shrinks. The smaller your population the bigger the amplification. So with a small population a positive change looks more positive as a percentage, and a negative change looks more negative as a percentage.
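The amplification is easy to quantify: each single data point is worth 100/n percentage points, so the same one-item change looks bigger and bigger as the population shrinks. A small sketch (using the hypothetical numbers from the examples above):

```python
# Percentage-point jump caused by `extra_items` more items meeting a
# measure, out of a population of n. Each single item is worth 100/n points.
def point_jump(extra_items, n):
    return 100 * extra_items / n

print(point_jump(1, 12))   # 8/12 -> 9/12: one more book marked
print(point_jump(3, 25))   # 18/25 -> 21/25: three teachers changing grade
print(point_jump(1, 500))  # in a population of 500 one item barely registers
```

One extra book out of 12 moves the headline figure by over 8 points; three teachers out of 25 move it by 12 points; yet in a population of 500 a single item shifts it by only 0.2 points.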

Being able to calculate a percentage doesn't mean you should
I guess to some extent I'm talking about an aspect of numeracy that gets overlooked. The view could be that if you know the arithmetic method for calculating a percentage, and you do that calculation correctly, then the numbers are right. Logic follows that if the numbers are right then any decisions based on them must be right too. But this doesn't work.

The numbers might be correct but the decision may be flawed. Comparing this to a literacy example might help. I can write a sentence that is correct grammatically, but that does not mean the sentence must be true. The words can be spelled correctly, in the correct order and punctuation might be flawless. However the meaning of the sentence could be completely incorrect. (I appreciate that there might be some irony in that I may have made unwitting errors in this sentence about grammar - corrections welcome!)

For percentage calculations then the numbers may well be correct arithmetically but we always need to check the nature of the data that was used to generate these numbers and be aware of the limitations to the data. Taking decisions while ignoring these limitations significantly harms the quality of the decision.

Other sources of confusion
None of the above deals with variability or reliability in the measures used as part of your sample, but that's important too. If your survey of books could have given a slightly different result if you'd chosen different books, different students or different teachers then there is an inherent lack of repeatability to the data. If you're reporting a change between two tests then anything within test to test variation simply can't be assumed to be a real difference. Apparent movements of 50% or more could be statistically insignificant if the process used to collect the data is unreliable. Again the numbers may be arithmetically sound, but the statistical conclusion may not be.

Draw conclusions with caution
So what I'm really trying to say is that the next time someone starts talking about percentages try to look past the data and make sure that it makes sense to summarise it as a percentage. Make sure you understand what discrete limitations the population size has imposed, and try to get a feel for how sensitive the percentage figures are to small changes in the results.

By all means use percentages, but use them consciously with knowledge of their limitations.


As always - all thoughts/comments welcome...

Saturday, 3 May 2014

Policies not straightjackets

I'm starting to lose track of the number of times I've heard or seen people say that they can't do or try something because it's out of line with their school or department policy. It really worries me when I hear that - it means they feel unable to innovate or experiment with something that could be an improvement.

Most often for me it's linked with RAG123, but I've seen it at other times in school, and all over the place on twitter too. It normally goes something like this:

  • Person A: "Why not try this (insert suggested alternative pedagogical approach here)?"
  • Person B: "That sounds great and I'd love to, but our policy for (same general area of pedagogy) means I can't try it."
Frustratingly this is usually where the discussion ends - the opportunity for person B to try something new that might improve their practice and improve outcomes for their students is squashed.

More specific examples I've actually seen/heard over the years include:
A: "For that lesson why not try using a big open ended question as your learning objective that all students work towards answering?"
B: "I can't because we're required to have 'must, should, could' learning objectives for all lessons"

A: "Could you re-arrange the tables in your room to help establish control with that difficult group? Perhaps break up the desks to break up the talking groups?"
B: "No because our department policy says we have to have the tables in groups to encourage group work."

A: "Why not try RAG123 marking?"
B: "I can't because our marking policy requires written formative comments only."

What are policies for anyway?
Policies should be there to provide a framework of good basic practice that all in a given organisation can use as a bare minimum to baseline their practice. However there is a difference between a framework to guide and a set of rules to be applied rigidly.

For example a policy that says that learning objectives must include suitable differentiation for the class being taught is substantially different to saying that all lessons are required to have Must, Should, Could learning objectives. One is the essence of what we really want; the other is a single, rigid example of how this might be achieved. One allows the teacher to use their professional judgement to set objectives in a way that is appropriate for their relationship with that class and the material being taught; the other applies a blanket approach that assumes that every lesson by every teacher with every class is best set up in an identical fashion.

For me policies should set out a standard that is the bare minimum to ensure that the students get a good deal in that aspect. For example if a teacher is unsure of how often to mark their books the policy should clarify the minimum requirement, it should also detail what minimum information is needed in order for it to count as good marking.

However policies should never stifle innovation. Should never prevent the trial of something that could be even better. They also shouldn't dictate set structures that can't be deviated from under any circumstances - it should always be allowed to do it better than laid down in the policy!

Teachers as professionals should always have the option to deviate from the policy if it will produce better outcomes for their students in that particular situation (and if this becomes a consistent improvement then perhaps the policy should change to incorporate the deviation so that everyone benefits). However as professionals they should be both able and willing to justify a decision like this if questioned. Similarly if they have deviated from policy to try something that turns out to have not been so good then as professionals they should acknowledge this and return to the policy.

Consistency not uniformity
The bottom line is that policies should ensure a consistency in quality of experience, which mustn't be confused with a uniformity of experience. Quality in education is about high standards, high expectations and about professionals making informed decisions about how to get the best from the students in front of them. Quality is not about every teacher doing exactly the same thing in exactly the same way, if it was we could record model lessons and just play them to students, or just learn scripts to follow.

Uniformity and rigidity isn't the answer to the multi-faceted challenge that teaching presents; we can't always assume that one size fits all. Therefore policies should never be straightjackets. Policies should be guidelines and bare minimums, with innovation and improvement specifically allowed and encouraged.

Comments always welcome - I'd be interested to know your thoughts. :-)


Saturday, 5 April 2014

Blogging birthday post!

My first ever post was published a year ago!! I remember being nervous about putting my thoughts out there for the world to see...

1 year later and this will be my 45th published post. More astounding to me though is that people have actually read any of them! This little blog has now clocked up almost 25,000 visits! I'm flattered that anyone takes time to read this collection of ideas that rattle round my head, so if you're reading this - THANK YOU - I know you don't have to, and this has never been about self promotion.

I was asked recently how many words I'd written as part of this blog - I worked it out the other day and was shocked to find that even excluding this post it's well over 46,000 words! (Note to self - must work on brevity!) When I discovered this massive word count I wondered which words I'd used, so I used Wordle to summarise it - I was quite pleased with the result...

Alongside this I've found myself with almost 800 followers on twitter - which also amazes me!

So what is it all about?
During the year I've blogged about various topics, from leadership to homework, to SOLO, to marking - you can find links to these topics on the right of the blog page. More recently I've been a bit preoccupied with RAG123 marking, which has dominated posts since November, but frankly it's such a powerful bit of practice I think it has deserved the priority I've given to it. (By the way - if you've not heard of RAG123 then have a look at my posts - I promise you won't regret it!)

As an indirect result of twitter and blogging, and the reflective, innovative practice they encourage, my department and I have:

  • Developed feedback and marking processes
  • Innovated RAG123
  • Created departmental CPD open days
  • Written on just about every surface in the classroom with magic whiteboard, chalkpens, etc.
  • Analysed summative tests to make them formative using various tools.
  • Created graffiti walls for revision
  • Made question grenades
  • Developed the use of QR codes, bitly links and videos for flipped learning in our department
  • Created maths plasters
  • Developed seating plan formats to show student data easily
  • Introduced SOLO to our school
  • Used formats to share learning objectives with KS5
And there are loads more things - I'm amazed how long that list is already.


Has it been worth it?
Without a doubt yes! 

Of course it takes time to write posts - sometimes more than others, but it's actually never felt like a drag. I've also surprised myself in that I've never once struggled to find a topic to write about - stuff just happens and I think "that'll make a good blog post." I wrote a post (here) in June considering why I had bothered getting involved in blogging and twitter and whether it was worth it. Having read back over that post I don't think I can improve on it, I've honestly never looked back since starting.

It has been a bit of a journey this year; discovering twitter and blogging as a CPD tool has been a massive eye opener. It's transformed my practice in many ways, and helped me to lead my department with new ideas. Comments and feedback from the blog and from tweets have helped me to improve my classroom practice, and writing about it has helped me to be more reflective about it too. Furthermore being on twitter has helped me to be more connected to the wider world of education than ever before. I'll continue blogging and using twitter because it makes me better at my job.

A new phase begins...
This week has been fairly momentous for me. In addition to the anniversary of blogging I was successful in an interview on Monday, and will now be joining my current school's SLT as an Assistant Headteacher from September.

I can honestly say that the combination of blogging and twitter helped me to be a much more credible candidate for a role on the school's SLT than I would have been without these connections. It really came home to me when I sat down to prepare for the interview and realised that I was already fully up to speed with the latest thinking about lesson observations, assessment structures, learning theories, inspection frameworks, etc. In fact in some aspects I was more in touch than the majority of the existing SLT - all through keeping up to date with my twitter feed and reading other people's blogs!

I was also able to ask for advice and tips on my application for this SLT post from contacts in my personal learning network who I didn't know at all last year! There were a couple in particular who helped by proofreading my application form and offering encouragement and suggestions. I'll not name you in case you get inundated with requests from others, but you know who you are - thank you!

I fully intend to keep blogging as I make the transition from middle to senior leadership, and hope that it continues to shape my practice, and contribute to the future of my school and its students.

So after all that I think this post is really a thank you to anyone who has ever taken the time to read, comment, tweet or follow as part of my journey in the last year. I've certainly enjoyed it and think I'm a better teacher and leader as a result... I hope you've found some of it useful!

Here's to another year - I wonder what will develop in the next 12 months...!

Saturday, 15 March 2014

What can education learn from Quality Management?

Bit of a big post this - it's been evolving in my drafts file for a while now...

I learnt a lot about management and quality during 10 years working in the automotive industry. Due to the complexity of the products, this is an industry that has been at the forefront of quality management since the 1950s. Cars are arguably the most complicated consumer product in the world. They are built in vast numbers and require tens of thousands of components to work together when operated by relatively untrained drivers in a massive array of conditions. What's more, failures have the potential to be both catastrophic and fatal. They are also almost all built to a very tight budget, meaning that waste needs to be eliminated from all parts of the business to allow a car company to turn a profit.

The reliability of modern cars is truly gobsmacking, and it is because the automotive industry are global leaders in the field of quality management. Versions of the approaches pioneered in the automotive sector are now deployed across manufacturing, and are even being used as models for business management in many non-manufacturing settings. However, even the most forward-thinking of these alternative applications are usually about 5-10 years behind the latest in the automotive world.

I'm sure having got this far you're now thinking "Kev's lost it, he's gibbering about cars, I thought he was a teacher and this was a blog about education?" Well you might be right! But I really do think there is much to learn about the idea of quality in education.

Don't panic, I'm not about to insist that schools are like production lines, propose time & motion studies or some hideous Fordian uniformity to classrooms; I guess I need to explain where this is coming from in terms of how quality management evolved... (but if you want to cut to the chase then scroll down to the "lessons for education" heading)

An evolution of practice
All quality assurance or management systems in place in any industry today have a lineage that traces back to the automotive sector in post-war Japan. Before this point it was all about "quality control" - essentially inspection at the end of the production line to check that things were correct. This was OK, but it was only partially effective at catching potential problems. You physically can't check for everything. As a result some things slipped through the net and caused issues in the field.

From control to assurance
To improve on the partially effective "control" system the emphasis shifted to "quality assurance". The premise was that rather than inspecting at the end of the line, where tests were limited and rectifying errors was expensive and slow, why not do that inspection earlier in the process? Perhaps even before the parts are fitted? Or even delivered? The automotive firms pushed chunks of their inspection processes back up the production line, all the way into their suppliers' factories. The idea being that if all the parts arrived certified as "good" then the resulting car must be "assured" to be good. They still inspected a smaller number at the end of the line, but quality improved as faulty parts were often detected before they were fitted.

However there was still a level of variability inherent in the design. Humans build it for a start, and we make mistakes! Even certified parts have to be supplied to geometric tolerances that cause variations, and other physical, chemical or even biological variations can creep in when you make large numbers of components or large numbers of vehicles. (Biological - really? Well for example loads of car components are made from rubber, which is a natural product. It literally grows on trees! As such, until relatively recently when it has become better understood and controlled, rubber components on cars were subject to variability depending on the season in which the natural rubber was harvested! As another example the quality of some car paint finishes can be affected by the type and quantity of deodorant used by the operators working in the paintshop!)

Error proofing
The next development in quality was to start to manage it from the outset. To do things with the design that prevented errors, or make the performance of the completed vehicle tolerant to variability of its components. Japanese terms like "Poka-yoke" are now commonplace in car design - it means "mistake proofing" and helps to remove human errors on the production line. For example, if an operator has to connect 3 different electrical plugs in the same area of the car, each plug should be designed such that it only connects to its correct socket, leaving no room for human error.
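The same mistake-proofing idea shows up in software design too. Here's a minimal sketch in Python (my own illustration, not from the automotive world): each plug type declares the one socket type it fits, so connecting the wrong pair fails immediately rather than slipping through as a latent fault. The class names are made up for the example.

```python
# Poka-yoke sketch: give each connector pair its own type, so a plug
# simply cannot mate with the wrong socket. In hardware this is done
# with keyed housings; in software a type check plays the same role.

class PowerSocket: pass
class SensorSocket: pass

class Plug:
    # Subclasses declare the single socket type they are allowed to fit.
    fits = None

    def connect(self, socket):
        if not isinstance(socket, self.fits):
            raise TypeError(f"{type(self).__name__} does not fit "
                            f"{type(socket).__name__}")
        return f"{type(self).__name__} connected"

class PowerPlug(Plug):
    fits = PowerSocket

class SensorPlug(Plug):
    fits = SensorSocket

print(PowerPlug().connect(PowerSocket()))  # PowerPlug connected
```

The point isn't the code itself but the design choice: the error is prevented at the moment it would be made, instead of being hunted for at final inspection.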

Assurance becomes management
By taking the focus away from inspection/control, wherever it is in the process, and looking in more detail at the systems and processes, quality becomes managed rather than assured. This means designing OUT variability, designing IN error proofing, planning for quality from the very start of the process rather than applying it as an inspection at some point.

Continual improvement requires empowerment.
However perhaps the most powerful thing to come out of the Japanese automotive industry was the concept of continuous improvement, and with it the empowerment of everyone in the business to make suggestions to improve the product. This is often referred to by the Japanese term "Kaizen" (literal translation "improvement" or "act of making bad points better"). Toyota used this word to brand improvement activities in its factories and visiting western engineers and managers adopted the word.

At the heart of Kaizen is a philosophy that improvement must be led from the top, but not directed from the top. Every worker in the factory has their part to play in the quality of the end product, and as such every worker has the right to make suggestions about how to improve it.

Vitally this includes the idea that the person that fits the brakes all day becomes an expert in fitting brakes. As such this person is very well placed to make suggestions about how to minimise errors when fitting brakes. This applies across the whole vehicle and as a result the shop floor assembly workers have a big voice in improving designs and optimising the processes.

LESSONS FOR EDUCATION
Firstly, we're currently broadly applying the "quality control" type of management. We inspect at the end of the process, both by assessing students' progress through final high-stakes exams, and by the use of increasingly high-stakes observations/assessments for teachers (I say increasingly high stakes due to the imminent explicit linkage to pay structures in the UK), and high-stakes inspections for schools.

The best schools will use more of a quality assurance model. "Good" practice will be embedded in school and departmental policies to reduce variability in practice between staff. However too often these are policed and enforced through inspection (e.g. observations, work scrutiny, learning walks). To take this on to the next level the structures need to be put in place to make good practice, and therefore success of the students, inevitable.

Developing systems in which good performance becomes inevitable can only come if the people doing the processes are inspecting it themselves. It becomes less about doing it well because you are being watched, and much more about not needing to be watched because you are watching yourself.

OK, that last paragraph sounds like a load of idealism, but if we pick up the Kaizen model in education and truly empower teachers to improve their practice, they will feel a much greater ownership of it. By encouraging this ownership we make it much more likely that they will do it.

For example, who is best placed to formulate a marking policy that is workable for a teacher with a full mainscale timetable? It certainly isn't someone that only teaches a partial timetable. Setting out a basic framework that includes the key characteristics of good marking, and then asking teams of all staff to develop how this can be done manageably, would create a policy much more likely to be adhered to.

Just this week I was told about someone who wants to try #RAG123 marking (not heard of RAG123? It's awesome - see here!). They aren't allowed because it doesn't conform to their school's policy. Under their school's policy this person recently spent over 2 hours marking just 7 books, and they are saying that the quality of their planning is suffering due to the volume of marking they have to do! Nobody on a full timetable could possibly sustain that type of marking alongside teaching, planning and having a life. This is a clear example of a solution being imposed on people without thought to actual delivery.

Similarly, it is really important that there is a route and process for all staff to highlight where the school is not working efficiently. For example, are there flaws in the school's sanctions and rewards system that mean a group of teachers are struggling to use them effectively? Feedback loops are important in industry, and should be in education too. Vitally though, if feedback is sought and given there MUST then be action, with support from the top, to address the concerns and improve the situation.

Where is the value added?
It is recognised in the automotive industry that the only people actually adding value are those building the cars. They take the components and combine them into something that can be sold at a profit. Everyone else and every other process in the organisation is an overhead that chips away at profits. They may be absolutely needed as part of the long term business, but they still cost money. As such these other processes need to be as efficient as possible, and mustn't interfere with the effectiveness of the production line.

In schools it would be too simplistic to suggest that the only people that add value are the teachers, as it's the total experience at school that's important, and not always just the lessons or exam results. However systems in the school absolutely mustn't make the jobs of those that interact with students harder to do well.

Consider how easy it is to get accurate data about a specific student... Attainment, targets, behaviour, attendance, SEN, FSM, IEP, etc. Is it all in one place or in lots of different places? Is it easy to download a class list of information in a usable format? I know I've worked in schools where each bit of data is stored in a different place, and in different formats.
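To make the point concrete, here's a small sketch of the kind of glue work staff end up doing by hand when each system exports its own file. The field names and data are invented for the example; in practice these would be exports from whatever MIS the school uses.

```python
import csv
import io

# Hypothetical exports: attendance and attainment live in separate systems,
# each keyed by student ID but in its own format.
attendance_csv = "student_id,attendance\n001,95\n002,88\n"
attainment_csv = "student_id,target,current\n001,A,B\n002,B,B\n"

def load(text):
    """Read a CSV export into a dict keyed by student ID."""
    return {row["student_id"]: row for row in csv.DictReader(io.StringIO(text))}

def merged_class_list(*sources):
    """Combine any number of per-system exports into one profile per student."""
    combined = {}
    for source in sources:
        for sid, row in source.items():
            combined.setdefault(sid, {}).update(row)
    return combined

profiles = merged_class_list(load(attendance_csv), load(attainment_csv))
# profiles["001"] -> {'student_id': '001', 'attendance': '95',
#                     'target': 'A', 'current': 'B'}
```

A well-designed system would hold all of this in one place to begin with; the script only exists because the data is fragmented - which is exactly the kind of overhead quality management would design out.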

I've heard regularly that schools do well by prioritising Learning and Teaching. The question "If it's not improving Learning and Teaching then why are we doing it?" pops up as part of this kind of thing. However how often is that really applied across ALL systems in a school? It may well be applied to guiding CPD, or some directed time activities and meeting agendas, but is the attendance system actually optimised to stop it interfering with learning and teaching? Is the behaviour system an add-on administrative activity or an integral part of learning and teaching?

Loads of education practice, particularly on the administrative side, is based on finding a system that basically works, and then iterating as needs change. Sometimes this creates a real monster of a system with add-on bits and extra files all over the place. For example, all schools I've been in have slightly different ways of managing student data; sometimes this is based on the specific skills or preferences of the staff involved in creating them, or just on how it's been done for years. Sometimes these systems are brilliant, other times they are ineffective. New staff come in and have to learn the foibles of a particular system, and then all the various "work around" methods to get key bits of data in the right format.

It is incredibly rare to find a system that has been completely designed from the ground up to do its job in a way that is completely aligned with the needs of all of the users in the organisation. Process mapping and optimisation of processes are effectively alien terms in education, but they really shouldn't be.

In summary - we need to start actively managing quality
Basically what I'm trying to illustrate is that any push for improving "quality" within a school needs to aim far more at the quality management end, which is the cutting edge of quality practice; as opposed to the quality control end which is a blunt and inefficient instrument.

We need less direct inspection to enforce systems from the outside, and more design of systems to make good performance inevitable. We mustn't invent extra processes to fix problems; instead we should develop systems that simplify the job rather than making it more complicated.

Like choosing to walk across the grass instead of around the path, people only deviate from policy because there is a shortcut. We need to seek out and use the expertise of the people that will actually work with the policies the most to help redesign them and eliminate the shortcuts! If we make it hard to do the right thing then we can't be surprised if someone does it wrong. The purpose of good leadership and management must be to design an environment where we make it as easy as possible for our staff and students to do it right. That way success becomes inevitable.

As always I'd welcome any feedback and comments... :-)