Friday 21 November 2014

Deliberating on deliberate practice

There is quite a lot written by others more qualified than I am about students using deliberate practice to learn or perfect a skill. However, I've been thinking recently about something different: practice that is deliberate...

Whatever you do, do it deliberately
It can become all too easy to drift along and do things because it's always been done that way, because you think it's what's expected, or even because that's what the policy says. The problem comes when we forget why we're doing something, and just do it without thinking, without questioning.

Any one-size-fits-all approach or policy risks losing sight of the 'why' when deployed at an individual teacher, class or lesson level. Bloom's might be exactly the right approach to setting objectives for a particular lesson, but probably not for ALL lessons. Interactive, active learning might be right at times; traditional chalk and talk might be right at others. I could go on with these...

What I'm getting at is that as professional teachers we should actively seek out as wide a range of methods and techniques for teaching as we can for all aspects of practice. We then need to use our professional judgement to select from this range, and to choose the right thing to do based on the needs of the class in front of us, and on a knowledge of our own skills and limitations. To do something because someone else has told us to do it (yes even if SLT have told us, or if it's what we believe the dreaded Ofsted will expect) is to abdicate our professional responsibilities.

Don't get me wrong, schools do need to have policies to act as guidelines (not straitjackets) and to set out minimum expectations. Similarly, teachers at all levels should be able to offer suggestions to others at all levels. However we need to use these suggestions and guidelines as the starting point for a professional decision, not the end point; it can sometimes be right to ignore advice.

As a professional I would hope that a teacher both feels able in themselves, and feels empowered by their leadership, to take a deliberate decision about how to approach a lesson or other aspect of teaching. Actively choosing methods for differentiation, style of delivery, types of activities, etc. is vital. Choose because you believe as a professional that it's the right thing to do based on your knowledge of the class. When you take that decision then be willing to defend it if questioned, and be willing to acknowledge if your decision wasn't quite right. Reflecting and improving is part of taking responsibility as a professional too.

Basically teaching practice should be a conscious, deliberate act. Decisions need to be taken actively rather than received passively, and improvements actively sought.

If you're a school leader then ask yourself: are you empowering your teams to take professional decisions, or giving them rules to follow? If someone deviates from policy, do you start by asking them why they took that decision, or by insisting they return to the policy? Have you questioned whether the policy is any good in the first place? Is it possible that their way was better at that time and in that context?

This can even extend to demeanour around school, or in our personal lives. Do you give off a frosty persona? Is that deliberate? Do you actively choose to be positive in your outlook? Is that deliberate?

I believe at times we all need to stop and consider whether our practice is deliberate. At home, at school, as a leader, as a teacher, at whole-school, department, class or lesson level... are decisions taken for the right reasons?

Is your practice deliberate?

All thoughts welcome as always...

Saturday 15 November 2014

A ragging birthday!

Time flies!
Almost exactly a year ago I wrote my first blog post about RAG123 (find it here), which followed a single-week trial of an idea that seemed illogical: mark more often, but write less, improve feedback and reduce workload. Remarkably it worked - students responded positively, I felt more in control of my marking workload and my lessons were more effective. I still haven't taken a single pupil book home to mark since I started RAG123 over a year ago, but ALL of my books are marked up to date.

I've since written loads of posts on RAG123 (all found here), and tweeted prolifically on it over the past year. I know I am guilty of being a bit evangelical about it, but I do feel justified in my enthusiasm. The evidence suggests that using this approach to marking and feedback (and planning) really does have a beneficial impact both on the students and the teachers involved. I get fantastic feedback like this on a regular basis:

No going back!
As I know I can be a bit biased on this, throughout the year of using and developing RAG123 I have regularly asked for negative feedback or stories from people who have tried RAG123 but stopped. From the responses I have received, only a couple of people have stopped once they'd tried it. In these cases it was never because they didn't think RAG123 was beneficial; it was due to some external factor such as illness or a change in role. In every case where someone said they'd stopped, they followed up with a comment that they would start again as soon as their circumstances allowed. I remain open and receptive to constructive criticism of RAG123 and want to retain balance on it. To be honest, though, negatives only really come from people who have never tried it, or haven't really understood the idea. To date the overwhelming evidence is that once you try it you will see such benefits that you won't want to go back.


Going national and international
As well as individual teachers using RAG123, there are whole departments adopting it, and I know of a couple of schools that have adopted RAG123 as a central part of their marking policies (one has even reported it to me as a contributing factor in their school's journey out of special measures). I'm constantly being contacted by people who are sharing it within their departments, their schools or via teachmeets across the country. In fact it's also gone international, and not just in English-speaking countries: I know it's been translated into Welsh (#COG123), and it is also in the process of translation into Swedish...
 

So... a year later what have I learnt?
I've written, thought and learnt a lot about RAG123 over the past year. While the core idea remains exactly as described in the original post, there are a number of subtleties that I have seen and picked up over the last year. I've probably tweeted most of them at some point or other, but it's also about time I shared them all in one place. Along the way there are a couple of confessions I should make too...

Top 10 tips to get the most out of RAG123:
1. There are no strict rules for RAG123! Each teacher should take the core principle and make it work for them, their students, their school, their workload.

It makes no real difference whether you use the colours or the numbers for understanding or effort. It also doesn't matter if you need more than 3 levels for each aspect to fit with some other system (I know of at least one RAG1234 system being used, and there is also a RAGB123 out there). Actually you could call it anything; ABC123 would work just as well.

However I do personally think colours are emotive and therefore can add to impact, which is why my preference remains RAG for effort as that's the bit I want the students to identify with the most (though for a cautionary note on colours see point 5).

2. While the process I put forward for RAG123 involves marking every day, there is no actual necessity to mark every day or every lesson. However without a doubt the more often you can manage it the more effective it will be.

Personally I try to RAG123 between every lesson but don't manage it all the time (still true even now I'm on an apparently empty SLT timetable). What you gain from doing it after every lesson is the opportunity for RAG123 to feed into planning for the next lesson, thereby improving differentiation and the impact of the next phase of teaching (for more on RAG123 as formative planning see here). I now find it much harder to plan if I've not had the chance to RAG123 my books.

3. It's the two-dimensional nature of RAG123 that gives it its strength. Separating effort (student controlled) from understanding (teacher influenced) is really important.

If a student is not trying then even the best teacher will struggle to help them learn. Conversely if the student is working as hard as they can but not learning then it is the teacher that needs to do something different. This is why it's simply not the same as a plain traffic light assessment of understanding (more on that in this post). Highlighting the impact of their effort is important to students and makes direct links with other powerful things like growth mindset.

I often get asked how to measure effort, or how I decide exactly what constitutes a "green", "amber" or "red" effort. My answer is always the same - the rating should be scaled to the message you want that individual student to receive. If you think they're cruising then it's amber; if they're going flat out then it's green. It doesn't matter that one student has done half a page vs another doing four pages... If you know from the lesson that the student with half a page struggled and persisted for the whole lesson then it's green; if the four pages are all well within the ability of the student then it's amber. The brightest, best behaved students can certainly get reds if they are cruising (and they really don't like it so improve almost instantly!).

4. RAG123 doesn't and can't completely replace more detailed feedback, and I've never said that it should. Students need this, so you still need to write more at times. To help with this it's good practice to aim to write an extra comment in 10-15% of books each time you mark. This hardly takes any extra time and after a week or so you can easily cover the whole class. Alternatively perhaps that feedback is verbal - which is fine too, though it falls a little more foul of the dreaded "evidence for inspection". For me, if you and the students are able to talk to an inspector about the feedback given (verbal or otherwise) and how it helps them to improve then that's perfectly valid feedback, but I do acknowledge that it takes a bit of confidence to fly without the safety net of written evidence.

5. There is likely to be a colourblind student in every class group.... This was a big penny that dropped part way through the year, and I give thanks to @colourblindorg for the pointers on this. Clearly this causes tension for a system that has colours at its heart. However there is NO barrier to using RAG123 with colourblind students so long as symbols (e.g. "R", "A" or "G") are used and not simply coloured blobs/dots or even different coloured ink. Colourblindness is a big limitation for the various "purple pen of progress" or "green for good, pink to think" concepts that abound across teaching policies and #chat discussions. Using different coloured pens becomes irrelevant if colourblind students (and teachers) can't reliably tell the difference.
Colourblindness can render unlabelled R,A,G unintelligible to an average of 1 student in every classroom
The key message here is that all RAG123 posters, stickers, guidance must always have a way for colourblind people to distinguish between the colour designations - simply labelling R, A, G does this perfectly. Colours are still powerful and useful for the non-colourblind majority so I'm still in favour of using colours, but it's important we make them accessible to those that can't distinguish between them.
Just labelling R,A,G as shown above retains full accessibility for colourblind students.
6. RAG123 is absolutely a leap of faith, and sceptics take a lot of convincing!
Perhaps my biggest confession here is that despite sharing RAG123 nationally (& internationally) and even having it adopted by whole schools in other parts of the country I have not yet got it embedded across my school, or even widely used outside of the maths department.

The reasons for this are many... Perhaps I have been a little more shy about pushing RAG123 within my school with people who may not be actively looking for new ideas (compared to people at teachmeets, on Twitter or reading blogs, who clearly are looking for and open to new ideas). There's also the fact that until September I was 'only' a head of maths and my influence only reached so far within school. Even now I'm on SLT there is someone else on the team who has the clear remit of improving marking and feedback, and I don't want to step on their toes. I've spoken to them about it and actually they like the idea, but can't quite build it into a whole school position yet due to other priorities. While I do find this a little frustrating I want to emphasise that this is not a criticism of my colleague(s) across my school. They are all working immensely hard and have a real desire to do the best for the children in our care; they simply choose to do this in a different way to me, and I have yet to fully do the hard sell on RAG123.

This also in no way suggests that I don't have faith in RAG123. Personally I feel my teaching would suffer massively if I had to stop, and I think most people's teaching would benefit from adopting it, but I also recognise that change is difficult and it's not easy to try something like this. I know I'm not the only one who faces this challenge: Damian Benney, author of probably the second most read blog about RAG123, is a Deputy Head at his school but has struggled to get colleagues to try it, as detailed here. We're both completely sold on RAG123, and have had success sharing it across the country, but changing minds more locally can be really hard.

7. Students need support with RAG123 to make the self reflection aspect meaningful. I've written before about how difficult reflection is so won't go into it again in this post (find more here and here); however I will emphasise that the provision of sentence starters or other scaffolding to prompt more meaningful comments really does help. It's also vital that students are given time in the lesson to review and respond to comments - if you don't demonstrate that it's important, they won't treat it as important.

8. Relating to the last sentence in the paragraph above... Marking and reviewing books as regularly as RAG123 allows becomes a really powerful way to demonstrate to the students that you care what they do every lesson. This is a big point and shouldn't be underestimated. There are some students who don't like RAG123; when you ask them it's usually because they have nowhere to hide in terms of effort. The vast majority of students REALLY like RAG123; when you ask them it's because they know for certain that the teacher is taking an interest in what they do each day.

9. Even bad RAG123 is still quite good. I'll be absolutely honest: compared to the examples I've seen on Twitter, my own practice of RAG123 is nowhere near the level that some people have adopted. In all honesty I don't know where some of the teachers that do this find the time to do anything other than school work - maybe they don't? The detail some go into with RAG123 marking is almost to the level you'd expect from a more traditional marking methodology. For me this is awesome but a little overwhelming, and I wouldn't want others to think that if they can't sustain that level they are doing it badly.

What I do know is that my books are basically marked and I know the students in front of me extremely well as a result of talking to them in lessons and using RAG123 with them regularly. I also know that the lessons I plan are tuned to the progress that the students make each lesson, and therefore the marking that I do isn't pointless (see more on my thoughts about pointless marking here). I'll gladly argue my case that the progress students make is evidence that my marking and feedback is effective, even if it only results in a better planned next lesson rather than reams of written evidence in books. This will be a contentious point for many, and some may disagree completely, but that's true of so many aspects of teaching.

10. RAG123, as with all good teaching, simply comes down to promoting good levels of effort from the students and good planning from the teacher. Initial users of RAG123 will often ask if a student can get an R1 (low effort, excellent understanding), or a G3 (high effort, low understanding). The answer in both cases is of course they can. For me the effort ratings should provoke the students to question what they are doing (can they try harder, can they maintain their current effort across a sequence of lessons) and the understanding ratings should provoke the teacher to question their support/extension/differentiation for the student or planning for the class as a whole.
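To make the two-dimensional idea concrete, here's a minimal sketch in Python (my own illustration only - RAG123 itself is done by hand in exercise books, and the student names, ratings and function names here are entirely hypothetical). It shows how the effort dimension points back at the student while the understanding dimension points back at the teacher:

```python
# Hypothetical sketch of the two-dimensional RAG123 rating. Effort (R/A/G)
# is student-controlled; understanding (1-3) is teacher-influenced, so the
# two dimensions trigger different follow-up actions.

ratings = {
    "Alex": ("G", 3),   # flat-out effort, but not yet understanding (a G3)
    "Sam":  ("R", 1),   # excellent understanding, but cruising (an R1)
    "Jo":   ("G", 1),   # high effort and secure understanding
}

def follow_up(effort, understanding):
    """Who needs to act on this rating?"""
    actions = []
    if effort in ("A", "R"):
        actions.append("student: effort conversation, aim for green")
    if understanding in (2, 3):
        actions.append("teacher: adjust support/differentiation in the next plan")
    return actions or ["maintain this effort across the sequence of lessons"]

for name, (effort, understanding) in ratings.items():
    print(f"{name} {effort}{understanding}: {'; '.join(follow_up(effort, understanding))}")
```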

RAG123 and the future
So, a year in, what's next? For me it's simply to keep using RAG123 - I would be a worse teacher without it, and I know other users feel the same.

Sceptics will often ask for evidence that it works before trying it. I understand this but am also frustrated by it. I've tried to put together some evidence (see here) but it gets confounded by other factors, and the relatively small sample size and other influences make it ripe for taking shots at in terms of robustness of data. To accumulate enough hard data to support it (with a robust control group for comparison) would take a spectacularly long time, and frankly I think it's simpler than that...


  • RAG123 costs nothing - there are no subscription fees!
  • RAG123 can be started and stopped overnight, all it takes is a decision to do it.


As such I'll reiterate the challenge that I issue whenever I present this at a Teachmeet... Try RAG123 with a class for 2 weeks. If you don't see a benefit then stop... If you do stop then that's absolutely fair enough, but please get in touch to tell me why as I'm keen to understand if it has limitations! Similarly if you find it useful then please spread the word by challenging others!

Comments are always welcome, happy Ragging!

Saturday 4 October 2014

First month as SLT

A few reflections on my first full month as part of SLT since I started my Assistant Headteacher role at the start of September...

If I had to give a single word summary it would be "busy", and perhaps most telling as part of this is that I originally titled this as "First week as SLT", but never even got close to finishing it; writing the title and pressing save was as far as I got! Anyway, these are some of the thoughts that crossed my mind during the past few weeks...

Am I still a teacher?
The first thing that hit me is that I'm now not teaching very much. It's almost a third of a main scale teacher's timetable, less than half of what I was teaching last year as a head of department. There are whole days when I don't have a lesson at all, I also no longer have a tutor group.

As a result, as I started term I struggled a bit with the fact that I'm spending so little time in front of classes - the job balance is massively different and teaching is now the minority of my week. It almost causes me to wonder if I'm still a teacher. At the core I know I am, and the other things I'm now doing can have a wider impact on more students than I did before, even as a head of department. I'm loving the new pastoral side of my role: getting an overview of the college team I now lead, dealing with our students and seeing the progress they're making is brilliant.

Of course this light timetable is one of the things that can be quite divisive in schools: the majority of the teaching staff see SLT apparently swanning about on a light timetable, for whom actually teaching a lesson seems to be the exception.

In many jobs there is the visible bit that outside observers see, and the hidden bit that is only really visible to the person doing the job. All teachers have the visible bit when we're stood in front of a class teaching a lesson, but the invisible bit is planning and marking - hence the popular misconceptions about teacher working hours and holidays amongst the general public. The further the emphasis of a role moves towards leadership the more activities move away from visible "work" and more towards strategic activities that may be completed invisibly.

Perhaps naively I entered the world of SLT with the view that I was already really busy as a head of department, and that one of the things that caused me to be busy was the fact I still had a substantial timetable. I expected that my SLT workload could not possibly be bigger than my HoD workload; my mind argued that while I'd have more management work to do, I'd also have more time to do it because I'd have more non-contact time. Don't get me wrong, I wasn't ever expecting SLT to be an easy time, and I was not expecting to put my feet up in my office during non-contact times. I will always work hard, but I was fully expecting to be able to manage the workload within a similar pattern to the one I'd established as a head of department.

What I've discovered during these first few weeks is that the number of varied ways in which the invisible or less visible side of the SLT role can burn up non-contact time is incredible. As such my workload has massively increased right now, as I often get far fewer of the management activities done in the time I have available.

Burning time
I might well start a day with just one lesson to teach, but it's not time to kick back and drink coffee all day; there are a multitude of things that will burn off that time and make you feel a bit frantic...

E-mails - I thought I received quite a few as a head of maths; it's doubled since being on SLT. Many of them don't need a response as I get copied in on all sorts, but I still need to read most of them to be able to decide that. I have always found myself to be quite efficient with e-mails in terms of response times and keeping track of it all - but the recent increase in volume does threaten this a bit.

Meetings - wow, there are lots of them as SLT! What with direct line management meetings, SLT meetings, and meetings with parents, governors and other groups relating to your area of responsibility, it's easy to fill up a large proportion of a week. Of course some are not that efficient, and maybe some aren't needed at all, but as yet I've not figured out which ones...

Being the expert - heads of departments, classroom teachers, admin staff, all appear to expect SLT to have the answer to almost any question relating to the school, and can be visibly disappointed if you don't. In some ways I'm fortunate that I was promoted to AHT at the same school, meaning I do already know about the majority of the systems. However there are still a few changes or aspects new to me or new to the school this year that aren't part of my direct responsibility or past experience that have me scratching my head a bit. For those SLT who are entirely new to a school it must be doubly difficult.

Naughty students - I did a reasonable amount of this as a head of department, but when things escalate further and reach SLT you have to support the wider school staff as and when they need it. When this happens it's always going to interrupt time you'd planned to spend marking, planning, sorting e-mails, making plans for the core area of your responsibility, etc. There is no point arriving at a classroom to lend a hand if the student has already gone to the next lesson - you have to respond when you're needed, regardless of the impact on your workload.

Even when the initial incidents are over there is often time to be spent following up. This might be investigating an incident, finding a challenging student, talking with them, making plans for them with the pastoral teams, contacting parents.

In my second week I was required to write a report for our governors about the exam results from the summer. While on lunch duty on one of the days I had planned to get this report completed, I had to deal with a fight between two students, and then lost the entire afternoon investigating it and finding the right response for the students involved. The right thing to do was to deal with the students, but it blew my plans for the week to bits.

Maintaining teaching quality
In amongst all of this I'm still teaching, and with distractions and interruptions to time intended to be spent planning, marking, etc., it can be a genuine challenge to keep on top of it all and maintain the overall quality of teaching.

I have never bought into the idea that all SLT have to be outstanding teachers. They just need to be 100% reliably good teachers, and be able to bring out the best teaching in others (whether that is branded as good, outstanding or whatever). They need to follow all school classroom policies and model the behaviours expected in others.

As a result of this, while I'm confident in my teaching I felt some pressure when planning and delivering my observed lesson this week. It's too easy to become lazy with planning if you only have one or two lessons in a day - other things float up the priority list and you arrive at a lesson only partially planned. This is compounded a little when you're teaching in a multitude of different rooms and don't have a fixed/known set of resources to draw upon, as SLT rarely get their own base classroom.

This all sounds fairly downbeat...
As I'm writing this it seems like I'm highlighting all the challenges of the job, and you might think I am regretting the move... That's not the case in the slightest. I'm really enjoying the job; it's just such a big step from where I was last year to where I am now. I've gone from feeling completely in control as a head of department to just about maintaining control as an assistant head, which brings with it a level of stress that isn't entirely comfortable at the moment. I like to feel that I know what I'm doing and how to do it - currently that balance isn't quite right but it's getting there. I've hit the ground running, but the ground was already moving quickly! As time goes on I'm adjusting how I approach each week to ensure that I maintain control and can get further and further on top of things. An indication of this is that I've found time to write this post this week!

I've no idea if this post will be interesting to anyone other than me - frankly that's not the point of it. I'll try to update on my progress as AHT as we continue through the year, mainly to remind myself that I'm making progress! If you have any thoughts or comments I'd be keen to hear them.

Sunday 31 August 2014

Pointless marking?

This post is written in response to a "Thunk" from @TeacherToolkit - see here.

What's the point in marking?
Perhaps a reason that it seems nobody's answered this 'Thunk' before is that it's a bit obvious; we all know one of the basic tasks in a teacher's workload is to mark stuff. When non-teachers go on about long holidays, only working from 9 till 3 and all the standard misconceptions, teachers will universally include marking in the list of things that take up time around the taught lessons. However, if we put the preconception that marking is just a part of a teacher's being to one side, what is the actual point of it? Who gains from all this time spent? Do we do it because we want to, have to or need to? Also, is it done for the students or for the teacher?

What if we all stopped Marking?
I'm a fan of thought experiments, so let's consider a system where there is no marking at all - what would we lose? Let's take it slightly further for a second - no assessment at all by the teacher.

For the sake of argument, with no marking or assessment the teacher's role would look something like this:


At the end of each lesson the teacher would have to decide what to teach in the next lesson based on an assumption of what's been understood. Here you would need to consider the fact that an intended lesson goes through a series of filters between inception, planning, and delivery, and then again from delivery to reception and recall:


Filters from intent to recall…
1. The original intention becomes filtered to the actual plan by what's possible given constraints of timetable, school, students, staff, resources, etc.
2. The planned lesson becomes filtered to the lesson actually delivered by real life on the day: something not quite going to plan, students not following the expected route, behaviour issues, interruptions, the teacher's state of mind, detail of choices on the day, etc.
3. The lesson delivered is filtered to the lesson actually received by prior knowledge, attention levels, language/numeracy skills, cognitive load, method of delivery, etc.
4. The lesson as received is filtered to the lesson recalled by the influence of other factors such as other lessons/happenings after the event, levels of interest, and so on.

You will also see that I've separated the latter 3 stages between the Teacher's view and the Student's view. This is important - the teacher, with deep subject knowledge, knowledge of the original intention and plan, and sight of a bigger picture for the subject, is likely to perceive the lesson in a different way to the students. In fact the 'Student's perspective' row should really be multiplied by the number of individual students in the class, as the experience of one may well be very different to others. (Also note for reference that if the lesson is observed then there would need to be a whole extra row to cover the observer's point of view, but that's another discussion altogether...) Basically what I'm saying here is that everyone in the lesson will have their own unique perspective on the learning that took place in it.

How accurate are your assumptions?
As a teacher delivering lessons with no assessment and no marking you would have to rely entirely on your assumptions of what the students receive and recall from each lesson. An inaccuracy in one lesson would likely be compounded in the next until the intended learning path is left behind entirely over a period of time. I'd suggest only the most arrogant of teachers would attempt to argue that they could keep a whole class on track and keep lessons effective without any form of marking or assessment, and frankly they'd be wrong if they tried.

Open loop control
Basically without assessment and without marking, we are using what would be called an open loop control system in engineering terms. A basic toaster is an example of a device that uses open loop control. You put the bread in and it heats on full power for a period of time, and then pops up. The resulting toast may be barely warm bread, perfect toast, or a charred mess. The toaster itself has no mechanism to determine the state of the toast, there is no feedback to tell the toaster to switch off before the toast begins to burn. To improve the system we need to close the loop in the control system; we need to observe the toast and take action if it's burning. Closed loop control is really what we want, as this uses feedback to adjust the input, which takes us to the Deming cycle...
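To make the toaster analogy concrete, here's a minimal Python sketch (my own illustration, with made-up numbers): the open-loop version runs a fixed timer whatever the bread does, while the closed-loop version checks the toast and stops when it reaches the target.

```python
# Illustrative sketch of the toaster analogy. Open loop: fixed timer, no
# feedback, so the result depends entirely on the assumptions being right.
# Closed loop: check the toast as you go and stop when it's done.

def open_loop_toast(browning_per_second, seconds=60):
    """Heat on full power for a fixed time, regardless of the result."""
    return browning_per_second * seconds   # warm bread or charred mess

def closed_loop_toast(browning_per_second, target=100):
    """Check the browning each second and stop once the target is reached."""
    browning = 0.0
    while browning < target:
        browning += browning_per_second    # feedback: observe, then act
    return browning

print(open_loop_toast(0.5), open_loop_toast(3.0))      # 30.0 vs 180.0
print(closed_loop_toast(0.5), closed_loop_toast(3.0))  # both finish near 100
```

Marking plays the same role as the browning check: it's the feedback that lets the teacher stop assuming and start adjusting.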

Deming cycle = Plan, Do, Check, Act (PDCA)
Dr W. Edwards Deming pioneered the PDCA cycle in the post-WW2 Japanese motor industry. His work on continuous improvement and quality management has become pervasive across engineering sectors, and he is generally regarded as the father of modern quality management.

PDCA is simply a closed loop cycle, where you Plan something, Do it, Check if it did what you wanted it to, and then Act in response to your checking to develop things further. The ideal is that this then leads into another PDCA cycle to deliver another improvement, with feedback being sought on an ongoing basis to adjust the inputs.

As I trained in engineering and became a Chartered Engineer in my career before switching to teaching, I have always seen a series of lessons as a series of PDCA cycles. I plan a lesson, I deliver it, I find some way to check how effective it was, and I deliver another one. In my best lessons I manage to incorporate a number of PDCA cycles within the lesson, adjusting the content/activities in response to the progress being made.
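Carrying that on, here's a rough sketch (again my own hypothetical illustration, with made-up topics, rather than anything from the original post) of a sequence of lessons as PDCA cycles, with the Check step - marking or in-class assessment - feeding the next Plan:

```python
# Hypothetical sketch: lessons as Plan-Do-Check-Act cycles, where the Check
# step (marking / in-class assessment) closes the loop into the next Plan.

def plan(previous_adjustment):
    """Plan: build the next lesson around what the last cycle revealed."""
    return {"reteach": previous_adjustment.get("not_yet_secure", []),
            "move_on": previous_adjustment.get("secure", [])}

def do(lesson_plan):
    """Do: deliver the lesson (in reality the 'filters' above apply here)."""
    return {"work_produced_from": lesson_plan}

def check(lesson_outcome):
    """Check: review the books to see what actually landed (made-up topics)."""
    return {"secure": ["plotting coordinates"], "not_yet_secure": ["negative axes"]}

def act(check_result):
    """Act: decide the adjustment that the next Plan will absorb."""
    return check_result

adjustment = {}
for lesson_number in range(1, 4):          # three lessons, three closed loops
    lesson_plan = plan(adjustment)         # Plan
    outcome = do(lesson_plan)              # Do
    findings = check(outcome)              # Check
    adjustment = act(findings)             # Act -> feeds the next Plan
    print(lesson_number, lesson_plan)
```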

Marking helps us to create a closed loop system.
The model with no marking or assessment is open loop. It would rely so heavily on making assumptions about what had or hadn't been learnt that it would become ineffective very quickly for the majority of classes.

By reviewing what students have actually done in a lesson we can determine how effective our teaching has been. We can make adjustments to future lessons, or we can provide guidance and feedback direct to the student to correct misunderstandings. (note there can be a vast difference between what has actually been done and what we think has been done both at an individual and a class level)

As a result of this need to close the loop an absolutely vital role for marking is to provide feedback to the teacher on the impact of their lessons. (As John Hattie says - "know thy impact").

Is it regular enough?
Note that if marking is the only form of feedback a teacher gets then it needs to be done regularly enough to have an impact on their teaching. Between marking cycles the teacher is running an open loop system, with all the issues that this brings with it. As such we either need to mark regularly enough to keep the PDCA cycle as short as possible, minimising the time left with an open loop, or we need to build in some other form of assessment.

Other assessment
Gaining feedback within a lesson or within a marking cycle is where AFL in its truest sense comes in. Through assessment that takes place during lessons the PDCA cycle time is reduced right down - the teacher gets feedback outside of the marking cycle, meaning changes can be made either within lesson or for the next lesson. I'm not going to discuss AFL in detail here as this post is about marking, but this is why AFL is so important, particularly if you have a long cycle time on your marking. (note for the purposes of this discussion I'm drawing a distinction here between AFL techniques deployed in lesson with students present, against marking where a teacher is reviewing work when the students are elsewhere - I appreciate there can be and should be an overlap between AFL and marking, I'm just ignoring it right now)

RAG123 shortens the closed loop
You may have seen my other posts on RAG123, if not see here for a quick guide, or here for all of my RAG123 related posts. I'm sure those of you that have seen my other posts will probably have been waiting for me to mention it!

For me the key thing that RAG123 does is to shorten the marking cycle time, and that's one of the reasons that it is so effective. By reviewing work after every lesson (ideally) you augment any AFL done in lesson, and can plan to make sure your next lesson is well aligned to the learning that took place in the previous one. More on RAG123 as formative planning is in this post.

Marking for the student
I'm guessing by now that some of you will be getting frustrated because I've hardly mentioned the other purpose of marking - giving feedback to the student... After all teaching is all about learning for students!

From a student's perspective I think marking can be about many things depending on their relationship with school, that subject or that teacher. Sometimes it's about checking they've done it correctly. Sometimes it's about finding out what they did incorrectly. Sometimes they engage deeply, sometimes they dismiss it entirely (or appear to).

If we go back to closed loop vs open loop control for a moment then a lack of marking leaves the students functioning in an open loop system as well as the teacher. In engineering terms their control system needs feedback, otherwise they could go off in a direction that is nowhere near correct. Just like a tennis player benefits from the input of an expert coach to help them to develop their game, a student benefits from the input from an expert to help them develop their learning.

Hit and miss
In truth though I think marking as a direct form of feedback to a student is far more hit and miss than teachers using it for feedback on their own practice. Depending on the quality of the marking and the level of engagement from the student, this could range from really informative to utterly pointless. Sometimes the best students are given poor feedback, or the least engaged students fantastic feedback; arguably both are pointless. Also what seems like fantastic and detailed feedback from a teacher's (or observer's) perspective could easily be ignored or misunderstood by a student.

This potential for ineffective marking/feedback is why it is so important to try and establish dialogue in marking; again we're looking for a feedback loop, this time on the marking itself. However I'm keen to highlight that in my view dialogue doesn't always have to be written down. Discussion of feedback verbally can be much more effective than a written exchange in an exercise book, just like a face to face conversation can be more effective and result in fewer misunderstandings than an e-mail exchange.

In summary
To get back to the original question... The point of marking is to give the teacher feedback on their lessons, and to give students feedback on their learning. Both are vitally important.

The best marking has impact on the students so it changes what they do next. Good marking highlights to students that what they do is valued, highlights aspects where they have succeeded, and areas/methods to help them improve. 

However the very best marking should also have impact on the teacher and what they do next. It's not a one way street, and we have a responsibility as professionals to adjust our practice to help our students maximise their learning. For example perhaps Kylie needs to develop her skills at adding fractions, or perhaps Mr Lister needs to try a different way of describing fractions to Kylie so she understands it more fully.

In short, if you are marking in a way that doesn't change what you or they do next then you're wasting your time...

This is just what I think, of course you're welcome to agree or disagree!

Saturday 12 July 2014

Managing with colours - SLTeachmeet presentation

These are the slides I presented at #SLTeachmeet earlier today. Click here



The info shared in the presentation picks up on aspects covered in these posts:
Using measures to improve performance

Using seating plans with student data

RAG123 basics

As always feedback is always welcome...




Teachmeet Stratford, build it and they'll come

I went to my first ever teachmeet last year at #TMSolihull, then #LeadmeetCov, followed by #TMCov. I thought they were brilliant, but I was aware that I was one of only 2 people at my school that had even heard of teachmeets, let alone been to one. We were missing out on this fantastic free CPD... So with willing offers of help and general encouragement from my fellow teachmeet attendee Rob Williams (@robewilliams79) we decided to organise one...

#TMStratford was born!
Having quickly cleared it with our head (he basically greeted my suggestion with a bemused expression and "sounds intriguing, are they really mainly organised via twitter? hmm..., ok Kev we'll give it a try") I booked the venue and dubbed it #TMStratford for the first time on twitter....

Then came the self doubt...
Hang on... it dawned on me:

  • We'll need to invite a load of people, many of whom I have never actually met in person... 
  • We're hosting it at a school in which only a few people have even heard of a teachmeet, and only one person has ever presented at one.
  • We've never arranged this kind of event before - where/how do we get sponsors, etc?
  • All the people I've seen arranging this kind of thing are SLT, but I'm a HoD - can I get this off the ground and do I have time to do it?
Fundamentally: Will anyone come? Will anyone from our school come? If people come will anyone other than the two of us be willing to present? Will we end up costing the school a load of money for a real flop?

In the face of growing self doubt and uncertainty we decided to press on regardless... "how could it possibly fail!"

We built it and they came!
Just a few months later I found myself stood with a microphone in front of about 75 people, kicking off the first ever teachmeet to be held at our school. We had prizes, flash-looking IT provision, nice food laid on, even a cuddly toy to lob at those that overran their presentation length... A couple of hours after that it was all over and Rob and I were being congratulated by the head, other SLT, and attendees both from inside our school and others who had travelled further to be there. I could also flip through the #TMStratford Twitter feed and see loads of positive comments....

People had turned up!
What's more 25 staff from our school had turned up!
We had 16 presentations, including several from staff at our school taking the leap to present at their very first teachmeet!

All of the presentations given are available here: http://bit.ly/TMstratford2014
(About 35 mins of the event was recorded on video too, until the battery ran out! This will be published once I finish tidying it up...)

For a first ever event I was over the moon with it, and still am!

Key things...
I think a few things helped the event to be successful.... 
Firstly incessant publicity. I think I tweeted links to the signup page at least 100 times in the months before the event. I targeted people that I knew had been to other local teachmeets, I sought retweets from big hitter tweachers to increase visibility beyond my reach. We also sent information to other local schools and raised it over and over again in staff briefings.

For speakers whenever someone signed up I asked and encouraged them to present - remarkably lots agreed! I am massively grateful to all of those who took the time firstly to prepare something to say, but then to actually deliver it on the night - the quality of their input really made the event the success it was.

For sponsors I did contact one or two, but I was surprised how many others just got in contact once we got the publicity out there. Perhaps I was just lucky but it became really quite easy to put together prizes and freebies once this kind of thing was offered. Again I am grateful to all of the sponsors that contributed - you can see who they were on the pbworks page here: http://bit.ly/TMStratford

Finally it was the others in the school who came together behind the scenes to make it what it was: the marketing team who developed graphics, flyers, signage on the night, etc.; the IT team who dealt admirably with all the tech aspects, including a last minute projector replacement finished literally just 15 minutes before the event started; and the catering team who put together some nice food to keep us going while networking in the interval. A heartfelt thanks to these teams who really helped make the event run smoothly.

Definitely doing it again
It was only afterwards that I realised how much had been pulled together to make the event work, and to some extent how stressful it had been. Regardless of the stress it was absolutely worth it, and I'm already thinking about when in the calendar to place the next one, as part of a "programme of teachmeets" that the school is now looking to run both internally and externally.

If you've never been to a teachmeet - find one near you and get along, it's some of the best CPD you'll get, and it's free!

If your school has never hosted one then why not be the person that arranges the first one? If not you, who? If not this year, when?

(Sorry if this post was a bit self-congratulatory, it's more intended to be an illustration that you don't have to wait for someone else to organise something - just go and do it yourself, you'll be amazed at what's possible!)

Feedback welcome as always...

Saturday 14 June 2014

Powerful percentages

Numbers are powerful, statistics are powerful, but they must be used correctly and responsibly. Leaders need to use data to help take decisions and measure progress, but leaders also need to make sure that they know where limitations creep into data, particularly when it's processed into summary figures.

This links quite closely to this post by David Didau (@Learningspy) where he discusses availability bias - i.e. being biased because you're using the data that is available rather than thinking about it more deeply.

As part of this there is an important misuse of percentages that as a maths teacher I feel the need to highlight... basically when you turn raw numbers into percentages it can add weight to them, but sometimes this weight is undeserved...

Percentages can end up being discrete measures dressed up as continuous
Quick reminder of GCSE data types - Discrete data is in chunks; it can't take values between particular points. Classic examples might be shoe sizes, where there is no measure between size 9 and size 10, or favourite flavours of crisps, where there is no mid point between Cheese & Onion and Smoky Bacon.

Continuous data can have sub divisions inserted between them, for example a measure of height could be in metres, centimetres, millimetres and so on - it can keep on being divided.

The problem with percentages is that they look continuous - you can quote 27%, 34.5%, 93.2453%. However the data used to calculate the percentage actually imposes discrete limits to the possible outcome. A sample of 1 can only have a result of 0% or 100%, a sample of 2 can only result in 0%, 50% or 100%, 3 can only give 0%, 33.3%, 66.7% or 100%, and so on. Even with 200 data points you can only have 201 separate percentage value outputs - it's not really continuous unless you get to massive samples.
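To illustrate just how coarse these 'continuous-looking' figures are, here's a quick Python sketch (my own, not from the original post) listing every value a yes/no percentage can actually take for a given sample size:

```python
# Minimal sketch: the only percentage values a yes/no measure over n data
# points can produce. Small samples make the figure far coarser than it looks.

def achievable_percentages(n):
    return [round(100 * k / n, 1) for k in range(n + 1)]

for n in (1, 2, 3, 200):
    print(f"n={n}: {len(achievable_percentages(n))} possible values")
# n=1 allows only 0% or 100%; n=2 allows 0%, 50%, 100%;
# n=3 allows 0%, 33.3%, 66.7%, 100%; even n=200 allows just 201 values.
```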

It LOOKS continuous and is talked about like a continuous measure, but it is actually often discrete and determined by the sample that you are working with.

Percentages as discrete data makes setting targets difficult for small groups
Picture a school that sets an overall target that at least 80% of students in a particular category (receipt of pupil premium, SEN needs, whatever else) are expected to meet or exceed expected progress.

In this hypothetical school there are three equivalent classes; let's call them A, B and C. In class A we can calculate that 50% of these students are making expected progress; in class B it's 100%, and in class C it's 0%. At face value Class A is 30 percentage points behind target, B is 20 ahead and C is 80 behind, but that's completely misleading...

Class A has two students in this category: one is making expected progress, the other isn't. As such it's impossible to meet the 80% target in this class - the only options are 0%, 50% or 100%. If the whole school target of 80% accepts that some students may not reach expected progress, then by definition you have to accept that 50% might be on target for this specific class. You might argue that 80% is closer to 100% so that should be the target for this class, but that means that this teacher has to achieve 100% where the whole school is only aiming at 80%! The school has room for error but this class doesn't! To suggest that this teacher is underperforming because they haven't hit 100% is unfair. Here the percentage has completely confused the issue, when what's really important is whether these 2 individuals are learning as well as they can.

Class B and C might each have only one student in this category. But it doesn't mean that the teacher of class B is better than that of class C. In class B the student's category happens to have no significant impact on their learning in that subject, they progress alongside the rest of the class with no issues, with no specific extra input from the teacher. In class C the student is also a young carer and misses extended periods from school; when present they work well but there are gaps in their knowledge due to absences that even the best teacher will struggle to fill. To suggest that either teacher is more successful than the other on the basis of this data is completely misleading as the detailed status of individual students is far more significant.

What this is intended to illustrate is that taking a target for a large population of students and applying it to much smaller subsets can cause real issues. Maybe the 80% works at a whole school level, but surely it makes much more sense at a class level to talk about the individual students rather than reducing them to a misleading percentage?

Percentage amplifies small populations into large ones
Simply because percent means "per hundred" we start to picture large numbers. When we state that 67% of books reviewed have been marked in the last two weeks it conjures up images of 67 books out of 100. However that statistic could have been arrived at having reviewed only 3 books, 2 of which had been marked recently. The percentage gives no indication of the true sample size, and therefore 67% could hide the fact that the next step up could be 100%!

If the following month the same measure is quoted as having jumped to 75% it looks like a big improvement, but it could simply be 9 out of 12 this time, compared to 8 out of 12 the previous month.  Arithmetically the percentages are correct (given rounding), but the apparent step change from 67% to 75% is actually far less impressive when described as 8/12 vs 9/12. As a percentage it suggests a big move in the population; as a fraction it means only one more meeting the measure.
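As a quick worked check of that example (a sketch of my own using the numbers above), a single extra book over the line in a sample of 12 is worth more than 8 percentage points on the headline figure:

```python
# Minimal sketch: one extra book marked out of 12 moves the headline figure
# by over 8 percentage points - the small sample amplifies the change.

def as_percentage(hits, sample_size):
    return 100 * hits / sample_size

before = as_percentage(8, 12)   # 66.7%, reported as "67%"
after = as_percentage(9, 12)    # 75.0%
print(f"before {before:.1f}%, after {after:.1f}%, jump {after - before:.1f} points")
print(f"each book is worth {100 / 12:.1f} percentage points")
```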

You can get a similar issue if a school is grading lessons/teaching and reports 72% good or better in one round of reviews, and then sees 84% in the next. (Many schools are still doing this type of grading and summary; I'm not going to debate the rights and wrongs here - there are other places for that.) However the 72% is the result of 18 good or better out of 25 seen, and the 84% is the result of 21 out of 25. So the 12-percentage-point jump is due to just 3 teachers flipping from one grade to the next.

Basically when your population is below 100 an individual piece of data is worth more than 1% and it's vital not to forget this. Quoting a small population as a percentage amplifies any apparent changes, and this effect increases as the population size shrinks. The smaller your population the bigger the amplification. So with a small population a positive change looks more positive as a percentage, and a negative change looks more negative as a percentage.

Being able to calculate a percentage doesn't mean you should
I guess to some extent I'm talking about an aspect of numeracy that gets overlooked. The view could be that if you know the arithmetic method for calculating a percentage, then so long as you do that calculation correctly the numbers are right. The logic follows that if the numbers are right then any decisions based on them must be right too. But this doesn't work.

The numbers might be correct but the decision may be flawed. Comparing this to a literacy example might help. I can write a sentence that is correct grammatically, but that does not mean the sentence must be true. The words can be spelled correctly, in the correct order and punctuation might be flawless. However the meaning of the sentence could be completely incorrect. (I appreciate that there might be some irony in that I may have made unwitting errors in this sentence about grammar - corrections welcome!)

For percentage calculations then the numbers may well be correct arithmetically but we always need to check the nature of the data that was used to generate these numbers and be aware of the limitations to the data. Taking decisions while ignoring these limitations significantly harms the quality of the decision.

Other sources of confusion
None of the above deals with variability or reliability in the measures used as part of your sample, but that's important too. If your survey of books could have given a slightly different result had you chosen different books, different students or different teachers, then there is an inherent lack of repeatability to the data. If you're reporting a change between two tests then anything within test-to-test variation simply can't be assumed to be a real difference. Apparent movements of 50% or more could be statistically insignificant if the process used to collect the data is unreliable. Again the numbers may be arithmetically sound, but the statistical conclusion may not be.
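To put a rough number on that last point, here's a simple permutation-style simulation (my own sketch, not something from the original post) applied to the earlier lesson-grading example of 18/25 then 21/25 'good or better'. It asks: if nothing had really changed between the two rounds, how often would a gap of 3 or more lessons appear purely by chance?

```python
# Rough permutation sketch for the 72% -> 84% example (18/25 then 21/25).
# Under the null of no real change, how often does a gap of 3+ lessons
# (i.e. 12 percentage points) appear by chance alone?
import random

round1_good, round2_good, n = 18, 21, 25
observed_gap = abs(round2_good - round1_good)

# Pool all 50 observations, then repeatedly re-deal them into two rounds.
pooled = [1] * (round1_good + round2_good) + [0] * (2 * n - round1_good - round2_good)

trials, as_big_or_bigger = 20000, 0
for _ in range(trials):
    random.shuffle(pooled)
    gap = abs(sum(pooled[:n]) - sum(pooled[n:]))
    if gap >= observed_gap:
        as_big_or_bigger += 1

print(f"chance of a gap of {observed_gap}+ with no real change: "
      f"{as_big_or_bigger / trials:.2f}")
# Typically comes out around 0.5 - nowhere near the conventional 0.05
# threshold, so this 'improvement' sits comfortably within normal variation.
```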

Draw conclusions with caution
So what I'm really trying to say is that the next time someone starts talking about percentages try to look past the data and make sure that it makes sense to summarise it as a percentage. Make sure you understand what discrete limitations the population size has imposed, and try to get a feel for how sensitive the percentage figures are to small changes in the results.

By all means use percentages, but use them consciously with knowledge of their limitations.


As always - all thoughts/comments welcome...

Saturday 7 June 2014

RAG123 is not the same as traffic lights

I've written regularly about RAG123 since starting it as an initial trial in November 2013, and I still view it as the single most important thing I've discovered as a teacher. It's now absolutely central to my teaching practice, but I do fear that at times people misunderstand what RAG123 is all about. They see the colours and decide it is just another version of traffic lighting or thumbs up/across/down AFL. I'm sure it gets dismissed as "lazy marking", but the reality is that it is much, much more than marking.

As an example of this - judging RAG123 at a surface level without really understanding the depth - I was recently directed to the Ofsted document "Mathematics made to measure", found here. I'd read this document some time ago and it is certainly a worthwhile read for anyone in a maths department, particularly those leading/managing the subject, but it may well provide useful thoughts to those with other specialisms. There is a section (paragraphs 88-99) presented under the subheading "Marking: the importance of getting it right" - it was suggested to me that RAG123 fell foul of the good practice recommended in these paragraphs, was even explicitly criticised as traffic lighting, and as such isn't a good approach to follow.

Having read the document again I actually see RAG123 as fully in line with the recommendations of good practice in the Ofsted document and I'd like to try and explain why....

The paragraphs below (incl paragraph numbers) are cut & pasted directly from the Ofsted document (italics), my responses are shown in bold:

88. Inconsistency in the quality, frequency and usefulness of teachers’ marking is a perennial concern. The best marking noted during the survey gave pupils insight into their errors, distinguishing between slips and misunderstanding, and pupils took notice of and learnt from the feedback. Where work was all correct, a further question or challenge was occasionally presented and, in the best examples, this developed into a dialogue between teacher and pupil.
RAG123 gives a consistent quality and frequency to marking. Errors and misunderstandings seen in a RAG123 review can be addressed either in the marking itself or through adjustments to the planning for the next lesson. The speed of turnaround between work done, marking done/feedback given, pupil response, and follow-up review by the teacher means that real dialogue can happen in marking.

89. More commonly, comments written in pupils’ books by teachers related either to the quantity of work completed or its presentation. Too little marking indicated the way forward or provided useful pointers for improvement. The weakest practice was generally in secondary schools where cursory ticks on most pages showed that the work had been seen by the teacher. This was occasionally in line with a department’s marking policy, but it implied that work was correct when that was not always the case. In some instances, pupils’ classwork was never marked or checked by the teacher. As a result, pupils can develop very bad habits of presentation and be unclear about which work is correct.
With RAG123 ALL work is seen by the teacher - there is no space for bad habits to develop or persist. While the effort grading could be linked to quantity or presentation, it should also be shaped by the effort that the teacher observed in the lesson. Written comments/corrections may not be present in all books, but corrections can be applied in the next lesson without the need for the teacher to write loads down. This can be achieved in various ways, from 1:1 discussion to changing the whole lesson plan.

90. A similar concern emerged around the frequent use of online software which requires pupils to input answers only. Although teachers were able to keep track of classwork and homework completed and had information about stronger and weaker areas of pupils’ work, no attention was given to how well the work was set out, or whether correct methods and notation were used.
Irrelevant to RAG123

91. Teachers may have 30 or more sets of homework to mark, so looking at the detail and writing helpful comments or pointers for the way forward is time consuming. However, the most valuable marking enables pupils to overcome errors or difficulties, and deepen their understanding.
Combining RAG123 with targeted follow up/DIRT does exactly this in an efficient way.


Paragraphs 92 & 93 simply refer to examples given in the report and aren't relevant here.

94. Some marking did not distinguish between types of errors and, occasionally, correct work was marked as wrong.
Always a risk in any marking - RAG123 is not immune, but neither is any other approach. However, given that RAG123 focuses on only a single lesson's work, the quantity is smaller, so there is a greater chance that variations in students' work will be seen and addressed.

95. At other times, teachers gave insufficient attention to correcting pupils’ mathematical presentation, for instance, when 6 ÷ 54 was written incorrectly instead of 54 ÷ 6, or the incorrect use of the equals sign in the solution of an equation.
Again a risk in all marking and RAG123 is not immune, but it does give the opportunity for frequent and repeated corrections/highlighting of these errors so that they don't become habits.

96. Most marking by pupils of their own work was done when the teacher read out the answers to exercises or took answers from other members of the class. Sometimes, pupils were expected to check their answers against those in the back of the text book. In each of these circumstances, attention was rarely paid to the source of any errors, for example when a pupil made a sign error while expanding brackets and another omitted to write down the ‘0’ place holder in a long multiplication calculation. When classwork was not marked by the teacher or pupil, mistakes were unnoticed.
With RAG123 ALL work is seen by the teacher - they can look at incorrect work and determine what the error was, either addressing it directly with the student or if it is widespread taking action at whole class level.

97. The involvement of pupils in self-assessment was a strong feature of the most effective assessment practice. For instance, in one school, Year 4 pupils completed their self-assessments using ‘I can …’ statements and selected their own curricular targets such as ‘add and subtract two-digit numbers mentally’ and ‘solve 1 and 2 step problems’. Subsequent work provided opportunities for pupils to work on these aspects.
The best use of RAG123 asks students to self assess with a reason for their rating. Teachers can review, respond to and shape these self assessments in a very dynamic way due to the speed of turnaround. It also gives a direct chance to follow up by linking to DIRT.

98. An unhelpful reliance on self-assessment of learning by pupils was prevalent in some of the schools. In plenary sessions at the end of lessons, teachers typically revisited the learning objectives, and asked pupils to assess their own understanding, often through ‘thumbs’, ‘smiley faces’ or traffic lights. However, such assessment was often superficial and may be unreliable.
Assessment of EFFORT as well as understanding in RAG123 is very different to these single-dimension assessments. I agree that sometimes the understanding bit is unreliable. However with RAG123 the teacher reviews and changes the pupil's RAG123 rating based on the work done/seen in class, so it becomes more accurate once reviewed. The reliability is also often improved by asking students to explain why they deserve that rating. The effort bit is vital though... If a student is trying as hard as they can (G) then it is the teacher's responsibility to ensure that they gain understanding. If a student is only partially trying (A) then the teacher's impact will be limited. If a student is not trying at all (R) then even the most awesome teacher will not be able to improve their understanding. By highlighting and taking action on the effort side it emphasises the student's key input to the learning process. While traffic lights may very well be ineffective as a single-shot self assessment of understanding, when used as a metaphor for likely progress given RAG effort levels, Green certainly is Go, and Red certainly is Stop.

99. Rather than asking pupils at the end of the lesson to indicate how well they had met learning objectives, some effective teachers set a problem which would confirm pupils’ learning if solved correctly or pick up any remaining lack of understanding. One teacher, having discussed briefly what had been learnt with the class, gave each pupil a couple of questions on pre-prepared cards. She took the cards in as the pupils left the room and used their answers to inform the next day’s lesson planning. Very occasionally, a teacher used the plenary imaginatively to set a challenging problem with the intention that pupils should think about it ready for the start of new learning in the next lesson.
This is an aspect of good practice that can be applied completely alongside RAG123, in fact the "use to inform the next day's lesson planning" is something that is baked in with daily RAG123 - by knowing exactly the written output from one lesson you are MUCH more likely to take account of it in the next one.

So there you have it - I see RAG123 as entirely in line with all the aspects of best practice identified here. Don't let the traffic light wording confuse you - RAG123 as deployed properly isn't anything like a single dimension traffic light self assessment - it just might share the colours. If you don't like the colours and can't get past that bit then define it as ABC123 instead - it'll still be just as effective and it'll still be the best thing you've done in teaching!

All comments welcome as ever!