Welcome


My interest in the idea of sharing pedagogical purposes comes directly from the contact I have had with the Project for Enhancing Effective Learning (PEEL) at Monash University in Australia. Each of the teachers involved was very active in establishing learning agendas with their classes, and the impact they were having was inspiring. Each classroom tool can have a purpose beyond delivering content, and this needs to be shared.
I suppose the purpose of this website is to collate, crystallise and open dialogues about how to increase this within classrooms. As the quote from Carl Bereiter illustrates, this classroom methodology can empower our students.


Tuesday, 13 December 2011

SOLO taxonomy, planning and progress

Today, a group of students completed an extended writing task based around the wonderful PEEL strategy "fact in fiction". Since we had not seen each other for a week, I structured a few tasks to remind them of the content (and resources) and to facilitate the connection of concepts.


First was a group task that had them list key ideas about the immune system, a chance to familiarise themselves with the content once again. A purely multistructural task.


Next, they were to define and distinguish between some key vocabulary: words I knew they had struggled with in the previous lesson. The "distinguish" element was there to ensure a relational understanding.
Another relational task followed, but this time a choice was given: either comparing two ideas using a comparison alley, or using an analogy map to create mental models for how these two ideas work. I was pleased to have introduced choice, as some of the analogies were very revealing. I particularly enjoyed the students likening lymphocytes to a bottle of bleach, as they release a chemical against pathogens.


All of these tasks were designed to be about five minutes in length and very much focused on the content. The next one was more complex, but one I'm hoping will lead to a detailed sequence. Again a relational task, but one that required several pieces of complex information to be used. The keywords here are the conceptual parts of this content.


The final task was to complete the fact in fiction task. This involves writing a fictional story, inserting relevant facts along the way; the clever bit is the insistence that the facts are underlined as they go. This encourages the use of key vocabulary and regular reflection. It is obvious when work is lacking in content: this visual nature makes it easy for the students to see omissions and flaws in their work. There are opportunities for students to work in an extended abstract way here, and at the very least it encourages relational thinking. (See below for an example)


SOLO taxonomy has helped the planning by making it easier to see the increase in the demand of each task and to focus on the key connections important for understanding this topic. It is in this way that knowledge and understanding can be built, teachable moments found, and attention then focused upon what matters for the students' understanding at that time. Simple everyday tasks are easily sequenced to plan for (more) complex responses in student work.


An example of a Fact in Fiction task


A weary traveller's tale.
 (A fact in fiction writing task)
Bob and Billy are twins. Identical twins, the same in every way. They have just returned from holiday in China. Bob is jet lagged but is generally just dandy! Billy is not. He is feeling unwell. He has a fever, diarrhoea, and a rose-coloured rash. Five weeks before they went, they had an appointment to be vaccinated for Typhoid and Malaria. Unfortunately, Billy had double-booked with a hair appointment. His hair looked superb on the plane; even the air hostess said so!


Your task is to complete the story of Bob and Billy, outlining how Billy gets better with the help of his immune system. You must use as many scientific facts as possible. Underline them as you go. Make sure you include the following:


• How the white blood cells in Bob's body are working to protect him.
• Name the two types of white blood cell.
• How the immune system will fight the disease.
• What it means to be immune.
• What a vaccination is and how it works.
• Which disease Billy has, why you think this, and what type of microbe is causing it.


 Good luck.

Student work using fact in fiction


Monday, 25 January 2010

Students DO make progress with SOLO Taxonomy.

It is a fine feeling when your gut feeling turns out to have a concrete and real basis. This is a reflection on my first term's use of SOLO taxonomy.

The graph above shows a year 7 mixed-ability Science class working over four SOLO-assessed tasks. It clearly demonstrates that my students are making measurable progress up the SOLO taxonomy scale. I think this graph also shows my own progress in using SOLO to provide quality feedback. I am aiming to reflect on some of the processes and discussion that occurred in between these points. (I will dedicate the final paragraph to explaining how this data has been compiled.) I must point out, though, that the scores discussed here are solely for analysis and reflective purposes: the students have not had, and will not have, their work reduced down to a number. SOLO taxonomy provides a framework for discussion of work during lessons, for comment-only feedback and for self assessment by the students.

Although the first two points clearly show progress from a score of around 0.5 to 1.0, I believe this is only establishing a useful baseline assessment of my students as they enter their new school. I think it also shows my lack of skill in using this tool, as 0.5 indicates that half the class are operating on average at a prestructural level, and the progress made suggests that on average everyone is working at the unistructural level. In reality two students had progressed to the relational level, while not one single student recorded a multistructural-level score on the first task. So, I am not overwhelmed by its immediate impact, but it certainly seemed to move the students in the right direction. I must also confess that my strategy to introduce a generic SOLO was very much by stealth. It was visible in the class, but only gradually dropped into the conversation when the opportunity presented itself. The first two pieces of work were self assessed by the students using content-specific criteria built on the SOLO structure. No attempt was made to introduce the language straight off. I don't know if this was the right thing to do, but I felt that the students had a lot to cope with in managing the transition to a new school.

The third activity here was a group presentation, so I feel that this has skewed the data a little, as some of the weaker kids scored their highest scores in this activity. This makes me want to underplay the progress here: it suggests that on average the class is now at multistructural, and the raw data shows no one has remained prestructural, but pleasing as this is, it is not assessing individual performance.
However, the value of this activity is significant, as it is at this stage that my students and I began to fully come to terms with using SOLO. The students made a presentation that required using several pieces of information/data to draw a conclusion. I listened to them and made notes, and then together we used the evidence gathered to assign their group a level. This generated great dialogue which I believe has had a lasting impact. Some students were even saying things along the lines of "If we had said this ... would we have got the next level?"

Following this activity I made a presentation about SOLO taxonomy at Teachmeet Northeast, where I came up with an analogy for its use. I decided to use it with my students and formally introduce SOLO taxonomy. It went down a storm, with some students even recognising and identifying the levels before I had explained them; maybe stealth does work as a teaching strategy. The students were also provided with a large copy of the taxonomy for their desks to use as a reference while they worked. The students went on to complete the fourth activity in this review. Halfway through I took several statements from student work and asked them to assess where they thought each one was, and how to improve it. I felt at the time this discussion was useful, so I photographed the examples. Whilst marking the students' scripts I was especially pleased by the amount of crossing out, arrows adding new details, and sentences squashed into gaps that really were not there. Pretty? No, but indicative of reflection and a demonstration of their understanding not only of the content of the lesson but of SOLO taxonomy too.




So, overall this activity yielded a score of around 2.5 on this scale, indicating that the majority of the students were working at the relational level, albeit within small bits of content. This has been the thrust of my feedback: broadening out their knowledge base so that more, and a wider range of, connections could be made. All in all this set of data demonstrates the students making progress through the use and structure of the SOLO taxonomy, in a relatively short period of time.

How the data was produced.
Each SOLO level was assigned a score from 0 to 6. (I have meddled with the structure slightly.)
The number of students at each level was tallied for each activity and multiplied by that level's score.
The total was then divided by the number of students in the class to give an average score per student.
These averages were then plotted against the order of the SOLO-assessed tasks.
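As a rough sketch of that calculation, here is how it can be done in a few lines of Python. The level tallies below are invented purely for illustration; they are not the real class data, and the 0-6 scores simply follow the slightly adjusted SOLO structure mentioned above.

```python
# Sketch of the averaging described above (illustrative numbers only).
def average_solo_score(tallies):
    """tallies maps a SOLO score (0-6) to the number of students at that level."""
    total_students = sum(tallies.values())
    weighted_sum = sum(score * count for score, count in tallies.items())
    return weighted_sum / total_students

# Hypothetical activity with 28 students: 2 prestructural, 10 unistructural,
# 12 multistructural, 4 relational (scores 0, 1, 2, 3 respectively).
activity_tallies = {0: 2, 1: 10, 2: 12, 3: 4}
print(round(average_solo_score(activity_tallies), 2))  # 1.64
```

Plotting one such average per activity, in the order the tasks were set, gives the kind of graph discussed above.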

Tuesday, 17 November 2009

Self Assessment - What value does it have in Enquiry-based learning?

I'll start by saying I am not a fan of the Assessing Pupil Progress (APP) chart; I feel it gives tasks to do rather than describing the thinking behind how to do the task. Presented here is a Science example, although I think this will be true in all subject areas. I am sure the interesting and useful learning points contained within the APP are not the completion of these tasks but the thinking necessary to be able to do them independently. So, for my subject, knowing how to think like a scientist is more important than being able to draw a line graph.
So, I have spent an inordinate amount of time trying to plot the thinking behind key scientific skills. Although these examples are subject specific, I would hazard a guess that all subjects will be able to plot out the developmental thinking that a student needs to go through to make progress within these skills. I have broken these down into 24 separate thinking skills. Some of these are subdivisions of different kinds of thinking: for example, logical thinking has three parts: using data to support theories, deriving ideas from other ideas, and linking cause and effect. Each one has been placed into a level ladder so that students have some form of pathway to follow. This has been a time-consuming task; I have read a ridiculous amount of academic texts trying to find developmental hierarchies. I am not saying what I have done is accurate, but the ladders have given a clear focus to my teaching and have also broken down the skills into steps, so that clear targets can be given. I have also correlated these across the APP so that they have a concrete link to the level ladders.



This example shows with * where the criteria match the APP, strand 5.

Students have recently completed a scientific enquiry designed around some of these skills (8 of the 24). This was an open-ended Science enquiry, requiring students to do a lot of thinking for themselves, or at least make some decisions after being guided through the thinking. At the end of this task they self assessed, with teacher discussion and guidance, to grade their work/thinking during the enquiry. Many were able to justify the levels they awarded themselves.

This is what the results tell me about:

1. How accurate are they at levelling their own work?

2. Is there any difference between ability groups in this?

The following is based upon a random sample of 38 students' work, producing 200 self-assessed levels over 8 of the APP-style criteria.

How the data works: each SAT level is divided into 3 parts (for example 5c, 5b, 5a, leading on to 6c, 6b, 6a), so a difference of 3 sub-levels is one complete SAT level. All levels are compared against the end-of-year reported, teacher-derived SAT levels.

Despite students only awarding themselves the correct SAT level (level 5, 6, etc.) 26% of the time, the students were overall fairly close to their reported SAT levels. Of the 200 levels derived by this method, 37% were the same as the teacher levels. On average the students underestimated themselves by 2.02 sub-levels of a SAT level: for example, they said a 5c and the reported level was 5a (this would be a difference of 2, as would a 5a against a 6b). The spread of the student self-assessed levels is 2.5, so all students are on average within one level of their reported level, backing up the previous measure. So, it is okay to trust the data produced by students.
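To make the arithmetic behind a comparison like this concrete, here is a minimal Python sketch. The sub-level encoding (c, b, a within each level) follows the scheme described above, but the sample pairs of levels are purely illustrative assumptions, not the real class data.

```python
# Encode a SAT sub-level such as "5c" as a count of sub-levels,
# so that a difference of 3 equals one whole SAT level.
SUB_LEVEL = {"c": 0, "b": 1, "a": 2}

def sublevel_score(level):
    """e.g. '5c' -> 15, '5a' -> 17, '6b' -> 19."""
    return int(level[0]) * 3 + SUB_LEVEL[level[1]]

def gap(self_assessed, reported):
    """Positive result means the student underestimated themselves."""
    return sublevel_score(reported) - sublevel_score(self_assessed)

# Illustrative (self-assessed, teacher-reported) pairs only.
pairs = [("5c", "5a"), ("5a", "6b"), ("6b", "6b"), ("4a", "5c")]
gaps = [gap(s, t) for s, t in pairs]
mean_underestimate = sum(gaps) / len(gaps)          # average sub-level gap
exact_match_rate = sum(g == 0 for g in gaps) / len(gaps)
print(gaps, mean_underestimate, exact_match_rate)   # [2, 2, 0, 1] 1.25 0.25
```

Run over all 200 self-assessed levels, statistics computed in this way give the 2.02 sub-level underestimate and the agreement percentages quoted above.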

This underestimation is not a concern, and I am tempted to consider it of value, suggesting that the students have thought about where they have rated themselves. Pleasingly, it also suggests the levels within the ladders are reasonably accurate, contain real challenge for the students and are in some way a rigorous form of assessment. Consider a set of data that completely matched the teacher-assessed levels: would you, firstly, trust it? I wouldn't. It would also make it impossible to identify areas to work upon to improve.

Inevitably some of the skills identified will be more difficult to do than others, especially when the students have never previously been asked to think in these ways. So, I think the data asks a lot of useful questions about how to go about developing these skills and their genuine importance in the learning of Science. I am hopeful that these level ladders will help, as students have placed themselves across the board on each of the 8 skills in this sample.

A few interesting pointers have also come to light when looking at the consistency of the students across the ability range. Although the sample is small and somewhat homogenous, and therefore prone to skewing effects, there is some genuine food for thought.

Firstly, the level 5 students correctly identify their SAT level 40% of the time, and the level 3 and 4 students 33% of the time, compared to only 21% of the level 6 students. (No difference was seen between the top end of level 6 and the low end.) Why is this? Could it be due to the more able students being more reflective about their learning? Could it be down to these students understanding the criteria better? The data suggests yes: the students who most underestimate their grade are the level 6 students, by 2.65 sub-levels, thankfully still within one level. The level 5 students underestimate by just over half a level (1.8 sub-levels), while the level 3 and 4 students overestimate their ability by a small amount (0.33), for example from 3a towards 4c. Another possible explanation is that this form of assessment may actually be testing genuine student ability: it tests what a student can do, not what they can remember or have the ability to write down. I hope so. Oh dear, I'm beginning to defend the APP!

The big thing I'll take from this is that the first draft of my version of the Science APP is just that: a draft. Some of it will need rewriting to make it more accessible for lower-ability students and to make the distinctions between the levels more explicit. But it does seem to be close, and the evidence suggests it's a useful thing to have in the classroom.

I have fourteen of them written and in need of refining, four crudely done and six that have not been started! If anyone is interested in giving me a hand, let me know!