Policy Development Workshop – Assessment for the real world

12/09/2014 On 10 September AQA held the second of three policy development workshops as part of our major project The future of assessment: 2025 and beyond. This event was based on the theme of ‘assessment for the real world’, looking at vocational and practical education and assessment.

With this theme we hoped to inspire a discussion regarding how to assess skills which aren’t covered by traditional written examinations. Attitude, professionalism, ethics, and teamwork are just some examples of the skills required in many vocational professions, from medicine to beauty therapy. We wanted our guests to consider how these skills could be reliably and validly assessed in the future, and what policies might help to achieve this.

Our first speaker was Professor Prue Huddleston, Professor Emeritus at the Centre for Education and Industry at the University of Warwick. She posed some questions for our guests to consider, including whether there is a difference between vocational education and occupational training, and whether assessment should focus on what learners know or on what they can do.

Professor Jen Cleland, John Simpson Chair of Medical Education at the University of Aberdeen, was our next speaker. She is an expert in medical education, and took us through some of the examples of assessment used in medical training and throughout medical careers. She explained the difference between technical skills, such as taking blood pressure, and non-technical skills, including teamwork and talking to patients, and she emphasised the difficulty in measuring and assessing the latter of these skill-sets.

Following the speakers, our guests discussed some of the issues relating to vocational education and assessment. One of the recurring questions asked was about investment in these qualifications – do we as a society value some vocational professions more than others, and invest more time and money into developing more rigorous qualifications in these areas? There was also consensus among the guests about the need for transferability of vocational skills assessment, particularly for young people who are likely to have several jobs across different sectors.

Our audience, made up of professionals and practitioners from across the education and skills sectors, split into three groups, joined by our speakers for smaller, more concentrated discussions. We focused on three key areas – what the standards and outcomes of these qualifications should be, who could define those standards, and how we can evidence outcomes to create confidence in these qualifications.

One of the key conclusions the groups came to was that employers should have a role in both developing and defining the outcomes of any vocational qualification, although this should be carefully balanced with the involvement of education practitioners. However, the groups also recognised that employers are not a single homogenous group, and have different needs and requirements according to their specialisms. Additionally, our guests thought there should be a greater effort to build an evidence base around vocational qualifications, in order to change perceptions of them and to increase their reliability. One suggestion for evidencing the value of a qualification was to use destination data to show the progression of students who had completed it. Technology could also play a role here, as a means by which students complete assessments designed to test skills which traditional written exams cannot properly measure.

The final event in our project will be held on 15 October on the theme of 21st Century Assessment, where we will discuss how technology can make assessment more relevant, valid and practical. If you would like to register for this event, or if you have any comments about the project, please see the details in this blog post.

Our thanks go to our speakers and guests who helped contribute to such an interesting discussion.

Policy Development Workshop – Balancing Assessment and Accountability

10/07/2014 On 8 July AQA held the first policy development workshop of our major project The future of assessment: 2025 and beyond. The theme of this event was ‘balancing assessment and accountability’.

This theme was intended to provoke discussion about the purpose of assessment, how it is currently used and how it might instead be used in the future. Additionally, we wanted delegates to consider accountability in England, and what would be the ideal system if we were free from any current constraints.

Tom Sherrington, Headteacher at King Edward VI Grammar School in Chelmsford, was the first speaker. He talked about the difficulty in comparing similar grades across different subjects, and advocated a move to a baccalaureate model to give an equal standing to different types of qualification.

Next, Laura Dougan and Margaret Miller from the Scottish Qualifications Authority (SQA) spoke and explained the learner-centric assessment system in Scotland. They discussed some of the recent changes to assessment in Scotland, including the greater involvement of teachers and the increasing amount of internal assessment, as well as the new Insight online benchmarking tool which will be used by schools to identify areas for improvement.

Invited guests from across the education sector then discussed potential policy objectives in small groups, joined by our speakers. Each group debated the points raised by the speakers as well as contributing their own hopes and expectations for the next 10-15 years in assessment and school accountability.

Seminar One

Most of the groups spoke about a ‘triangle’ of validity, reliability and accountability, and the struggle of balancing all three of these. Some key desires which emerged were for Ofsted to be more collaborative with schools and teachers and not to focus so much on data; for assessment to follow curriculum design and not the other way around; and to see a separation of assessment and accountability.

Our main aim was to reach a consensus on the three most important issues relating to the balance of assessment and accountability in schools, and the three points below were all discussed at length by everyone at the event.

The groups all agreed that they would like teachers to be more involved in assessment, with some delegates suggesting there should be greater professionalisation of teachers as assessors, as in other countries such as Germany. This is linked to a desire for greater trust in teachers to teach and assess their students effectively.

One theme which emerged was the hope that accountability would move away from being based on exam results, with some agreeing with Tom’s idea of a baccalaureate-style qualification, and others thinking that expanding accountability to include destination data would be a more appropriate measure.

The groups also decided accountability should be about improvement and progress, with one group suggesting schools should be held accountable to their own set of broad aims, which could include assessment but, crucially, other measures as well.

The event was the first of a series of three events to be held over the summer and autumn. The next two events are on the themes of Assessment in Other Sectors, and 21st Century Assessment. If you wish to attend either of these events and contribute to the project, further details can be found on this blog post.

Our thanks go to the speakers and guests who attended.

Phase 2: Developing Policy

20/06/2014 Over the last few months, AQA has asked UK and international experts, teachers, academics, policymakers and employers what they think the future of assessment could and should look like over the next 10-15 years. This collaborative project has so far taken the form of blogs and videos by key thinkers in the sector, as well as roundtable events and debates involving teachers and school leaders.

From these “blue skies” discussions, we have identified three themes which were repeatedly raised as being the most important areas for development and reform over the next 10-15 years. The second phase of the project will be a series of three policy development workshops. Teachers, policymakers and other stakeholders are invited to join us to discuss each of the themes and formulate policy objectives in each area.

The three areas we will be focusing on are:

Balancing assessment and accountability – Tuesday 8th July
How can we achieve good assessment and good accountability at the same time?

Speakers:
Tom Sherrington, Headteacher, King Edward VI Grammar School, Chelmsford
Margaret Miller and Laura Dougan, Policy Managers, Scottish Qualifications Authority

Assessment for the real world – Wednesday 10th September
How can vocational education and training be reliably and validly assessed?

Speakers
Professor Prue Huddleston, Fellow and formerly Director of the Centre for Education and Industry, University of Warwick
Professor Jen Cleland, John Simpson Chair of Medical Education, University of Aberdeen

21st century assessment – Wednesday 15th October
How can technology make assessment more relevant, practical and valid?

Speakers:
Professor Angela McFarlane, Assessment Co-Chair, Education Technology Action Group
Lisa Gray, Assessment and Feedback Programme Manager, Jisc

We will invite expert speakers to provoke debate, and the events will be open and interactive, with everyone having a chance to discuss their thoughts and develop more detailed thinking in smaller groups. The aim of these events is to create three long-term, achievable, policy objectives for each of the themes, which we’ll publish here following the workshops.

The three workshops will be held from 6pm to 8.30pm in central London, and a buffet supper will be provided. If you would like to register interest in attending please email jwilson@aqa.org.uk stating which workshop(s) you wish to participate in. We look forward to seeing you there – you can also follow the debate at www.aqa.org.uk/assessment2025 and on Twitter #assessment2025.

Can and should young people play a role in designing assessments?

09/06/2014 Professor Jannette Elwood, Professor of Education, Queen’s University Belfast

At the time of writing this blog, students are once again in the thick of the examination season, having to navigate one of the most pressurized times in their educational lives. So while they are sitting up to two examinations a day, and maybe four or five a week, for the next while, I think it is a very apt time to consider whether they can and should play a role in designing the assessments that they sit.

My short answer to the above question is yes! Young people can and should play a role, not only in how assessments are designed but also in how policy around assessment change is debated, consulted upon and final decisions made.

My answer is based on two factors. First, under international legislation – the UN Convention on the Rights of the Child (UNCRC), to which the government of the day is a signatory – children and young people are classified as rights holders, and as such are recognized as being entitled to engage in processes that affect them directly, including the development of policies and services (in this instance educational ones) through research and consultation (Elwood and Lundy 2010). Second, they are key stakeholders in any future assessment system and, as such, are as important as any other group (assessment experts, teachers, head teachers, parents, employers, government) in having a say about what assessment and qualifications systems will look like and entail.

We know that at present the landscape of qualification reform in England, Wales and Northern Ireland is one of uncertainty and fragmentation. What is clear is that proposed changes to GCSEs and A levels across the nations of the UK will have major (and differential) ramifications for young people and their future educational and employment careers. Likewise, we are seeing major changes to assessment systems for lower secondary and primary school children, and have yet to see the full impact of international testing regimes (such as PISA) on the educational experiences of children across all ages. Yet it is also clear that there has been a limited history of children’s and young people’s participation in assessment generally, and virtually none within policy formation or qualification development. Their input is decidedly missing from any meaningful engagement with the current round of proposals and decisions concerning qualifications reform. This, I would argue, is a significant missed opportunity to see the bigger picture – the full extent of assessment policy change and its impact.

How do we know that young people have something worthwhile to tell us about assessment reform? Well, we are beginning to know a great deal more about what they think about education generally, and national research that has engaged with young people on these issues (Elwood 2012 and 2013) is telling us what they think not only about qualification reform, but also about not being consulted on such significant and high-level debates. This lack of participation in decision-making begins to make young people feel more like victims of assessment policy reform than beneficiaries.

So what do they have to say? In talking to nearly 250 young people from across England (Elwood 2012), they told us that, yes, examinations do dominate their lives in school, but that they enjoy them if they are well prepared, and they know that they won’t get far without good grades. They also thought that examinations structured through modules (and re-sits) allow mistakes to be put right and take the stress out of having to do everything in one sitting, and that it is only fair to all young people to have a mixture of examinations and coursework – ‘we don’t all like the same things.’ However, they felt there was too much confusion about the ‘worth’ of all the different types of qualification on offer, and that grades were being devalued – with the A* taking over as the marker of excellence, which not everyone could achieve. They also felt insulted by the annual circus of ‘standards are falling’ debates, feeling that their achievements were being degraded; obtaining good grades in whatever qualifications they sit is tough and not getting any easier. And they wanted to know why changes to examinations are introduced ‘live’, i.e. into their own examination sessions, where their future success might be ‘messed up’ if the changes haven’t been piloted in advance; such changes can have a considerable impact on their final grades, and that is too high a price to pay.

Thus young people have a lot to tell us about both the positive and negative impact of assessments on their lives – we should be listening to them more on these matters. So how might policymakers and assessment developers engage with young people more effectively? There are a number of ways in which this can happen, such as: focused policy briefings with education officials and young people in order to obtain input into current debates; examination boards actively setting up panels of young people so that their views can be fed directly into assessment design and implementation; and consultation strategies involving social media that speak directly to students to gauge their opinions. Good practice, however – and especially rights-based approaches – would mean that we ask young people what they think is the most effective way to engage with them directly, and then change our practice accordingly.

Including young people in decision-making about the future of assessment design or implementation will not be easy; nor will it provide the answer to all the problems that they and we, as assessment professionals, face. But they are authoritative on these matters, just like us, and have a great deal to offer in terms of how they see the world, what they see as salient in their education, what will be of benefit to them and how such future assessment systems could and should be developed with a consideration of their best interests firmly centre-stage.

References

Elwood J and Lundy L (2010) ‘Revisioning assessment through a children’s rights approach: implications for policy, process and practice’, Research Papers in Education, 25(3), 335–353. http://www.tandfonline.com/doi/abs/10.1080/.U42eghb9Mtg

Elwood J (2013) ‘The role(s) of student voice in 14-19 education policy reform: reflections from students on what they are, and what they are not, consulted about’, London Review of Education, 11(2), 97–111. http://www.tandfonline.com/doi/abs/10.1080/.U42exRb9Mtg

Elwood J (2012) ‘Qualifications, examinations and assessment: perspectives and views of students in the 14-19 phase on policy and practice’, Cambridge Journal of Education, 42(4), 497–512. http://www.tandfonline.com/doi/abs/10.1080/.U42eUhb9Mtg

Can we have better descriptions of performance in our examinations?

18/03/2014 Dr Chris Wheadon, Director, No More Marking Ltd.

20… 19… 18… As the British Military fitness instructor counted down my press-ups from 20 to 1, I wondered why on earth I was doing press-ups in the mud early one Saturday morning. His answer was simple: to get better at doing press-ups. Then he gave me another 20 for being cheeky.

We’ve all had this experience as teachers. ‘Why do we have to do this, sir? Is it in the test?’ If the answer is yes, then you can get them to do it sitting upside down in a swimming pool, if that’s how the test will be run. Because by doing whatever it is, they will get better at it, and will do better in the test. We do press-ups to get better at doing press-ups. We do tests to get better at doing tests.

Some users of tests, however, seem to want to know more than our final score or grade or, in my case, the number of press-ups I can do. What does a maths test mean in terms of how good I am at mathematics? Or at doing the calculations I need to design a bridge that won’t collapse? In my case, what does my ability to do press-ups mean in terms of my ability to haul shopping bags from supermarket to home? Already, at this point in my thinking, I can hear my fitness instructor: ‘If you want to get better at hauling shopping bags, start hauling shopping bags…’

International and national assessments tend to begin with grand statements that set up certain expectations. It seems they all aspire to tell us what students know and can do. PISA, the NAEP, GCSEs – all of them aspire to this. Well, that is great news! As an employer, can you tell me at what PISA score students will be able to write a letter to a customer inviting him to participate in a research study? As I skip through the first of PISA’s latest 555-page reports, I quickly realise I’m in trouble. PISA appears to be able to tell me about mathematics and reading, but not about writing. Not to mention the fact that kids don’t get a score, they get a ‘plausible value’.

Undeterred, I turn to the GCSE. Within 5 minutes I have pulled up the grade descriptors for AQA’s GCSE English grade C:

Candidates’ writing shows successful adaptation of form and style to different tasks and for various purposes. They use a range of sentence structures and varied vocabulary to create different effects and engage the reader’s interest. Paragraphing is used effectively to make the sequence of events or development of ideas coherent and clear to the reader. Sentence structures are varied and sometimes bold; punctuation and spelling are accurate.

Wow! And that is just at grade C! Adaptation! Engaging writing! Bold sentence structure! Imagine what you get at grade A! So if I employ a young person with a grade C in English, can I really expect accurate spelling and punctuation? Before you dismiss such descriptions as some form of dumbing-down conspiracy, ask any English teachers you know. A good teacher will confidently reel off the assessment objectives of the GCSE and A-level syllabuses, and will tell you the relative strengths and weaknesses of a piece of writing in terms of those assessment objectives, and in terms of the grades you can expect. At some point they will say: the punctuation and spelling are not grade C level…

So, given the wealth of descriptive data we have around examinations – the criteria, the levels, the grade descriptors, the domains, the constructs – do we really need any more detail? Personally, I would add very little. Firstly, returning to the grade descriptor, I would simply change the first sentence:

Candidates’ writing against the GCSE English task we set them under examination conditions shows successful adaptation of form and style to different tasks and for various purposes.

I would add this qualification because I know that the candidates will have been prepared for the task in hand and will have memorised strategies, mark schemes and model answers, and there is little that can be done to extend how far we can generalise from their answers. Unfortunately, however good we get at tests, our performance is limited to some extent by the conditions under which those tests are taken.

Secondly, I would make some examination scripts available to all after the examinations. I have a feeling that we may disagree about what constitutes bold and engaging writing. Disagreement on standards is a healthy and necessary debate, which is made healthier by the presence of a good sample of evidence.

Anyway, I must get back to my press-ups. I’ve got a test coming up. Of press-ups. Just don’t ask me to haul any shopping bags, that’s not what I’m working on right now.

Dr Chris Wheadon is Director of No More Marking Ltd. (www.nomoremarking.com)

Could technology render external assessment irrelevant?

18/02/2014 John Ingram, Managing Director, RM Assessment & Data

“If I had asked people what they wanted, they would have said faster horses.”  So, reputedly, said Henry Ford on the topic of innovation. Regardless of the quote’s authenticity, it’s a useful reminder to step outside the norm from time to time and wonder what a bolt from the blue would do to our day-to-day existence.

Technology has already streamlined our assessment processes. According to Ofqual, onscreen marking is now the main type of marking for general qualifications in the UK. Onscreen marking involves scanning exam papers and digitally distributing them to examiners to mark using specialist software. In 2012, 66% of nearly 16 million exam scripts were marked this way in England, Wales and Northern Ireland. Onscreen marking is also gaining in popularity in other territories: RM’s onscreen marking system has been used by awarding organisations in Eastern Europe, North America, Asia and Australasia.

As well as reducing the time and risk involved in transporting exam papers to and fro, onscreen marking improves reliability by automatically adding up the marks. Teams of examiners can be monitored in real time, with the system stopping under-performing markers from marking further questions.

On the whole, however, onscreen marking is just a smarter way of assessing hand-written exams. The fact that it can also be used to mark computer-based tests, coursework and audio-visual files is becoming less relevant in a country such as England where the emphasis is on linear assessment and paper-based exams, at least where school exams are concerned.

Let’s call onscreen marking of exams ‘faster horses’, then; it’s better than marking by hand but it doesn’t revolutionise the way we evaluate learning. So what’s the ‘motorcar’? Tests taken on computer? Countries such as Denmark and Norway have introduced computer-based testing for national exams. The next round of PISA tests in 2015 will be taken on computer. Moving from paper to computers does feel like progress – until you look around you.

The world has moved on to tablets, smartphones and – those clunky phrases – the ‘internet of things’ and ‘the internet of customers’. Which could mean that while we polish our current system to its highest possible sparkle, waiting in the wings is a disruptor which will render it irrelevant.

It’s perhaps natural that in education, where the stakes are so high, there can be fear of technology. There’s a worry that hi-tech can mean low quality – quicker, shorter, and more superficial assessment. But that needn’t be the case.

We’re already seeing glimmers of new ways of experiencing and demonstrating learning. Open badges add context to academic achievement. MOOCs offer access to expertise from all over the world. There will always be a place for face-to-face teaching and core subjects, but the way we learn is becoming broader, more granular, more accessible. With digitisation comes the expectation of immediacy: on-demand exams, instant results, instant certificates to share online.

For education to exploit technology for our children’s benefit, we need to learn from other fields. So far this year we’ve seen babygrows that monitor temperature and breathing. Contact lenses that measure glucose levels. Even toothbrushes that tell tales to your dentist when you’ve been less than thorough. It isn’t too much of a stretch to imagine multiple data streams which continually monitor a student’s development and trigger a feedback loop to help them gain the required level of attainment. Meaning a one-off, external exam is rendered unnecessary. Will it happen by 2025?  To answer that with any certainty I’d need to ditch my smartphone and dig out the crystal ball.

What can we learn from other uses of technology like flight simulators?

28/01/2014 Gareth Mills, Trustee, Futurelab, and Member, 21st Century Learning Alliance

Technology enhances human capability. It always has done. The telescope allowed us to see further and the microscope helped us to look closer. Coupled with our incredible human capacity to imagine, technological tools have helped to unlock the wonders of the universe and the secrets of our genetic make-up. The history of mankind is a story of ingenuity in the use of tools to solve problems and create new possibilities.

It is surprising, given the transformations seen in many other professions, that so little of genuine significance has been done to exploit technology in the field of educational assessment. What has happened is the automation of many of the easy-to-automate processes of traditional assessment. This includes the marking of multiple-choice questions and the crunching and analysis of big data. The application of technology has tended to serve the needs of administrative efficiency rather than trigger genuine transformation.

Without undermining what has been achieved to date we might, by 2025, seek to harness technology to do more significant things.

So how might we use technology more imaginatively to see further and look closer? Let’s consider just three examples.

Even traditionalists tend to agree that sitting students in a hall to take pencil and paper tests is, at best, a proxy for something else we value much more. Whether students head for university or the world of work, employers and lecturers will value their capacity to manage themselves, show initiative, undertake research, think critically and creatively, work collaboratively and have good interpersonal skills. Employers also say that they look for qualities such as determination, optimism and emotional intelligence alongside competency in literacy and numeracy.

Modern conceptions of competency for future success in life include a wider set of attributes than can generally be found in the mark schemes of most GCSEs. Being fit for the future goes way beyond what can be captured adequately within three hours in an exam hall.

By 2025, one thing we should have explored is the use of scenarios and immersive environments in assessment. No doubt, some traditionalists will baulk at the suggestion; however, most of us feel reassured that the pilot flying our holiday jet has made good use of a flight simulator.  It is reassuring to know that the person at the controls has learned about the handling characteristics of the aircraft, practised how to deal with unusual weather conditions or mechanical failures and rehearsed landing at the world’s most difficult airports in a virtual environment. Immersive environments help to strengthen the authenticity of learning, they are dynamic enough to respond to the user and are able to test capability in many different contexts.

In medicine, the military and the health and safety industries we are seeing a growth in the use of virtual environments to support learning. We can find examples in education too; however, nothing has yet made it into the mainstream or challenged the hegemony of traditional tests.

Is it too far-fetched to imagine that by 2025 educational assessment might be making use of rich on-screen scenarios to support learning and assessment? Shouldn’t we be using our ingenuity to make assessment more authentic, dynamic and contextually situated? As I write, however, policymakers seem to be marching in the opposite direction.

By 2025 we should also have made significant progress in the use of existing technology in assessment situations. How about, for example, the use of internet-enabled laptops in the exam hall? In Denmark they were piloting such initiatives years ago. With a set of challenging tasks and tracking software, the skills of searching, selection, synthesis, analysis, argument and presentation can all be evaluated alongside the application of knowledge. Such an approach would better reflect the way many will be expected to work in real life. We use tools not to cheat, but as a way to increase our capacity for critical and creative thought.

By 2025 we will also have taken some technology-enabled assessments to scale. When and how did you take the theory section of your driving test? Since the early 2000s, candidates have taken an online test and a screen-based hazard perception test involving video clips and touch-sensitive surfaces. Of course, a hands-on practical driving test is also required before successful candidates are let loose on the roads. It seems like a well-balanced assessment to me – knowledge recall, perception testing and practical applied skills. Importantly, no one feels cheated even though candidates do not all sit the same online test or drive along the same roads on the same day.

Perhaps in 2025 we might have more well-balanced, when-ready assessments rather than the set-piece, once-a-year, no-re-sits culture that drives assessment at the moment. If we can get technology-enabled assessment to scale in an arena as important as driving, why not in others?

Despite media reports to the contrary, the UK has for many years been highly regarded for the quality of its public education and is, consequently, a major exporter of educational services and assessments. I fear that by allowing our system to ossify, by not keeping pace with innovation, we are in danger of missing a golden opportunity. As a country we need to be investing far more in R&D and in developing new products and services to support high-quality learning and assessment. We should seek to become the ‘Silicon Valley’ of technology-enabled learning.

Technology itself, of course, is not a silver bullet. Like all tools it is neutral. We can use a hammer to build or destroy. It is how we choose to use the tool that matters. We need to be at the leading edge in nurturing young people to develop the capacities they will need to flourish in life and work in the future. One way to do this will be through the use of technology coupled with, of course, that enduring human attribute… ingenuity.