
Assessment & Feedback – from reluctance to emotional response

At the recent JISC Assessment & Feedback programme meeting, I ran a session with the Strand B projects in which we revisited some questions we first discussed a year ago. Thus, instead of ‘What do you want to happen as a result of your project’s assessment and feedback innovation?’ we talked about what has happened. And, rather than ‘How will you know the intended outcomes have been achieved?’ we discussed the indicators and evidence that projects have actually gathered over the last year. These are particularly relevant given that Strand B projects are all about Evidence and Evaluation of assessment and feedback related innovations.

The questions were really just to get us started, although the Strand B project teams are such a keen group they didn’t need much encouragement! In fact, we had a very open discussion, and what emerged were some of the issues and benefits of evaluating large-scale changes in assessment and feedback using technology, as well as some interesting findings.

All the project teams want to gather a balanced view of the changes being implemented within their institutions, but many had issues with collecting data from ‘reluctant users’. In other words, individuals who are reluctant to use a given technology can also be difficult to involve in the evaluation process. This is by no means unique to this context, or to evaluation. Indeed, some projects found that reluctant users also tended to be less likely to take up training opportunities, something that might only be picked up later, when difficulties with using the technology arose. This really reinforces Ros Smith’s reflections from the programme meeting on the need to open a dialogue with course teams, so that implementing these kinds of changes is as much about working with people and cultures as with technology. Being ready to capture the views of those experiencing difficulties, or offering a light-touch evaluation alternative for reluctant users, might provide a more balanced stakeholder perspective.

For some projects, the evaluation process itself had provided the push for lecturers to engage with online assessment and feedback tools. In one case, a lecturer who had previously noted that ‘my students don’t want me to use this approach’ took part in a focus group. During this, the lecturer heard directly from students that they did want to use online tools for assessment. Needless to say, the project team were delighted that the lecturer went on to trial the tools.

Effective training of staff was also picked up as essential, particularly because the way lecturers communicate the use of tools to students influences student uptake and use. This led on to discussions about the importance of training students, and how evaluation activity can help in understanding how well students interpret feedback — essentially, ensuring that students gain the most from the feedback process itself and do not struggle with the tools used to support it.

What surprised a number of projects was how the evaluations had picked up strong emotional reactions to assessment and feedback from both students and staff. There is a wider literature that looks at assessment as an ‘emotional practice’ (Steinberg, 2008), underpinned by studies into the links between learning identities, power and social relationships (such as the paper by Higgins, 2000). While the Strand B projects might not have set out to study emotional reactions, it seems there will be some interesting findings in this area.

The importance of relationships was also reflected in findings of a mismatch between students and lecturers in terms of the perceived intimacy of online versus hard-copy assessment and feedback. Staff felt closer to students, and more in dialogue with them, when marking hard copy: they wanted to sign or add a personal note to a physical piece of paper. Students, by contrast, felt more able to engage in a dialogue online, perhaps because this felt less intimidating.

During the meeting we also discussed the methods and tools projects have been using for their evaluations, but that will be the subject of another blog post.

*Amended from a post on the Inspire Research blog*
