
Online exams: migration or transformation?

Our latest blog post is a guest contribution by Stuart Allen who has just completed an MSc in Digital Education at the University of Edinburgh. Stuart has been undertaking research into online exams. This is a short reflection on his research and Stuart will be joining us for a webinar on the topic in September so watch this space for further details.

 

I’ve been interested in exams and how they might relate to learning ever since my undergraduate days, when my degree was decided by nine finals in the space of ten days. (I still have nightmares about them…)

So when I was thinking of a topic for my MSc in Digital Education dissertation research, I wondered how useful digital technologies might be in final, high-stakes exams. As I read more, I discovered that the published literature on the specifically educational (as opposed to administrative or technical) implications of online exams was actually very small. (Myyry and Joutsenvirta (2015) is an interesting place to start.)

I found that the use of online exams often follows one of two main approaches:

  • migration (transposing traditional exams to digital environments in order to achieve organisational gains, e.g. improved efficiency), and
  • transformation (using digital technologies as a catalyst to redefine summative assessment and align it with the perceived needs of contemporary students).

My main focus was on how the migration and transformation approaches translated into educational practice in particular contexts. I interviewed eight higher-education staff involved in designing, developing and delivering online exams across four countries. They talked at length about their experiences, beliefs, aspirations and frustrations.

Instead of finding one approach to be better than the other, I concluded that both the migration and transformation approaches had significant shortcomings. The migration view seems to assume that online exam environments are instruments that we can use to achieve pre-ordained aims (such as improved efficiency); however, in my interviews I found examples of technologies interacting with, and having significant implications for, educational practice. The sociomaterial perspective was very useful here (see Bayne 2015 and Hannon 2013).

I also found the transformation view to be problematic in its own ways. For instance, I began to question the validity of claims that online exams are a logical response to society’s changing needs, and to suggest that a more detailed understanding is required of the ways in which online exams might be qualitatively different to traditional exams.

Moreover, I discovered a potentially hazardous assumption that traditional exams could be migrated online (or be ‘a little bit digitalised’, to borrow one interviewee’s expression) as a prelude to more ambitious and educationally motivated changes further down the line. This transition appears not to be as straightforward as some might believe, and the migration stage often requires practitioners to overcome challenges that are unexpectedly time-consuming and financially draining.

One of the things I found most interesting was the apparent strength of some university professionals’ conviction that online exams must comply with exactly the same conditions – in terms of invigilation, the types of questions asked and candidates’ access to course materials, notes etc – as traditional pen-and-paper tests. To a large extent these assumptions set the tone for how the participants in my research used online exams.

With this in mind, I produced a number of questions that practitioners working with online exams might wish to consider:

  • In your institution, what motivations exist for pursuing online exams, understood particularly in terms of how educational goals are defined at institutional and programme-specific levels?
  • What assumptions are being made about what is meant by an ‘online exam’ within your context, and what can be done to support a constructive dialogue around these?
  • To what extent does the dialogue between educational practice and the material contexts of particular digital environments result in online exams that are qualitatively different from traditional tests? For example, do online exams actively support, alter or proscribe particular types of student responses?
  • In what ways might online exams be used to support increased assessment authenticity, in terms of both the context and content of examination tasks?

Lastly, I’d argue that the term ‘online exam’ itself – and all the assumptions about technology, education and assessment that seem to underpin it – might constrain the potential for developing practice to an unacceptable degree (see Gillespie 2010). Do we need to invent a new term to describe the summative assessment activities of the future? If so, what might that term be?

 

Recommended reading

Bayne S. (2015) ‘What’s the matter with “technology-enhanced learning”?’, Learning, Media and Technology, 40 (1), pp. 5–20.

Gillespie T. (2010) ‘The politics of “platforms”’, New Media and Society, 12 (3), pp. 347–364.

Hannon J. (2013) ‘Incommensurate practices: sociomaterial entanglements of learning technology implementation’, Journal of Computer Assisted Learning, 29, pp. 168–178.

Myyry L. and Joutsenvirta T. (2015) ‘Open-book, open-web online examinations: developing examination practices to support university students’ learning and self-efficacy’, Active Learning in Higher Education, 16 (2), pp. 119–132.

 

Link to dissertation abstract: https://stuartallanblog.wordpress.com/dissertation-abstract/

Find me on Twitter: https://twitter.com/GeeWhizzTime