Outcomes of the feedback hub solution development

Moving the management of assessment online has opened up new possibilities and expectations around the role of assessment in learning. One of these is the use of holistic feedback; feedback that comes from across the years of a degree programme, and not just the modules a learner is enrolled in at the moment.

History: the solution workshops

You could call a system feature that offers such functionality a ‘feedback hub’, and when Jisc worked with the sector to identify unmet needs in the electronic management of assessment (EMA) area, it was clear that this was one of them. With a feedback hub, it becomes possible for students, their tutors and other educators to identify strengths and weaknesses in their learning, and to focus feedback on them over more than one assessment cycle.

The features of feedback hubs

In order to provide holistic feedback, feedback hubs need a particular set of features. Over the course of two workshops, colleagues from a range of providers drew up a candidate set. Following a further round of whittling and prioritising, the following ten features emerged, ranging from crucial to nice-to-have:

  • aggregate feedback from all modules/years and systems
  • aggregate grades as well as feedback
  • provide different views for staff and students
  • facilitate dialogue about the feedback
  • facilitate dialogue about the feedback in stages (after the student has responded to questions, for example)
  • provide different views structured by programme-wide learning outcomes
  • add feedback manually at any time
  • provide access to comprehensive feedback offline
  • allow students to rate feedback
  • integrate with learning analytics solutions
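
To make the first few of these features concrete, here is a minimal sketch of what aggregating feedback might look like in code. All names (the `FeedbackItem` fields, the `holistic_view` function) are hypothetical illustrations, not any vendor's actual data model:

```python
from collections import defaultdict
from dataclasses import dataclass
from typing import Optional

@dataclass
class FeedbackItem:
    """One piece of feedback, wherever it originated (hypothetical model)."""
    student_id: str
    module: str
    year: int
    source_system: str    # e.g. "VLE", "Turnitin", "student record system"
    outcome: str          # programme-wide learning outcome it maps to
    comment: str
    grade: Optional[str] = None   # grades can be aggregated alongside feedback

def holistic_view(items):
    """Group feedback per student, then per programme-wide outcome,
    regardless of which module, year or system it came from."""
    view = defaultdict(lambda: defaultdict(list))
    for item in sorted(items, key=lambda i: i.year):
        view[item.student_id][item.outcome].append(item)
    return view

items = [
    FeedbackItem("s1", "HIS101", 1, "VLE", "critical analysis",
                 "Cite primary sources", "58"),
    FeedbackItem("s1", "HIS205", 2, "Turnitin", "critical analysis",
                 "Much stronger sourcing this time", "67"),
    FeedbackItem("s1", "HIS210", 2, "VLE", "academic writing",
                 "Watch paragraph structure"),
]

for outcome, feedback in holistic_view(items)["s1"].items():
    print(outcome, "->", [f.comment for f in feedback])
```

The point of the sketch is the grouping: once feedback from every module, year and system sits in one structure keyed by programme-wide outcome, both staff and student views, dialogue, and learning analytics integrations become feasible on top of it.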

The first defining feature, the need to aggregate feedback from across different systems, is at least as important as the ability to aggregate over time, and probably more difficult to realise. Some institutions manage to do all assessment in one system, but the majority use at least a VLE and an assessment service such as Turnitin, and usually a student record system as well. Some also have an assessment management system such as Co-Tutor in the mix.

Current state of development

Feedback hubs are a relatively new phenomenon, and, as explored in an earlier post, there’s quite a variety of systems that provide the functionality as a result. Progress has been remarkable in the course of a year, and users of systems such as the MyFeedback Moodle plugin, Pebblepad’s Flourish or the Brightspace VLE can start developing holistic feedback practice this academic year.

Others are not far behind, though much still depends on system integration. Fortunately, the IMS Assignment work that was also reported on earlier should help make it much easier to exchange assignments and information about assignments.

The potentially trickier question is how to develop practices for using feedback hubs as part of business-as-usual assessment processes. The issue is a classic catch-22: vendors would like to know what users really use, but users don’t know what they need until they’ve tried it. Nonetheless, with the first institutions going live this year and next, we should be able to learn from practical experience soon.

Possible interventions

We explored in detail the potential options for Jisc to add value in this
space. These can be summarised as:

  • Addressing ease of access to feedback. The main technical barrier identified by vendors and institutions alike was the difficulty of gathering assignments and feedback from the range of systems where feedback is typically held. To this end, one option would be to work with the recently established IMS LTI Assignment Taskforce, which is developing a common specification that should ease the movement of assignments, or information about assignments, from one system to another.
  • Partnering with existing vendors on software development, or exploring the development of new tools.
  • Supporting a forum where specialists from learning providers and vendors could come together to share early priorities and findings: to guide software development at first, and good practice later. Such a forum could facilitate the development of software and practice in tandem, break through the requirements catch-22 by speeding up the development of common holistic feedback practice, and provide a means of testing software and software integrations in a variety of contexts.
  • Bootstrapping feedback hub functionality by organising and co-sponsoring crowdfunded features.

Outcome

After a full review of the options, it was agreed to focus on option 1 – to address the key issue of providing easier access to feedback from across systems. We will look at the models of marking that have already been developed, and will specify them at a detailed data model and protocol level. These will be available for anyone to take up and use, and will be shared with the IMS LTI Assignment Taskforce. That means that assessment workflows that are common in the UK will inform the design of the protocol that systems such as VLEs and assessment services use to integrate with each other. At the same time, the vendors in the IMS group will get a detailed and relevant view of real-life use cases to guide the design of the new specification.

What this also means is that assessment processes can be examined from the high level of the assessment life cycle, to the 10 step EMA process, to the particular workflows of the 5 models of marking, down to data model and protocol level choreographies of which bits of data get exchanged between systems and in what order. We’ll have to see how many of the marking models can be covered in this way, because each of them sums up quite a lot of variability. The aim of the detailed models is to inform not just the IMS work, but also system developers, as well as those who need to configure, integrate and maintain these systems in institutions.
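
To illustrate what a protocol level choreography means in practice, here is a deliberately simplified sketch of the ordered exchange between a VLE and an assessment service for a single assignment. The message names and fields are invented for illustration; they are not taken from the IMS specification:

```python
import json

# Hypothetical choreography: which bits of data get exchanged between a
# VLE and an assessment service, and in what order, for one assignment.
# Message names and fields are illustrative only, not the IMS spec.
choreography = [
    ("VLE -> assessment service", {"msg": "create_assignment",
                                   "assignment_id": "a42",
                                   "due": "2016-05-01"}),
    ("assessment service -> VLE", {"msg": "assignment_created",
                                   "assignment_id": "a42"}),
    ("VLE -> assessment service", {"msg": "submit",
                                   "assignment_id": "a42",
                                   "student_id": "s1"}),
    ("assessment service -> VLE", {"msg": "result",
                                   "assignment_id": "a42",
                                   "student_id": "s1",
                                   "grade": "62",
                                   "feedback": "Good structure; expand the analysis."}),
]

for direction, payload in choreography:
    print(direction, json.dumps(payload))
```

Specifying the marking models at this level of detail is what lets two independently developed systems interoperate: each side knows exactly which message to expect next and which fields it must carry.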

In addition, we will take the feedback hub features identified above and bundle them into a general EMA system requirements template that vendors can use to describe the capabilities of their products (which is already in train as part of the wider EMA project). That way, we can surface how systems are currently meeting these demands for a holistic view of feedback in the context of wider EMA capabilities.

Due to the rapid development of feedback hub software, developing new tools, or partnering with existing vendors on tool development, wasn’t seen as a viable option. It was also felt that developing a community of practice, although potentially useful, wasn’t something Jisc is currently best placed to take forward.

We are very much looking forward to seeing how the development and use of these tools in universities and colleges emerges and grows over the coming months and years. Please get in touch to share your stories.

List of hubs

As part of the feedback hub research, we’ve had a look at the following systems:

D2L Brightspace
Instructure Canvas
MyFeedback Moodle plugin
University of York BB plugin
The Bedford College Moodle Grade Tracker
Turnitin
PebblePad Flourish
Co-Tutor
Edinburgh College of Art assessment and feedback system
University of Portsmouth Moodle – TII integration
Anglia Ruskin University Sharepoint – TII integration

We’re particularly grateful to these providers and the wider community for their contributions to the research.