
1. Specifying

Specifying the details of a significant course or programme of study (and consequently specifying the assessment strategy within it) is done infrequently. For new courses there may be a considerable time lag (of 1-2 years) between course validation and initial delivery. Often this means that the course is delivered by new staff who have little ownership of the original design, and changes are inevitable. Once a course has passed validation, significant review might take place as infrequently as once every six years. There are formal quality processes to manage changes in the interim, but staff often find these processes so arduous that they find ways to implement change ‘under the radar’ of the formal minor modifications processes. All of these factors, coupled with the amount of information that is still paper-based in many institutions, mean that it can be difficult to generate accurate information that flows right through the life-cycle and is readily reusable for a variety of different purposes and stakeholders. These issues were investigated in the Jisc Curriculum Design programme and the outcomes also fed into the Jisc infoKit on Managing Course Information (see particularly the section ‘Why is managing course information difficult?’). Clarity at the specification stage is extremely important in ensuring that actual assessment practice really does assess against the desired learning outcomes.

Participants in the Think Tank noted that, in both stages 1 and 2, there is a need to support more creative pedagogic thinking if we are not to keep going round the life-cycle in a very traditional and formulaic way. The Jisc Curriculum Design programme noted the intuitive and iterative nature of learning design and the fact that many of the most significant design decisions take place in the ‘gaps’ in the formal process (i.e. the periods between formal review points), and that there is a need to find ways of recording and sharing this thinking. We need to be able to show that investment in better learning design means that students need less support later on (effective assignment briefs, well-understood marking rubrics, formative opportunities and peer review can all contribute to better self-directed learning). There is also a need for constructive alignment to ensure that the assessment tasks clearly enable the learning outcomes to be demonstrated.

‘Trying to change the culture of moving feedback earlier in the learning cycle is a key challenge.’

The specifying stage of the life-cycle causes a different set of problems in FE due to the complexity of awarding body criteria for assessing against particular learning outcomes and the frequency with which the specifications can change.

‘The volatility of assessment schemes for qualifications causes problems for developers of technical solutions for managing assessment tracking and award prediction.’

Encouraging the use of a broader range of assessment types is seen by many institutions as an important means of enhancing learning and teaching practice. In this context e-submission, despite its many advantages, was seen by some as a double-edged sword: examples were cited where academics were constrained by the limited range of file types that lend themselves to online submission, feedback and marking, and in some cases the introduction of e-submission had resulted in regression to a more conservative range of assessment types. It was noted, however, that creativity increased again once the range of acceptable file types within the Turnitin system was expanded.

‘Academic staff, understandably, don’t want their assessments to be driven entirely by what the technology can offer, but want the technology to be able to respond to the assessment requirements.’

Cultural factors also come into play: the likelihood of eliciting disapproval from external examiners has been cited as a reason for risk aversion in the setting of assignments. Others have made the point that curriculum (including assessment) design is the responsibility of the awarding institution and that external examiners have the right to challenge how the methodology is implemented, but not the methodology itself. Risk aversion in relation to assessment practice is a general issue, but one that seems to be exacerbated rather than alleviated by EMA.

Keele University undertook a project to support more innovative assessment practice, and the outcomes of 20 different innovation projects are evaluated on the STAF project website.

There is a discussion below on ensuring fairness (including the elimination of any unconscious bias) in the marking process, but there is also a need to take account of the fact that differences in marks correlated with factors such as gender might relate as much to assessment design as to the actual marking process.

MMU guidance on Specifying.