16.1 Programmes will have in place and operate marking and moderation processes that ensure the reliability, consistency, and accuracy of marking, in line with the expectations set out in this section. Such processes may be organised at a school or faculty level.
16.2 The marking and moderation processes should be made available to students, for example in the programme handbook.
Marking practices
16.3 Single marking is where student work is marked by one individual based on a marking scheme. Individual assessments that are single marked must be moderated, subject to the exemptions set out in 16.5 below.
16.4 Double (or ‘second’) blind marking is the process by which an assessment is marked independently by two markers, who then agree a final mark (or marks). Neither marker is aware of the other’s assessment decision when formulating their own mark. Moderation is not required for work that is double marked, as double marking effectively takes the place of moderation. Double marking will normally take place for:
i. Dissertations and end-of-programme projects or equivalent (where a dissertation supervisor may act as an internal examiner only as part of a marking team);
ii. Where there are particular difficulties in applying moderation given the nature of the assessment (e.g. a live practical assessment that is not recorded);
iii. Work marked by non-academic staff (depending on their experience) or by inexperienced markers;
iv. Where this is required by a professional, statutory or regulatory body.
16.5 The practice of one marker seeing the marking of another marker (non-blind) is deemed to be a form of moderation.
16.6 Where there is more than one marker for a particular assessment task, schools should take steps to ensure consistency of marking. Programme-specific assessment criteria must be precise enough to ensure consistency of marking across candidates and markers, while remaining compatible with the proper exercise of academic judgement by individual markers.
16.7 Markers are encouraged to use pro formas in order to show how they have arrived at their decision. Comments provided on pro formas should help candidates, internal markers, moderators and external examiners to understand why a particular mark has been awarded. Schools should agree, in advance of the assessment, whether internal moderators see the pro formas / mark sheets completed by the first marker before or after they review a candidate’s work.
16.8 The School Education Director is responsible for overseeing the allocation of marking, and the forms of marking used in programmes within their School.
Benchmarking
16.9 Benchmarking is a process to promote consistent standards among multiple markers of a specific assessment. It should be used in appropriate cases prior to marking and moderation.
16.10 In large units it is common to have multiple markers for an assessment. In such cases, marks may become misaligned across markers even where each marker has been individually consistent. To encourage collective consistency and reduce the need for re-marking of scripts, benchmarking should be used as an important part of the overall quality assurance process.
16.11 A typical benchmarking exercise could involve all markers individually marking the same small selection of randomly chosen scripts (e.g. 5 scripts) and then agreeing how marks should be allocated against the marking criteria to inform marking of the remaining scripts. The number of scripts selected for such benchmarking will depend on the nature of the assessment. For example, where optional questions exist, it may be necessary to select a higher number of scripts than usual to ensure all questions are discussed in the benchmarking exercise.
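As a minimal illustration of such an exercise, the Python sketch below draws a random selection of scripts and, where optional questions exist, widens the selection until every question is represented. The function name, the questions_by_script mapping and the widening rule are illustrative assumptions rather than anything prescribed by this section.

    import random

    def select_benchmark_scripts(scripts, questions_by_script=None, base_sample=5, seed=None):
        # Draw a small random selection of scripts for a benchmarking exercise.
        rng = random.Random(seed)
        sample = rng.sample(list(scripts), min(base_sample, len(scripts)))

        if questions_by_script:
            # Where optional questions exist, widen the sample until every
            # question appears at least once, so that all questions can be
            # discussed in the benchmarking meeting (assumed coverage rule).
            covered = set().union(*(questions_by_script[s] for s in sample))
            all_questions = set().union(*questions_by_script.values())
            spare = [s for s in scripts if s not in sample]
            rng.shuffle(spare)
            while covered < all_questions and spare:
                extra = spare.pop()
                sample.append(extra)
                covered |= set(questions_by_script[extra])

        return sample

    # Example: five scripts chosen at random from twenty, widened until both
    # optional questions are represented.
    scripts = [f"script{i}" for i in range(1, 21)]
    questions = {s: {"Q1"} if i % 3 else {"Q2"} for i, s in enumerate(scripts, 1)}
    print(select_benchmark_scripts(scripts, questions, seed=1))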
16.12 Benchmarking should take place before marking, so it should be arranged as soon as possible after an assessment has taken place. It is good practice to organise benchmarking meetings as part of the marking allocation within a school.
Calibration
16.13 Calibration is a process to promote consistency of standards between institutions, units or academic years.
16.14 Some assessment types call for academics’ individual expert judgement. Internal calibration helps markers within and across programmes to develop a shared understanding of academic judgement across different assessments, units or academic years. The purpose of calibration is to enhance and share good academic practice amongst markers rather than to ensure consistent standards for a particular cohort of students.
16.15 Internal calibration exercises can take many forms but often involve a group of academics reviewing a small sample of anonymous student assessments before discussing the decision-making behind hypothetical marks and feedback. Unlike benchmarking, internal calibration exercises are not intended to agree a ‘correct’ mark or to prepare teams for marking particular assessments, nor are they best used to identify deviations from norms to be corrected. Rather, periodic internal calibration exercises help academics develop their individual judgement through knowledge of how other experts might approach a broadly similar scenario. In that sense, the use of internal calibration recognises that robust individual academic judgement arises from participation in a community of expert assessors who periodically reflect on their decision-making. Likewise, good practice in feedback is encouraged and facilitated by reflecting on how other experts have marked student work.
16.16 Faculties / schools should have processes in place that allow programme teams to develop a shared understanding of marking criteria and exercise their individual academic judgement with knowledge of how others might exercise that judgement in broadly similar scenarios.
Internal moderation
16.17 Summative assessment will normally be moderated. Exceptions are:
• assessments that contribute 10% or less to the unit mark;
• objective tests, such as multiple-choice questions.
16.18 The sample size for moderation should be adequate to provide assurance that the work has been properly marked across a range of student performance in the assessment for each marker. The following procedure is recommended to arrive at a representative sample:
a. sufficient standard ranges should be established across the marking scale from which the selection is to be made (for example, the ranges could consist of fail, third class, 2:2, 2:1 and first, or the descriptor categories on the 0-20 marking scale);
b. a sliding scale corresponding to the number of assessments available for moderation should be employed; as a guide, a minimum of eight assessments or 10% of the available assessments, whichever is greater, should be included in the sample (a minimal sketch of this calculation follows this clause). The sliding scale should then be adjusted according to:
i. the number of scripts available, so that the sampled proportion reduces as the number of available scripts rises; and
ii. the number of first markers for an assessment or component part of an assessment; the higher the number of first markers, the more assessments are moderated (to ensure adequate moderation across all markers).
c. Where the number of submitted pieces of assessment for the unit is seven or fewer, all the assessments should be subject to internal moderation.
The internal moderation of assessments that do not generate a numerical grade (i.e. pass/fail assessments) should focus upon those at the pass/fail border.
The marks of assessments that significantly contribute to determining progression within a programme or the award and classification of a qualification (e.g. a dissertation or project) should be carefully reviewed through the moderation process, if they are not double-marked.
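As a minimal illustration of the sliding-scale guide in 16.18, the Python sketch below computes a sample size from the number of available assessments and the number of first markers. The ‘minimum of eight or 10%, whichever is greater’ figure and the ‘seven or fewer’ rule come from 16.18; the per-marker widening is an assumed interpretation of point (b)(ii), and the function and parameter names are hypothetical.

    import math

    def moderation_sample_size(num_assessments: int, num_first_markers: int = 1) -> int:
        # Seven or fewer submissions: moderate all of them (16.18 c).
        if num_assessments <= 7:
            return num_assessments

        # Guide figure: the greater of eight assessments or 10% of those
        # available (16.18 b).
        sample = max(8, math.ceil(0.10 * num_assessments))

        # Assumed widening so that each first marker's work is sampled at
        # least once (an interpretation of 16.18 b ii, not a prescribed formula).
        sample = max(sample, num_first_markers)

        return min(sample, num_assessments)

    # Example: 120 assessments marked by three first markers -> sample of 12 (10%).
    print(moderation_sample_size(120, num_first_markers=3))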
16.19 The responsibilities for conducting internal moderation are:
16.20 Moderation should take place after the assessment has been marked and in advance of submission to the exam board, with reference to the University’s policy on providing feedback to students on their work. Where necessary, priority should be given to the timely release of feedback over the completion of the moderation process. In such cases, students should be informed of the status of the mark that has been released.
16.21 The role of the moderator is to form a view of the overall marking, not to apply corrective marking to individual assessments. The moderator should produce a report, which should instigate a dialogue between the marker and the moderator; the conclusions of that dialogue should be formally captured as part of an audit trail. The purpose of the audit trail is to provide the relevant examination boards, including the external examiner, with a means of determining whether the marks have been fairly awarded and are consistent with relevant academic standards, and to serve as evidence in the event of an appeal.
16.22 Moderators should review the marking of the individual marker(s) against the relevant marking criteria, both within the sample and across all the marks awarded, to identify whether the marks appropriately reflect the standard of the work and whether there are any inconsistencies within the marking. A separate process should be in place to check that all questions in an assessment have been marked and that the marks have been totalled correctly.
16.23 Specific outcomes arising from the moderation process are:
‘Mark adjustment’, as an outcome of moderation, is a legitimate and intended means of ensuring that marks are robust and fair. An adjustment may apply to an entire set of assessments or an identified sub-set. Adjustments should not be made to individual marks in isolation.
16.24 In cases where a moderator and marker cannot agree on a course of action, the batch of work should be referred to a second internal moderator (as identified by the School Education Director) for adjudication.
16.25 The relevant school board of examiners should be assured that moderation has occurred and action has been taken to assure the quality and standards of the marks presented to it.
16.26 Evidence of moderation should be made available to the external examiner for review, which may consist of samples of moderated assessments, the distribution of unit marks, and the formal record of dialogue between markers and moderators. Internal examiners should consider and respond to any issues raised by the external examiner prior to the exam board wherever possible.
16.27 The School should review the operation of its policy on internal moderation for its programmes on an annual basis. The University Quality Team will investigate moderation practices and their implementation where there is cause for concern (e.g. if it is raised by an external examiner in their report).
16.28 Where coursework is assessed summatively, schools should have a system in place to ensure students’ work is available for moderation at a later date, by a means that ensures that the marked work is identical to that originally submitted.
16.29 Work assessed for summative purposes should be capable of being independently moderated and made available in case it needs to be moderated by the external examiner(s). It is recognised that second marking/moderation may present difficulties in some forms of summative assessment such as a class presentation. In these cases, evidence of how the assessment mark was reached should be preserved for moderation.
Scaling of marks
16.30 Scaling is not normally permitted, except in the following two circumstances:
a) Where the raw scores for the whole cohort are converted onto an appropriately distributed marking scale as part of the planned design of the assessment (illustrated in the sketch at the end of this section). The rationale and mechanism for scaling should be recorded in the unit specification and in the minutes of the relevant board of examiners.
b) Where the marks of a cohort of students are moderated post hoc due to an unintended distribution of marks. When an assessment or a question within an assessment has not performed as intended, scaling may be employed (in this instance the methodology will not have been planned beforehand). This should be an exceptional event. The rationale and mechanism for the scaling should be recorded in the minutes of the school and faculty boards of examiners.
16.31 Before scaling is applied, its use and the intended method must be agreed with the relevant Chair of the Faculty Board of Examiners, and then approved by the relevant external examiners and the school and faculty boards of examiners.
16.32 The use of scaling must also be made transparent to students: in the case of (a), students must be informed of the way in which the raw scores are converted onto the marking scale prior to the assessment; whilst in the case of (b) students must be informed of the process after the assessment.
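As a minimal illustration of circumstance (a) in 16.30, the sketch below converts raw scores onto the 0-20 marking scale by simple linear rescaling. The linear mapping, the rounding and the function name are assumptions made for illustration only; the actual conversion must be the one recorded in the unit specification and approved under 16.31.

    def scale_raw_scores(raw_scores, raw_max, scale_max=20):
        # Linear conversion of raw marks onto a 0 to scale_max marking scale.
        # The mapping shown here is an illustrative assumption, not a
        # prescribed method.
        if raw_max <= 0:
            raise ValueError("raw_max must be positive")
        return [round(scale_max * score / raw_max) for score in raw_scores]

    # Example: raw marks out of 45 mapped onto the 0-20 scale.
    print(scale_raw_scores([9, 27, 36, 45], raw_max=45))  # -> [4, 12, 16, 20]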