Assessment and feedback design to manage turnaround time - Helen Puntha
Management of turnaround time in the longer term may require structural changes to assessment regimes both at module and programme
level. This includes considering the number and range of assessments and whether they are a valuable use of staff and student
time in terms of the knowledge and competencies gained by the students. Lecturers’ frustrations over the time and effort spent
marking can be compounded by students not reading feedback comments, so any structural changes will be most effective when
combined with a consideration of how students engage with feedback.
The School of Architecture, Design and the Built Environment (ADBE) conducted a school-wide review of assessment and feedback
practices with a view to encouraging a culture of targeted, meaningful feedback. The review focussed in particular on: the
appropriateness of the assessment criteria and grade descriptors, the number of credits awarded, student use of feedback,
resource management and workload planning. Feedback from students via staff-student liaison has been positive. For more on
the ADBE review see the CADQ resource on intrinsic feedback.
Some assessment and feedback design strategies which may help to manage turnaround time include:
Reducing the number of summative assessments whilst increasing the number of formative assessments. This can spread staff and student workloads more evenly allowing more
time for discussion, reflection and feedback on the remaining assessments. Gibbs notes that too much summative assessment
for marks and too little formative assessment for learning can result in students focusing on marks, insufficient student effort,
and feedback that is both scarce and delayed (Gibbs, 2010).
Reducing the number of words in assessments necessarily reduces the time it takes lecturers to mark. It may be worth considering whether the word limits for various
assessments are appropriate.
Limiting the range of assessment types within a programme could contribute towards shorter turnaround times by familiarising both staff and students with the assessment
types in use and thus reducing the need for detailed summative feedback particularly when combined with increased use of formative
feedback. In terms of pedagogical benefits, limiting the range of assessment types can reduce student confusion,
and can have a positive effect on student learning when combined with an increased use of formative feedback and a reduced
range of learning outcomes and criteria. Gibbs and Dunbar-Goddet report that ‘it is traditional assessment methods, that emphasised
learning about goals and standards through frequent formative assessment and especially through oral feedback and prompt feedback,
and that had little summative assessment of a limited variety of kinds, that were found to be associated with positive student
learning responses, and with greater clarity of goals and standards’ (Gibbs and Dunbar-Goddet, 2007, p. 26). An alternative
view is that a variety of assessment types is necessary to ensure inclusivity as certain students may excel at different types
of assessment and that having a range of assessment types can encourage student interest and motivation (Rust, 2005). The
issue of range of assessment types therefore requires discussion, ideally at programme level, with any proposal to limit
the range weighed against the potential learning benefits of retaining a wider variety of assessment types within the context
of a given programme.
Using different types of assessments might ensure that assessments are more aligned with the intended learning outcomes and depending on the assessment types
used, could save marking and feedback time. When combined with limiting the range of assessment types this could help ease
student confusion and thus contribute to deeper student engagement with assessment tasks. It might be for example that some
essays or long reports could be appropriately replaced with shorter reviews, articles, posters or summaries (Brown, Race and
Smith, 1997). The key question for consideration would be whether each assessment truly assesses what the students are intended
to learn. For more on aligning assessments with intended learning outcomes see the CADQ resource on constructive alignment.
Using assessment criteria sheets can enable quick initial feedback on routine assessment matters in lieu of or before more in-depth individual feedback is
given. Criteria sheets are forms which contain the assessment criteria with space for ticks, crosses, marks and comments (Brown,
Race and Smith, 1997). An example of an essay criteria sheet is available online from the University of Plymouth (University of Plymouth, 2009, p. 27). More guidance on assessment criteria sheets is available in the CADQ resource on Marking and moderation of text-based coursework.
Use of comment banks can enable quick feedback. Comment banks are stores of feedback comments (e.g. saved in a MS Word document or using ‘Grade
Mark’ – not supported at NTU) which are collected over time from students’ assessments and are then used to provide future
cohorts of students with feedback. This can either be within individual or generic feedback or as the focus of an in-class
discussion. Comment banks can be a useful tool for feeding back on common mistakes especially for first years, but should
not be relied upon as a sole means of giving feedback as they are unlikely to provide enough specific or personalised detail
to motivate students to thoroughly engage (CADQ, 2011). The HEA Subject Centre for Information and Computer Sciences has an
online pilot statement bank.
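Conceptually, a comment bank is simply a keyed store of reusable comments that can be combined with an individual remark for each student. The sketch below is a purely hypothetical Python illustration of that idea (it is not any NTU or GradeMark system, and all codes and wording are invented examples):

```python
# Illustrative sketch only: a comment bank kept as a simple keyed store.
# The codes and comment texts below are invented, not institutional wording.

comment_bank = {
    "ref-style": "Check your referencing against the required citation style.",
    "structure": "Consider signposting your argument with clearer section headings.",
    "evidence": "Claims in your discussion need supporting evidence from the literature.",
}

def build_feedback(codes, personal_note=""):
    """Assemble feedback from bank codes, plus an optional personalised comment."""
    comments = [comment_bank[c] for c in codes if c in comment_bank]
    if personal_note:
        comments.append(personal_note)
    return "\n".join(comments)

# Example: common issues flagged quickly, with one individual remark appended
# so the feedback is not generic-only.
print(build_feedback(["ref-style", "evidence"],
                     "Your case study section was particularly strong."))
```

The personalised note in the example reflects the caution above: the bank speeds up feedback on common mistakes, but should be supplemented with specific, individual comments.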
Provision of anticipative feedback prior to submission. This might include feedback in the form of model answers and discussions of likely misunderstandings or knowledge gaps. The
aims of this are to increase the capacity of students to learn through the assessment and reduce the amount of general feedback
needed following submission, leaving time to provide individual feedback (Brown, Race and Smith, 1997).
Use of staggered hand-in dates can encourage steadier staff and student workloads. One example of this from the School of Social Work at UCLAN is that students
submit a dissertation plan worth 10% of the mark. This ensures that students engage in the dissertation process early on and
receive feedback on their work-in-progress (Delli-Colli, n.d., p. 6). A further example is the use of mock exams. A lecturer
at Lancashire Law School at UCLAN sets students one mock exam question to do prior to their formal exams which is sat under
full exam conditions and is worth 10% of the module mark. This spreads the staff workload and encourages students to prepare
effectively and in good time for the formal exams (Delli-Colli, n.d., p. 7).
Clearer forward and sideways linkages of assessment. Ideally this should be considered at programme level. Modularisation may mean that one piece of assessment does not necessarily
feed into another unless the programme has been designed that way. This makes it difficult for staff to provide feed-forward
comments, and both difficult and demotivating for students to carry feedback into their next assessments. Time allocated
to discussion of feedback and assessment within the NTU group tutorial system may enable students to learn more effectively
through feedback and make connections between the various assessments in different modules. Jackie Hardy from the School of
Social Sciences has developed a NOW-based profile which allows staff to see how all students are progressing by a colour-coded
entry, along with the feedback for each assignment. It also allows students to see all their marks and feedback across the
programme.
References
BROWN, S., RACE, P., and SMITH, B., 1997. 500 tips on assessment. London: Kogan Page Ltd.
CADQ, 2011. Evaluation of School e-submissions and eMarking pilots. Internal NTU report.
DELLI-COLLI, S., no date. UCLan Good Practice Guide to Feedback. [Accessed 26 October 2011].
GIBBS, G., 2010. Revised assessment patterns that fail, and that work. [Accessed 12 October].
GIBBS, G., and DUNBAR-GODDET, H., 2007. The effects of programme assessment environments on student learning. York: Higher
Education Academy.
RUST, C., 2005. Developing a variety of assessment methods. In: Quality Assurance Agency for Higher Education, Reflections
on Assessment Vol. 1. Mansfield: Quality Assurance Agency for Higher Education, Enhancement Themes Publications, 2005, pp. 179-186. [Accessed 7 November 2011].
UNIVERSITY OF PLYMOUTH, 2009. Good practice in assessing students. [Accessed 25 October 2011].