Inter-rater Reliability and Agreement of Rubrics for Assessment of Scientific Writing

Education, Vol. 4 No. 1, 2014, pp. 12-17. doi: 10.5923/j.edu.20140401.03

Authors

  • Eva Ekvall Hansson
  • Peter J. Svensson
  • Eva Lena Strandberg
  • Margareta Troein
  • Anders Beckman

Keywords

Scoring rubrics, Inter-rater reliability, Inter-rater agreement, Master’s theses

Abstract

Background: Learning how to write a scientific paper is a mandatory part of medical education at many universities. The criteria for passing the exam are not always clear; the grading guidelines are often sparse and sometimes poorly defined. Therefore, the use of rubrics can be appropriate. Purpose: The aim of this study was to test inter-rater reliability and agreement for the modified rubrics used to assess master’s theses in medical education at a Swedish university. Method: Modified scoring rubrics were used for grading and assessment of the master’s thesis in the medical programme at Lund University. The rubrics include 10 items, each graded from 1 to 4. To study the inter-rater reliability and agreement of the rubrics, three teachers involved in the management of the course used the rubrics and assessed all projects. Results: A total of 37 projects were read by the three raters. The intraclass correlation for the total score was 0.76 (CI 0.59–0.87). Absolute agreement (average) for pass or fail was 90%. Conclusion: In this study, scoring rubrics for assessing master’s theses in medical education showed strong inter-rater reliability and high inter-rater agreement for pass/fail. The rubrics are now available on the university website.
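The abstract reports an intraclass correlation for the rubric total score and an average percent agreement on the pass/fail decision. As a minimal sketch of how such statistics can be computed, the Python snippet below implements an ICC of the two-way random-effects, absolute-agreement, average-measures form (ICC(2,k)) and a mean pairwise percent agreement; the specific ICC model, the pass threshold of 20, and all data in the example are assumptions for illustration, not values taken from the study.

```python
import numpy as np
from itertools import combinations

def icc2k(scores: np.ndarray) -> float:
    """ICC(2,k): two-way random effects, absolute agreement, average of k raters.

    `scores` is an (n_subjects, k_raters) matrix of total rubric scores.
    """
    n, k = scores.shape
    grand = scores.mean()
    row_means = scores.mean(axis=1)
    col_means = scores.mean(axis=0)

    msr = k * np.sum((row_means - grand) ** 2) / (n - 1)   # between-thesis mean square
    msc = n * np.sum((col_means - grand) ** 2) / (k - 1)   # between-rater mean square
    sse = np.sum((scores - row_means[:, None] - col_means[None, :] + grand) ** 2)
    mse = sse / ((n - 1) * (k - 1))                         # residual mean square

    return (msr - mse) / (msr + (msc - mse) / n)

def mean_pairwise_agreement(passfail: np.ndarray) -> float:
    """Average pairwise percent agreement on a binary pass/fail decision.

    `passfail` is an (n_subjects, k_raters) matrix of 0/1 decisions.
    """
    pairs = combinations(range(passfail.shape[1]), 2)
    return float(np.mean([(passfail[:, a] == passfail[:, b]).mean() for a, b in pairs]))

# Hypothetical data: 37 theses, 3 raters, rubric totals in the 10-40 range
# (10 items scored 1-4, as described in the abstract).
rng = np.random.default_rng(0)
true_quality = rng.integers(15, 38, size=37)
totals = np.clip(true_quality[:, None] + rng.integers(-3, 4, size=(37, 3)), 10, 40)
passes = (totals >= 20).astype(int)  # assumed pass threshold, for illustration only

print(f"ICC(2,k) for total score: {icc2k(totals.astype(float)):.2f}")
print(f"Mean pairwise pass/fail agreement: {mean_pairwise_agreement(passes):.0%}")
```

Other ICC variants (e.g. single-measures or consistency rather than absolute agreement) follow from the same mean squares with different denominators, so the choice of model should match how the rubric scores are actually used.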

Published

2014-01-01