<?xml version="1.0"?><!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Publishing DTD v1.0 20120330//EN" "JATS-journalpublishing1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:mml="http://www.w3.org/1998/Math/MathML" article-type="editorial" dtd-version="1.0" xml:lang="en">
  <front>
    <journal-meta>
      <journal-id journal-id-type="publisher-id">IJME</journal-id>
      <journal-id journal-id-type="nlm-ta">Int J Med Educ</journal-id>
      <journal-title-group>
        <journal-title>International Journal of Medical Education</journal-title>
        <abbrev-journal-title abbrev-type="pubmed">Int J Med Educ</abbrev-journal-title>
      </journal-title-group>
      <issn pub-type="epub">2042-6372</issn>
      <publisher>
        <publisher-name>IJME</publisher-name>
      </publisher>
    </journal-meta>
    <article-meta>
      <article-id pub-id-type="publisher-id">6-3839</article-id>
      <article-id pub-id-type="doi">10.5116/ijme.54e8.86df</article-id>
      <article-categories>
        <subj-group subj-group-type="heading">
          <subject>Editorial</subject>
        </subj-group>
        <subj-group>
          <subject>standard setting</subject>
        </subj-group>
      </article-categories>
      <title-group>
        <article-title>Making students' marks fair: standard setting, assessment items and post hoc item analysis</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author" corresp="yes">
          <name>
            <surname>Tavakol</surname>
            <given-names>Mohsen</given-names>
          </name>
          <xref ref-type="aff" rid="aff1">
            <sup>1</sup>
          </xref>
        </contrib>
        <contrib contrib-type="author">
          <name>
            <surname>Doody</surname>
            <given-names>Gillian A.</given-names>
          </name>
          <xref ref-type="aff" rid="aff1">
            <sup>1</sup>
          </xref>
        </contrib>
        <aff id="aff1"><label>1</label>Assessment Unit, School of Medicine, The University of Nottingham, UK</aff>
      </contrib-group>
      <author-notes>
        <corresp id="cor1">Correspondence: Mohsen Tavakol, Assessment Unit, School of Medicine, The University of Nottingham, UK. Email: <email xlink:href="mohsen.tavakol@nottingham.ac.uk">mohsen.tavakol@nottingham.ac.uk</email></corresp>
      </author-notes>
      <pub-date pub-type="epub">
        <day>28</day>
        <month>02</month>
        <year>2015</year>
      </pub-date>
      <volume>6</volume>
      <fpage>38</fpage>
      <lpage>39</lpage>
      <history>
        <date date-type="accepted">
          <day>21</day>
          <month>02</month>
          <year>2015</year>
        </date>
        <date date-type="received">
          <day>31</day>
          <month>01</month>
          <year>2015</year>
        </date>
      </history>
      <permissions>
        <copyright-statement>Copyright: &#xA9; 2015 Mohsen Tavakol et al.</copyright-statement>
        <copyright-year>2015</copyright-year>
        <license license-type="open-access" xlink:href="http://creativecommons.org/licenses/by/3.0">
          <license-p>This is an Open Access article distributed under the terms of the Creative Commons Attribution License which permits unrestricted use of work provided the original work is properly cited. <ext-link ext-link-type="uri" xlink:href="http://creativecommons.org/licenses/by/3.0/">http://creativecommons.org/licenses/by/3.0/</ext-link></license-p>
        </license>
      </permissions>
      <kwd-group kwd-group-type="author">
        <kwd>standard setting</kwd>
        <kwd>assessment items</kwd>
        <kwd>post hoc item analysis</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
<sec><title/>
<p>Establishing test marks and reporting them to students is a difficult task for medical educators. Failing a test is usually an unpleasant experience for any student. Medical educators must therefore ensure the "reasonableness" of the pass mark and, additionally, the quality of all test items. A poorly established pass mark and/or defective items may increase the number of students who fail unfairly. Furthermore, student complaints, legal actions and political issues may arise when pass marks are determined with poor procedural, internal or external validity. In any criterion-referenced assessment, the process of post hoc item judgment is essential and should be considered a vital step in the validation of the standard setting method.</p>
<sec><title>Benefits of using post hoc item data</title>
<p>Standard setting should aim to accurately pinpoint any individual student's performance along a continuum. Therefore, pass mark setters should be aware of the ability range of their students, the upper and lower limits of the continuum, when undertaking the task of rating items during standard setting. By reference to post hoc impact data (i.e. students' actual performance data), standard setters may gain greater understanding of the performance of their students when judging future items. Post hoc item analysis can also encourage discussion between standard setters and thereby help to minimise subjective errors of standard setters when judging student ability in the future.</p>
<p>It is important to remember that, when standard setters establish a reasonable pass mark, those students who have received a mark immediately on either side of the pass mark are actually very similar in terms of their performance.<xref ref-type="bibr" rid="r1"><sup>1</sup></xref> Furthermore, the reliability of the initial standard setting process also affects the accuracy of these borderline students' marks. An argument therefore easily develops for a post hoc evaluation of test items, to ensure that items perform as predicted during standard setting. The most common standard setting discrepancies, seen after a test has been performed, are produced by aberrant items. Flawed assessment items may affect the classification of students, i.e. pass or fail. 'If there are defective items in a test, the students should not be held accountable for them'.<xref ref-type="bibr" rid="r2"><sup>2</sup></xref> Students' raw marks are not true marks (free of error) and therefore may not accurately reflect student ability. Final marks should consequently not be reported until after the post hoc analysis of individual item performance, and student marks should be moderated before final results are announced.</p>
</sec><sec><title>Plausible methods to perform post hoc moderation</title>
<p>In medical education assessments, candidates' performance is measured in various situations, by various assessors, against predetermined standards: for example, assessor and standardised patient ratings of students in objective structured clinical examinations (OSCEs), ratings of knowledge questions in standard setting, marking of written assignments and presentations, or judgements relating to attitude and behaviour in respect of professionalism. Systematic errors (made by hawks and doves) and random errors (made by unreliable or inconsistent raters) can have a detrimental effect on student marks.</p>
<p>By detecting and adjusting unreliable ratings, student marks become a more robust representation of actual performance, as the reliability of ratings is significantly increased.<xref ref-type="bibr" rid="r3"><sup>3</sup></xref><sup>, </sup><xref ref-type="bibr" rid="r4"><sup>4</sup></xref> To correct the effects of systematic errors, Item Response Theory (IRT) models, Ordinary Least Squares or Weighted Least Squares may be applied.<xref ref-type="bibr" rid="r5"><sup>5</sup></xref><sup>, </sup><xref ref-type="bibr" rid="r6"><sup>6</sup></xref> IRT analysis using a multifaceted Rasch model, or a Generalisability study, provides an estimate of the effect of differences in the ratings assigned by the raters.</p>
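<p>As an illustration of the simplest such correction, the sketch below removes each rater's average severity or leniency from their scores. This is a hypothetical, simplified additive adjustment, not the full least-squares or multifaceted Rasch procedures cited above, which also account for which students each rater happened to see; the rater and student labels are illustrative.</p>
<preformat>
```python
# Hypothetical sketch: removing systematic rater stringency/leniency
# (hawks and doves) by subtracting each rater's mean deviation from the
# overall mean. A simplified additive adjustment, for illustration only.
from statistics import mean

def adjust_for_rater_effects(ratings):
    """ratings: list of (rater, student, score) tuples.
    Returns the same tuples with each rater's systematic
    severity or leniency removed from the scores."""
    overall = mean(score for _, _, score in ratings)
    by_rater = {}
    for rater, _, score in ratings:
        by_rater.setdefault(rater, []).append(score)
    # A positive bias marks a dove (lenient), a negative bias a hawk.
    bias = {r: mean(scores) - overall for r, scores in by_rater.items()}
    return [(rater, student, score - bias[rater])
            for rater, student, score in ratings]

# Illustrative data: the "hawk" rates the same underlying performance
# about two marks lower than the "dove".
ratings = [("hawk", "A", 5), ("hawk", "B", 6),
           ("dove", "C", 9), ("dove", "D", 10)]
adjusted = adjust_for_rater_effects(ratings)
```
</preformat>
<p>After adjustment the two raters' scores sit on a common scale, so differences between students, rather than between raters, drive the marks.</p>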
<p>With respect to standard setting practice, IRT models can provide useful information for standard setters by allowing the consistency of their ratings to be compared with the IRT model estimates. If there is a discrepancy between standard setters' ratings and the IRT estimates, a new pass mark may be established by marking all students as having correctly responded to aberrant items. Individual item analysis can therefore contribute significantly to the moderation of student pass marks.</p>
<p>Using IRT models, standard setters are able to identify any items that are not mapped to student ability, i.e. items which are either too difficult or too easy for that cohort. By adjusting for these items, medical educators can convert students' raw marks to moderated final marks. For example, if an item is too difficult for the cohort, with only 10% of students giving the correct answer, the mark for that item is awarded to the whole cohort, including the 90% who answered incorrectly. It would be erroneous simply to remove the question, as this would discriminate negatively against the 10% of students who answered it correctly.</p>
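<p>The moderation described above can be sketched as follows. The item names, response data and the decision that an item is aberrant are illustrative assumptions; in practice the aberrant items would be flagged by the IRT analysis rather than hand-picked.</p>
<preformat>
```python
# Hypothetical sketch: moderating raw marks by awarding the mark for an
# aberrant (e.g. far-too-difficult) item to every student, rather than
# deleting the item, which would penalise those who answered correctly.

def facility(responses, item):
    """Proportion of the cohort answering `item` correctly (1 = correct)."""
    return sum(r[item] for r in responses) / len(responses)

def moderate(responses, aberrant_items):
    """Return moderated responses with every student credited
    for each aberrant item."""
    moderated = []
    for r in responses:
        fixed = dict(r)
        for item in aberrant_items:
            fixed[item] = 1  # the whole cohort receives this mark
        moderated.append(fixed)
    return moderated

# Illustrative cohort: q2 turned out far harder than predicted
# during standard setting (facility 0.2).
cohort = [
    {"q1": 1, "q2": 0}, {"q1": 1, "q2": 0}, {"q1": 0, "q2": 0},
    {"q1": 1, "q2": 0}, {"q1": 1, "q2": 1},
]
moderated = moderate(cohort, aberrant_items=["q2"])
```
</preformat>
<p>Note that the one student who answered q2 correctly keeps that mark; moderation raises the others to the same credit instead of discarding the item.</p>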
</sec><sec><title>Standard Error of Measurement to perform post hoc item moderation</title>
<p>Estimating the standard error of measurement (SEM) provides valuable information about the errors attached to 'raw' student marks. Defective items can increase the SEM in a test, and such items can therefore increase the number of failures unfairly, as 'in most medical examinations, which are pass/fail, the only candidates who will be affected by error within the exam are those around the pass mark'.<xref ref-type="bibr" rid="r7"><sup>7</sup></xref> By calculating the absolute error variance of a test using Generalisability theory<xref ref-type="bibr" rid="r8"><sup>8</sup></xref> (its square root equals the absolute SEM), we are in a position to create a pass mark range for a test without changing the standard setting method. This is especially important if sequential testing is being used to determine which students require further testing to demonstrate competency.</p>
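<p>A minimal sketch of such a pass mark range follows, assuming an illustrative absolute error variance of 9 marks-squared and a pass mark of 60%; in practice the absolute error variance would come from a Generalisability study of the test. The absolute SEM is the square root of the absolute error variance, and the range is the pass mark plus or minus a multiple of the SEM.</p>
<preformat>
```python
# Hypothetical sketch of a pass mark range built from the absolute SEM.
# The variance and pass mark figures are illustrative assumptions.
import math

def absolute_sem(absolute_error_variance):
    """The absolute SEM is the square root of the absolute error variance."""
    return math.sqrt(absolute_error_variance)

def pass_mark_range(pass_mark, sem, multiplier=1.0):
    """A band of plus/minus (multiplier x SEM) around the cut score;
    students whose marks fall inside the band might, for example, be
    offered further sequential testing to demonstrate competency."""
    half_width = multiplier * sem
    return (pass_mark - half_width, pass_mark + half_width)

sem = absolute_sem(9.0)            # 3.0 marks
band = pass_mark_range(60.0, sem)  # (57.0, 63.0)
```
</preformat>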
<p>In summary, students are sensitive to their marks, and they can sense whether or not those marks are fair. Undertaking a moderation process prior to reporting final marks will make student marks fair, and increase student satisfaction with the process of the exam cycle.</p>
</sec></sec>
  </body>
  <back>
    <ref-list><title>References</title>
<ref id="r1"><label>1</label><mixed-citation publication-type="other">Shepard L. Standards for placement and certification. In: Anderson S, Helmick J, editors. On educational testing. London: Jossey-Bass Publishers; 1983.</mixed-citation></ref>
<ref id="r2"><label>2</label><mixed-citation publication-type="other">McDonald M. Guide to assessing learning outcomes. New York: Jones &amp; Bartlett Learning; 2014.</mixed-citation></ref>
<ref id="r3"><label>3</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Houston</surname><given-names>WM</given-names></name><name><surname>Raymond</surname><given-names>MR</given-names></name><name><surname>Svec</surname><given-names>JC</given-names></name></person-group><article-title>Adjustments for Rater Effects in Performance Assessment.</article-title><source>Applied Psychological Measurement</source><year>1991</year><volume>15</volume><fpage>409</fpage><lpage>421</lpage><pub-id pub-id-type="doi">10.1177/014662169101500411</pub-id></element-citation></ref>
<ref id="r4"><label>4</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Raymond</surname><given-names>MR</given-names></name><name><surname>Harik</surname><given-names>P</given-names></name><name><surname>Clauser</surname><given-names>BE</given-names></name></person-group><article-title>The Impact of Statistically Adjusting for Rater Effects on Conditional Standard Errors of Performance Ratings.</article-title><source>Applied Psychological Measurement</source><year>2011</year><volume>35</volume><fpage>235</fpage><lpage>246</lpage><pub-id pub-id-type="doi">10.1177/0146621610390675</pub-id></element-citation></ref>
<ref id="r5"><label>5</label><mixed-citation publication-type="other">Raymond M, Viswesvaran C. Least-squares models to correct for rater effects in performance assessment. Iowa: The American College Testing Program; 1991.</mixed-citation></ref>
<ref id="r6"><label>6</label><mixed-citation publication-type="other">Harasym P, Woloschuk W, Cunning L. Undesired variance due to examiner stringency/leniency effect in communication skill scores assessed in OSCEs. Adv Health Sci Educ Theory Pract. 2008;13:617-32.</mixed-citation></ref>
<ref id="r7"><label>7</label><mixed-citation publication-type="other">General Medical Council. Reliability issues in the assessment of small cohorts. London: General Medical Council; 2010.</mixed-citation></ref>
<ref id="r8"><label>8</label><mixed-citation publication-type="other">Brennan R. Generalizability theory. New York: Springer-Verlag; 2001.</mixed-citation></ref>
</ref-list>
  </back>
</article>
