Should Items and Answer Keys of Small-Scale Exams Be Published?


  •  Hüseyin Selvi    

Abstract

This study aimed to examine the effect of using items from previous exams on students’ pass-fail rates and on the psychometric properties of the tests and items.

The study included data from 115 tests and 11,500 items used in the midterm and final exams of 3,910 students in the preclinical term at the Faculty of Medicine from 2014 to 2019. Data were analyzed using descriptive statistics of the total test scores, item difficulty and item discrimination values, and internal consistency coefficients for reliability. The Shapiro-Wilk test was used to evaluate the distribution structure, and t tests were used to analyze the differences between groups.
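The item statistics mentioned above can be illustrated with a brief sketch. Classical test theory defines item difficulty as the proportion of examinees answering an item correctly, and a common discrimination index contrasts the top and bottom scoring groups (often 27% each). The data and function names below are hypothetical, for illustration only; they are not the study's actual dataset or code.

```python
# Sketch of classical item analysis (hypothetical data, not the study's).
from statistics import mean

def item_difficulty(responses):
    """Proportion answering correctly (p value); higher means easier."""
    return sum(responses) / len(responses)

def item_discrimination(responses, total_scores, frac=0.27):
    """Upper-lower discrimination index D: p(top 27%) - p(bottom 27%)."""
    order = sorted(range(len(total_scores)), key=lambda i: total_scores[i])
    k = max(1, round(frac * len(order)))
    lower = [responses[i] for i in order[:k]]   # lowest-scoring examinees
    upper = [responses[i] for i in order[-k:]]  # highest-scoring examinees
    return mean(upper) - mean(lower)

# Hypothetical responses to one item (1 = correct) and total test scores
item = [1, 0, 1, 1, 0, 1, 1, 0, 0, 1]
totals = [55, 40, 70, 65, 35, 80, 75, 30, 45, 60]

print(item_difficulty(item))            # 0.6
print(item_discrimination(item, totals))  # 1.0
```

An item that easier students also answer correctly after repeated exposure would show a higher p value and typically a lower D, which is the pattern the study's repeated-item comparison probes.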

The findings showed that the mean item repetition rate from 2014 to 2019 ranged from 16.98% to 39.00%. The total score variance decreased significantly as the percentage of repeated test items increased. There was a significant, moderately positive relationship between the percentage of repeated test items and the number of students eligible to pass. Item difficulty values obtained from an item's initial use were significantly lower than those obtained from its repeated use.

We conclude that test makers should not publish test items and answer keys unless they have the means (infrastructure, budget, and personnel) to develop new items to replace those previously published in test banks.



This work is licensed under a Creative Commons Attribution 4.0 License.
  • ISSN(Print): 1925-4741
  • ISSN(Online): 1925-475X
  • Started: 2011
  • Frequency: quarterly

Journal Metrics

h-index (January 2020): 32

i10-index (January 2020): 125

h5-index (January 2020): 16

h5-median (January 2020): 25

(Metrics calculated based on Google Scholar Citations.)