Investigating the Validity of a University-Level ESL Speaking Placement Test via Mixed Methods Research


  •  Becky H. Huang    
  •  Mingxia Zhi    
  •  Yangting Wang    

Abstract

The current study investigated the validity of a locally developed, university-level English as a Second Language (ESL) speaking placement test using a mixed-methods design. We adapted Messick's (1996) integrative view of validity and Kane's (2013) interpretation argument framework, focusing on two sources of validity evidence: relations to other variables and consequences of testing (AERA, APA, and NCME, 2014). We collected survey data from 41 student examinees and eight teacher examiners, and we also interviewed the teacher examiners about their perceptions of the test's validity. Results provided positive evidence for the validity of the speaking test. There were significant associations among student examinees' speaking test scores, their self-ratings of speaking skills, and their instructors' end-of-semester ratings of the examinees' English language proficiency. Both the examinees and the examiners perceived the test format and questions to be appropriate and effective. However, the results also revealed potential issues with the clarity of the rubric and the lack of training for test administration and scoring. These findings highlight the importance of norming and calibration in scoring the speaking test and have practical implications for university-level ESL placement tests.



This work is licensed under a Creative Commons Attribution 4.0 License.