Problematizing Rating Scales in EFL Academic Writing Assessment: Voices from Iranian Context

Batoul Ghanbari, Hossein Barati, Ahmad Moinzadeh

Abstract


In line with a more humanistic movement in language testing, accountability to contextual variables is emphasized in the design and development of any assessment enterprise. When it comes to writing assessment, however, the multiplicity of rating scales developed to fit diverse contexts is largely driven by well-known native-speaker testing agencies. In fact, EFL/ESL assessment contexts appear to be receptively influenced by the symbolic authority of native-speaker assessment circles. Investigating the actualities of rating practice in EFL/ESL contexts would therefore provide a realistic view of how assessment is conceptualized and practiced. To examine this issue, the present study launched a wide-scale survey of the Iranian EFL writing assessment context. Results of a questionnaire and follow-up interviews with Iranian EFL composition raters revealed that a rating scale, in its common sense, does not exist; instead, raters relied on their own internalized criteria, developed over long years of practice. Native-speaker legitimacy in the design and development of scales for EFL contexts is therefore challenged, and local agency in the design and development of rating scales is emphasized.


DOI: 10.5539/elt.v5n8p76

This work is licensed under a Creative Commons Attribution 3.0 License.

English Language Teaching — ISSN 1916-4742 (Print), ISSN 1916-4750 (Online)

Copyright © Canadian Center of Science and Education
