Dissertation title: Development and validation of an automated essay scoring engine to assess students’ development across program levels
As English as a second language (ESL) populations in English-speaking countries continue to grow, the need for methods of accounting for students’ academic success in college has become increasingly evident. Holistic assessment practices often yield subjective and vague descriptions of learner language level, such as beginner, intermediate, and advanced (Ellis & Larsen-Freeman, 2006). Objective measures (e.g., the number of error-free T-units) used in second language production and proficiency research provide precise specifications of students’ development (Housen, Kuiken, & Vedder, 2012; Norris & Ortega, 2009; Wolfe-Quintero, Inagaki, & Kim, 1998); however, obtaining a profile of a student’s development with these measures demands considerable resources, especially time. In the ESL writing curriculum, class sizes are frequently expanding and instructors’ workloads are often high (Kellogg, Whiteford, & Quinlan, 2010); time is therefore scarce, making accountability for students’ development difficult to manage.
The purpose of this research is to develop and validate an automated essay scoring (AES) engine that addresses the need for resources providing precise descriptions of students’ writing development. The engine is built on measures of complexity, accuracy, fluency, and functionality (CAFF), guided by Complexity Theory and Systemic Functional Linguistics. These measures were implemented as computer algorithms using a hybrid approach to natural language processing (NLP) that combines statistical parsing of student texts with rule-based feature detection. Validation follows an interpretive, argument-based approach to demonstrate the adequacy and appropriateness of AES scores. Results provide mixed validity evidence both for and against the use of CAFFite measures for assessing development. The findings inform continued development and expansion of the AES engine into a tool that provides individualized diagnostic feedback for theory- and data-driven teaching and learning. They also underscore the potential of computerized writing assessment for measuring, collecting, analyzing, and reporting data about learners and their contexts in order to understand and optimize learning and teaching.
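To make the idea of rule-based feature detection concrete, here is a minimal, purely illustrative sketch of computing CAF-style measures from a student text. It uses crude regular-expression heuristics (total word count for fluency; T-units approximated by splitting sentences at coordinating conjunctions; clauses approximated by counting subordinators), which are hypothetical stand-ins for the statistical parser and the actual algorithms in the dissertation's engine. The function name and the specific word lists are assumptions for illustration only.

```python
import re

# Hypothetical rule-based sketch of CAF-style measures; the real engine
# would derive these from statistical parser output, not regex heuristics.
def caf_sketch(text: str) -> dict:
    # Fluency: total word count.
    words = re.findall(r"[A-Za-z']+", text)
    # Approximate T-units: sentences further split at coordinating
    # conjunctions (a crude stand-in for parser-identified T-units).
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    t_units = []
    for s in sentences:
        t_units.extend(
            p for p in re.split(r"\b(?:and|but|or|so)\b", s) if p.strip()
        )
    # Approximate clause count: one clause per T-unit, plus one per
    # detected subordinating conjunction or relativizer.
    subordinators = re.findall(
        r"\b(?:because|although|when|while|that|which|who)\b", text, flags=re.I
    )
    n_t = max(len(t_units), 1)
    return {
        "fluency_words": len(words),
        "mean_len_t_unit": round(len(words) / n_t, 2),
        "clauses_per_t_unit": round((n_t + len(subordinators)) / n_t, 2),
    }
```

Run on a short sample such as "The student wrote an essay and the teacher read it. Because it was long, grading took time.", the sketch yields a word count, mean T-unit length, and clauses per T-unit; a genuine CAFF implementation would compute analogous figures far more reliably from parsed structure.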
We’re pleased to announce details of the fifth combined ALANZ-ALAA-ALTAANZ conference.
Location: City campus, Auckland University of Technology, New Zealand
Dates: 27 – 29 November 2017
“Applied Linguistics in the New Millennium: Multiple Theories, Pathways, and Practices”
The theme aims to be inclusive of the breadth of the field of applied linguistics. As applied linguists, we’re interested in language learning, language teaching, and how people use language to express their relationship to the world. We’re also interested in issues of identity, in creating corpora, in language policy, just to name a few. What unites us, however, is our interest in researching language-related problems in the real world.
A great line-up of keynote presenters has been confirmed for the conference.
- Marnie Holborow, Associate Faculty, School of Applied Linguistics & Intercultural Studies, Dublin City University, Ireland.
- Heidi Byrnes, Emeritus Professor, Department of German, Georgetown University, Washington, D.C., United States of America.
- Hilary Nesi, Professor in English Language, Coventry University, England.
- Chris Davison, Professor of Education, University of New South Wales, Australia.
- Jennifer Hay, Professor of Linguistics, University of Canterbury, New Zealand.
A call for abstracts will go out in early 2017.