2017 International AALA Pre-Conference Workshop Speakers
(listed in alphabetical order by the speaker's last name)
Professor Alister Cumming
Alister Cumming is professor emeritus in the Centre for Educational Research on Language and Literacies (formerly the Modern Language Centre) at the Ontario Institute for Studies in Education, University of Toronto, where he has been employed since 1991 following shorter periods at the University of British Columbia, McGill University, Carleton University, and Concordia University. From 2014 to 2017, Alister has also been a Changjiang Scholar in the National Research Centre for Foreign Language Education at Beijing Foreign Studies University. His research and teaching focus on writing in second languages, language assessment, language program evaluation and policies, and research methods.
Title: Verbal Reports on Writing Assessments
Abstract: The purpose of this workshop is to introduce participants to, and to evaluate, methods of verbal reporting for illuminating, understanding, and validating the constructs of writing assessed in the contexts of classroom instruction as well as large-scale standardized tests. Approaches will be demonstrated and evaluated for their merits, limitations, and prevailing issues in (a) collecting data (participant orientation, concurrent think-alouds, summary recalls, stimulated recalls, interviews, focus groups, equivalency of conditions, natural vs. experimental contexts and motivations), (b) analyzing data (segmentation, coding, frequency counts, focused probes, group differences, triangulation with complementary data sources), (c) pedagogy (teacher demonstration, student orientation, task analysis, self- or peer-evaluation), and (d) interpretation (for exploratory or confirmatory purposes, reducing reactivity, acknowledging partial representations of cognitive and writing processes, member checks).
Professor Antony John Kunnan
Antony John Kunnan is a Professor of Applied Linguistics in the Department of English and Associate Dean of the Faculty of Arts and Humanities at the University of Macau. Previously, he held academic and professorial positions in Bangalore, Los Angeles, Singapore, and Hong Kong. He was also a Fulbright Professor at Tunghai University, Taichung, Taiwan. His research interests include language assessment, research methods, and ethics. His forthcoming book, Evaluation of Language Assessments, is to be published by Routledge in 2017. He is a former President of the International Language Testing Association and the founding President of the Asian Association for Language Assessment.
Title: Learning about Assessment Knowledge through Hypothetical Scenarios
Abstract: Language assessment literacy papers and workshops have focused on helping participants understand concepts written as standards. One popular set of standards is the one written by psychological and educational experts reflecting best practices in the field (AERA, APA, & NCME, Standards for Educational and Psychological Testing, 1999, 2014). The standards include validity, reliability, generalizability, fairness, norm- and criterion-referenced assessment, etc. These standards have provided assessment institutions with guidance for their own internal evaluations and research agendas. But students and young professionals are expected to understand these concepts without guiding principles. More recently, the argument-based approach based on Toulmin's model (e.g., Bachman & Palmer, 2010) has offered a systematic approach to evaluation through an examination of an assessment institution's claims and warrants, and the backing for those warrants. But this approach does not offer guiding principles either. Thus, these two top-down approaches are unlikely to help participants understand key assessment knowledge.
In order to remedy this situation, I propose a series of reflections on hypothetical scenarios that can lead us to critical guiding principles. These guiding principles can in turn lead to the development of claims and the evidence to support them. In this approach, first, a series of scenarios known as "The Trolley Problem" (Foot, 1967) from moral philosophy will provide an introduction to evaluating such scenarios: whether to justify actions morally based on the principle of outcomes (consequentialism) or on the principle of duty (obligation). Second, six scenarios from language assessment on defective tasks, biased tasks, scoring problems, selecting an assessment, differential pricing, and decision-making will be analyzed by applying outcomes-based or duty-based thinking. As these scenarios mirror the common process from assessment development to assessment decision-making, applying principles to them will be quite transparent. Third, participants will formalize their understanding by checking a list of concepts/standards that includes different aspects of validity, reliability, and fairness. In this way, this bottom-up approach will help participants understand key assessment knowledge.
Professor Yasuyo Sawaki
Yasuyo Sawaki is a Professor of Applied Linguistics at the School of Education, Waseda University in Tokyo, Japan. Sawaki earned her PhD in applied linguistics with a specialization in language assessment from the University of California, Los Angeles (UCLA) in 2003. Upon graduation, she took a position as a research scientist at Educational Testing Service in New Jersey in the U.S. In 2009 she returned to Japan and joined the Waseda University faculty. Currently, she teaches various courses including language assessment, academic writing, and teacher education. Sawaki is interested in a variety of research topics in language assessment, ranging from the validation of large-scale international English language assessments to the role of assessment in classroom English language instruction. Sawaki is currently an Executive Board member of the Japan Language Testing Association (JLTA), Secretary and Treasurer of the Asian Association for Language Assessment (AALA), and a member of the Editorial Advisory Board of the journal Language Assessment Quarterly (Routledge).
Title: An Introduction to Generalizability Theory for Analyzing Language Assessment Data
Abstract: Generalizability theory (G theory; Brennan, 2001; Cronbach, Gleser, Nanda & Rajaratnam, 1972; Shavelson & Webb, 1991) is a statistical analysis technique that can be used to examine the consistency of information obtained from assessment instruments for both norm-referenced and criterion-referenced assessment score interpretations. A strength of G theory is its flexibility, which allows simultaneous modeling of multiple sources of measurement error that contribute to score variability. G theory can be used for analyzing the reliability of an existing assessment instrument as well as for exploring an optimal measurement design for a new instrument. The purpose of this workshop is to provide participants with an overview of key concepts of univariate and multivariate G theory and a step-by-step introduction to conducting a G-theory analysis of language assessment data using the computer program mGENOVA (Brennan, 1999). No previous experience or knowledge of G theory is required to attend this workshop, though participants are expected to have some familiarity with classical test theory.
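To make the ideas in the abstract concrete, the sketch below works through the simplest univariate G-study by hand: a fully crossed persons x raters (p x r) design, with variance components estimated from the two-way ANOVA mean squares and then used in a D-study to compute both the relative (norm-referenced) and absolute (criterion-referenced) coefficients. The score matrix is entirely hypothetical, and this is a minimal illustration of the standard expected-mean-squares formulas, not a substitute for mGENOVA or the treatments in Brennan (2001) or Shavelson & Webb (1991).

```python
import numpy as np

# Hypothetical essay scores: 5 examinees (persons) each rated by the
# same 3 raters, i.e., a fully crossed p x r one-facet design.
scores = np.array([
    [4.0, 3.5, 4.5],
    [2.0, 2.5, 2.0],
    [5.0, 4.5, 5.0],
    [3.0, 3.5, 3.0],
    [4.0, 4.5, 4.0],
])
n_p, n_r = scores.shape

grand = scores.mean()
person_means = scores.mean(axis=1)   # average over raters
rater_means = scores.mean(axis=0)    # average over persons

# Sums of squares for a two-way ANOVA without replication
ss_p = n_r * np.sum((person_means - grand) ** 2)
ss_r = n_p * np.sum((rater_means - grand) ** 2)
ss_total = np.sum((scores - grand) ** 2)
ss_pr = ss_total - ss_p - ss_r       # p x r interaction, confounded with error

# Mean squares
ms_p = ss_p / (n_p - 1)
ms_r = ss_r / (n_r - 1)
ms_pr = ss_pr / ((n_p - 1) * (n_r - 1))

# Variance components from the expected mean squares
# (negative estimates are conventionally set to zero)
var_pr = ms_pr
var_p = max((ms_p - ms_pr) / n_r, 0.0)
var_r = max((ms_r - ms_pr) / n_p, 0.0)

# D-study: project coefficients for a design using n_prime raters
n_prime = 2
g_coef = var_p / (var_p + var_pr / n_prime)             # relative, E(rho^2)
phi = var_p / (var_p + (var_r + var_pr) / n_prime)      # absolute, Phi

print(f"var_p={var_p:.3f}  var_r={var_r:.3f}  var_pr,e={var_pr:.3f}")
print(f"E(rho^2)={g_coef:.3f}  Phi={phi:.3f}")
```

Because the rater variance component enters only the absolute error term, Phi can never exceed E(rho^2); rerunning the D-study with different values of n_prime shows how adding raters shrinks both error terms, which is exactly the "optimal measurement design" use of G theory the abstract mentions.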