Principles and practice in assessing speaking: Beyond reticence to target “ear and tongue skills”
Talia Isaacs, University of Bristol, UK
This workshop will address practical considerations and principles of best practice in assessing second language (L2) speaking. The focus is on ways that the construct of spoken proficiency has been defined and measured in L2 assessment instruments, on sources of variability in L2 speaking performances that could have a bearing on the score that is assigned, and on the validity of human- versus machine-mediated assessments of speech.
The workshop will begin with a historical overview of assessing speaking. Being able to engage in effective oral communication in the dominant language of a given society is often considered essential for performing well on the job, succeeding in the academic arena, accessing vital social services, and, from a more macro perspective, integrating into society or mitigating social isolation. From this perspective, it may seem intuitive that value and emphasis would have uniformly been placed on assessing L2 speaking over the past century. In fact, concerns that the “ear and tongue skills” are “less measurable because they are less tangible, more subject to variation, and probably will involve the cumbersome and time-consuming expedient of the individual oral examination” (Lundeberg, 1929, p. 195) have led to speaking assessment being marginalized relative to the assessment of other skills in some L2 settings. Comparisons of the treatment of speaking within different assessment traditions over time will serve as a launching pad for highlighting key issues in assessing speaking, including the unique challenges posed by the spoken (as opposed to the written) medium due to the transient and intangible nature of speech. Workshop activities will centre on various sources of influence on speaking test scores (e.g., test-takers’ output, speaking tasks, raters, rating scales, interlocutors) and on the role of technological innovations in allaying historical concerns about the reliability of the human scoring of speech.
Lundeberg, O. K. (1929). Recent developments in audition-speech tests. The Modern Language Journal, 14(3), 193-202.
A Practical Approach to Questionnaire Construction and Analysis for Language Assessment Research
Aek Phakiti, The University of Sydney
This workshop aims to provide an introduction to questionnaire development and analysis for language assessment use and research purposes. In language testing and assessment, questionnaires are popular instruments for eliciting information about students’ psychological attributes (e.g., cognitive processes, motivation, and anxiety) as well as their beliefs, attitudes, and experiences. In this workshop, we will discuss and address:
Contexts in which questionnaires can be appropriately used
Strengths and weaknesses of questionnaires
Stages in questionnaire development
Essential components of a questionnaire
Scales and quantification
Validity and reliability of a questionnaire
Qualitative data in questionnaires
Types of statistical analyses
Examples of questionnaires and practice
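As a concrete illustration of the reliability topic listed above, the sketch below computes Cronbach’s alpha, a widely used internal-consistency estimate for Likert-type questionnaire scales. The response data and function name are hypothetical, invented for illustration; they are not drawn from the workshop materials.

```python
from statistics import pvariance

# Hypothetical Likert-scale responses (1-5): rows = respondents, columns = items.
responses = [
    [4, 5, 4, 4],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 2, 3, 2],
    [4, 4, 4, 5],
]

def cronbach_alpha(data):
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)."""
    k = len(data[0])                                   # number of items
    items = list(zip(*data))                           # transpose: one tuple per item
    item_vars = sum(pvariance(item) for item in items)
    total_var = pvariance([sum(row) for row in data])  # variance of respondents' totals
    return (k / (k - 1)) * (1 - item_vars / total_var)

print(round(cronbach_alpha(responses), 3))  # prints 0.933
```

Values above roughly .70 are conventionally taken to indicate acceptable internal consistency, though the threshold depends on the stakes and purpose of the questionnaire.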
Participants in this workshop will work in small groups of 4-5 people and will be provided with activities and tasks related to each of the aspects above. They may be asked to share their discussion or work with the whole class.
Phakiti, A. (2014). Questionnaire development and analysis. In A. J. Kunnan (Ed.), The companion to language assessment (pp. 1245-1261). London: John Wiley & Sons.