Abstract:
This study investigated the trait structure of oral language ability in a computer-based speaking test (CBST), the test-taking processes and strategies of the examinees, the effects of the test tasks on test scores and the speech produced, and the examinees' attitudes toward the test. The participants were Thai first-year university students. The research instruments included the CBST, rating scales, and a questionnaire. The traits investigated were knowledge of pronunciation, syntax, vocabulary, cohesion, and function. The test tasks comprised narrative, opinion, imaginary, and persuasive tasks, which elicited planned monologic responses. The data were analyzed through both quantitative and qualitative approaches. The quantitative approaches were multivariate generalizability theory, confirmatory factor analysis, and MANOVAs. The qualitative approaches consisted of content analysis of the verbal protocols and of attitudes toward the test, together with discourse analysis of the test responses covering genre, speech functions, and grammatical features. The limitations of the study concerned the homogeneity of the participants in terms of native language and educational level, the purposive sampling method used to select them, and the relatively small sample size. The findings seemed to provide evidence that both supported and challenged the validity of the CBST score interpretations. As supportive evidence, first, most of the trait factor loadings were significant (p < .05) and ranged from moderate to high, and the reliability of most constructs was above 0.70. This suggests that most measures were good indicators of the constructs they were designed to measure, indicating that the CBST was a valid measure of most aspects of speaking ability as defined. Second, the examinees engaged in test-taking processes and strategies relevant to the constructs of interest, which implies that the test scores may reflect the use of the language ability the test was intended to measure. Moreover, the genre, speech functions, and grammatical features found in the test responses corresponded with the task requirements. Finally, the examinees generally had positive attitudes toward the test. However, evidence that may threaten the validity of the score interpretations came from the low reliability of the functional knowledge construct. This may be the result of the imaginary task, as the difficulty and ambiguity of its prompt appear to have led to the lowest mean scores in almost all areas, to unintended speech functions, and to negative attitudes toward the test. In addition, the lack of interlocutors led the examinees to view the test as inauthentic. Despite these weaknesses, with some revisions and careful interpretation of the test scores, the CBST can serve as a potentially useful instrument in oral language assessment.