Take the following TOEFL iBT Reading and Writing tests.
1. After you are done, discuss the ways in which either section accounts for the theory of multiple intelligences.
2. To what extent does computer literacy influence either section of this iBT?
Hong Kong Polytechnic University
Millions of learners of EFL/ESL take English language tests every year, and teachers often worry that learners’ attention may be distracted from the real business of learning the language and focused on mastering item types for the tests instead (Buck, 1988; Raimes, 1990; Shohamy, 1992). The question of washback, the influence of tests on teaching and learning, has been well discussed elsewhere (Alderson & Hamp-Lyons, 1996; Alderson & Wall, 1993; Bailey, 1996; Wall & Alderson, 1993). The discussion here focuses on the role of textbooks in test washback, raising questions about the prevalent practices of test preparation and the materials used in it.
In raising these questions, I take as my example one test, the Test of English as a Foreign Language (TOEFL), produced by the Educational Testing Service (ETS). The TOEFL is the most commonly taken international English language test in the world. Approximately a million people take the TOEFL every year (ETS, 1996), a number exceeded only by the approximately 2 million people within China who take the College English Test (CET) each year (Cheng, 1997).
The extent to which TOEFL iBT speaking scores are associated with performance on oral language tasks and oral ability components for Japanese university students.
The purpose of this study was to determine the extent to
which performance on the TOEFL iBT speaking section is associated with other
indicators of Japanese university students’ abilities to communicate orally in
an academic English environment and to determine which components of oral
ability for these tasks are best assessed by TOEFL iBT. To achieve this aim,
TOEFL iBT speaking scores were compared to performances on a group oral
discussion, picture and graph description, and prepared oral presentation
tasks, and their component scores of pronunciation, fluency,
grammar/vocabulary, interactional competence, descriptive skill, delivery skill,
and question answering. Participants were Japanese university students (N =
222) majoring in English. Pearson product–moment
correlations, corrected for attenuation, indicated strong relationships
between the TOEFL iBT speaking scores and the three university tasks, and high
or moderate correlations between the speaking scores and the
components of oral ability. Among these components, pronunciation,
fluency, and vocabulary/grammar were highly associated with TOEFL iBT
speaking scores, while interactional competence, descriptive skill, and delivery
skill were moderately associated with TOEFL iBT speaking scores. The findings
suggest that TOEFL iBT speaking scores are good overall indicators of academic
oral ability and that they are better measures of pronunciation, fluency, and
vocabulary/grammar than they are of interactional competence, descriptive
skill, and presentation delivery skill.
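The "corrected for attenuation" adjustment mentioned in the abstract is the classical Spearman disattenuation formula: the observed correlation is divided by the geometric mean of the two measures' reliabilities to estimate the correlation between true scores. A minimal sketch; the numbers below are made-up illustrations, not figures from the study:

```python
import math

def disattenuate(r_xy: float, rel_x: float, rel_y: float) -> float:
    """Spearman's correction for attenuation.

    Estimates the correlation between two true scores from the
    observed correlation r_xy and the reliability of each measure.
    """
    return r_xy / math.sqrt(rel_x * rel_y)

# Hypothetical values for illustration only:
# observed r = 0.60, reliabilities of 0.80 and 0.90.
r_true = disattenuate(0.60, 0.80, 0.90)
print(round(r_true, 3))  # → 0.707
```

Because measurement error in either instrument pulls the observed correlation toward zero, the corrected value is always at least as large as the observed one, which is why disattenuated correlations are the ones reported when comparing the TOEFL iBT scores with the university task scores.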
The increasing use of computer-based testing raises concerns about
equity and bias. Specifically, many in the field of language
testing are concerned that the introduction of a computer-based TOEFL test in 1998 will
confound language proficiency with computer proficiency and thus bring
construct-irrelevant variance to the measurement of examinees’
English-language abilities.
In a Phase I study (Kirsch, Jamieson, Taylor, & Eignor,
1998), TOEFL examinees were surveyed regarding their computer familiarity and
classified into one of three computer familiarity groups: low,
moderate, and high. In this study, Phase II, more than 1,100
“low-computer-familiar” and “high-computer-familiar”
examinees from 12 international sites were identified from the Phase I survey
and administered a computer tutorial and a set of 60
computer-based TOEFL test items. The relationship between level of computer
familiarity and performance on the computer-based items was then examined. The
examinees in Phase II were largely representative of those
in Phase I, who were representative of the general TOEFL test-taking population. Thus,
results from this phase of the study are considered generalizable to the
current TOEFL examinee population.
The effect of computer familiarity after adjustments for
language ability was examined by performing a series of analyses of covariance
(ANCOVAs), using TOEFL paper-and-pencil test score as the covariate. These
analyses were followed by a series of ANCOVAs involving the computer familiarity
variable and a number of other variables: gender, reason for taking the
TOEFL test, the number of times the TOEFL test had been taken, and the location where the TOEFL
test was taken. In a final set of analyses, the TOEFL paper-and-pencil test scores of the
low- and high-computer-familiar examinees were weighted such that the groups
had identical distributions on the covariate.
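The ANCOVA logic described above can be sketched as a linear model: regress the computer-based score on the familiarity group with the paper-and-pencil score as a covariate, so the group effect is estimated after language ability is partialled out. A minimal sketch on simulated data; all names, group codings, and score distributions below are hypothetical illustrations, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 400

# Hypothetical data: paper-and-pencil score is the covariate (language
# ability); familiarity group is coded 0 = low, 1 = high.
paper = rng.normal(550.0, 40.0, n)
group = rng.integers(0, 2, n).astype(float)
# Computer-based score depends on language ability but not on familiarity,
# mirroring the study's conclusion.
computer = 0.8 * paper + rng.normal(0.0, 20.0, n)

# ANCOVA as a linear model: computer ~ intercept + group + paper.
X = np.column_stack([np.ones(n), group, paper])
beta, *_ = np.linalg.lstsq(X, computer, rcond=None)

# The covariate slope recovers the ability effect (~0.8), while the
# adjusted group effect is near zero once ability is controlled for.
print(f"group effect (adjusted): {beta[1]:.2f}")
print(f"covariate slope: {beta[2]:.2f}")
```

The same adjustment underlies the weighting analysis in the final step: equating the covariate distributions of the two groups is an alternative way of asking whether any score difference survives once language ability is held constant.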
After controlling for language ability, the researchers
found no meaningful relationship between level of computer
familiarity and level of performance on computerized language tasks among
TOEFL examinees who had completed the computer tutorial. This finding was
consistent for all but one of the subgroups considered. A small but
statistically significant interaction between computer familiarity and reason for
taking the test was found on the set of computerized reading items. Researchers
concluded that there was no evidence of adverse effects on computer-based TOEFL
performance due to lack of prior computer experience.
© 2018 All rights Reserved. Ritassida Mamadou Djiguimde