Examining the Yes/No vocabulary test: some methodological issues in theory and practice

Pellicer-Sánchez, A., & Schmitt, N. (2012). Scoring Yes-No vocabulary tests: Reaction time vs. nonword approaches. Language Testing.

Despite a number of research studies investigating the Yes-No vocabulary test format, one main question remains unanswered: What is the best scoring procedure to adjust for testee overestimation of vocabulary knowledge? Different scoring methodologies have been proposed based on the inclusion and selection of nonwords in the test. However, there is currently no consensus on the best adjustment procedure using these nonwords. Two studies were conducted to examine a new methodology for scoring Yes-No tests based on testees' response times (RTs) to the words in the test, on the assumption that faster responses would be more certain and accurate, whereas more hesitant and inaccurate responses would be reflected in slower RTs. Participants performed a timed Yes-No test and were then interviewed to ascertain their actual vocabulary knowledge. Study 1 explored the viability of this approach and Study 2 examined whether the RT approach presented any advantage over the more traditional nonword approaches. Results showed no clear advantage for any of the approaches under comparison; rather, their effectiveness depended on factors such as the false alarm rate and the extent of participants' overestimation of their lexical knowledge.
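To make the comparison concrete, the traditional nonword-based adjustments referred to above can be sketched as follows. This is a minimal illustration with hypothetical hit and false alarm rates; the simple-subtraction and correction-for-guessing formulas are standard ones from the Yes/No literature, not taken from this particular study.

    import numpy as np

    # Hypothetical data for three testees: proportion of real words claimed
    # known (hit rate h) and proportion of nonwords claimed known (false
    # alarm rate f). Values are illustrative only.
    h = np.array([0.80, 0.75, 0.90])
    f = np.array([0.05, 0.20, 0.10])

    # Adjustment 1: simple subtraction of false alarms from hits.
    score_subtraction = h - f

    # Adjustment 2: correction for guessing, which estimates the true
    # proportion known as (h - f) / (1 - f).
    score_cfg = (h - f) / (1 - f)

    print(score_subtraction)  # [0.75 0.55 0.8 ]
    print(score_cfg)          # approx. [0.79 0.69 0.89]

Note how the two formulas diverge as the false alarm rate grows, which is exactly the kind of dependence on the false alarm rate that the studies above report.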

Vocabulary: What should we test?

In N. Sonda & A. Krause (Eds.), JALT 2012 Conference Proceedings. Tokyo: JALT, 2013

Diagnostic Yes/No tests are a recommended and much-researched assessment tool (Read, 2007; Nation, 2008), yet there is little research into how to apply them to address the mismatch between preexisting course vocabulary lists from commercial textbooks for a particular level and learners' actual vocabulary knowledge. This study looked at a 240-word vocabulary battery accompanying the textbook for a pre-intermediate English course at a Japanese university. During the first week of instruction, a Yes/No test including nonwords (pseudo-words) was administered in three forms of 85 items each. Approximately 100 students took each form. On average, test takers claimed to know 75% of the items on the list. A low false alarm rate supports Shillaw's (1996) finding that reliance on nonwords could be reduced significantly in the Japanese context.
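The two figures the study reports, the percentage of items claimed known and the false alarm rate, fall straight out of the item-level responses. A minimal sketch with hypothetical data follows (the abstract does not publish item-level responses):

    import numpy as np

    # One test taker's Yes/No responses to a single form; is_nonword flags
    # the embedded pseudo-word items. Only 10 of the 85 items are shown,
    # and all values are hypothetical.
    said_yes   = np.array([1, 1, 1, 0, 1, 0, 1, 0, 1, 0], dtype=bool)
    is_nonword = np.array([0, 0, 0, 1, 0, 0, 0, 1, 0, 0], dtype=bool)

    # Percentage of real words claimed known (cf. the 75% average reported).
    claimed_known = said_yes[~is_nonword].mean()

    # False alarm rate: proportion of nonwords claimed known.
    false_alarm_rate = said_yes[is_nonword].mean()

    print(f"claimed known: {claimed_known:.0%}, false alarms: {false_alarm_rate:.0%}")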

Optimizing Scoring Formulas for Yes/No Vocabulary Tests with Linear Models

Shiken Research Bulletin, 2012

"Yes/No tests offer an expedient method of testing learners’ vocabulary knowledge, although a drawback of this approach is that since the manner is self-report, actual knowledge cannot be confirmed. “Pseudowords” have been used within such lists to test if learners are reporting knowledge of words they cannot possibly know, but it is unclear how to use this information to adjust scores. Although a variety of scoring formulas have been proposed in the literature, empirical research (e.g., Mochida & Harrington, 2006) has found little evidence of their efficacy. The authors propose that a standard least squares model in which the counts of words and pseudowords reported known are added as separate predictor variables can be used to generate scoring formulas that have substantially higher predictive power, particularly if items used are appropriately screened and selected. This is demonstrated on pilot data, and limitations of the method and goals of future research are discussed. Keywords: Yes/no vocabulary tests; pseudowords; receptive vocabulary knowledge "

New directions in vocabulary testing

2013

There have been great strides made in research on vocabulary in the last 30 years. However, there has been relatively little progress in the development of new vocabulary tests. This may be due in some degree to the impressive contributions made by tests such as the Vocabulary Levels Test (Nation, 1983) and the Word Associates Test. In this report, an argument is made that there is a need for the development of new vocabulary tests. The justification for the development of new tests will be discussed and four new tests that are in different stages of development will be briefly introduced. The first two expand on the contributions of the Vocabulary Levels Test: one is a new version of the Vocabulary Levels Test and the other measures knowledge of the different sublists of Coxhead's (2000) Academic Word List. The second two tests measure a different aspect of vocabulary knowledge, vocabulary learning proficiency: the Guessing from Context Test was designed to measure the ability to guess the meanings of words in context, and the Word Part Levels Test measures knowledge of affixes.

The updated Vocabulary Levels Test

ITL - International Journal of Applied Linguistics, 2017

The Vocabulary Levels Test (Nation, 1983; Schmitt, Schmitt, & Clapham, 2001) indicates the word frequency level that should be used to select words for learning. The present study involves the development and validation of two new forms of the test. The new forms consist of five levels measuring knowledge of vocabulary at the 1000, 2000, 3000, 4000, and 5000 levels. Items for the tests were sourced from Nation’s (2012) BNC/COCA word lists. The research involved first identifying quality items using the data from 1,463 test takers to create two equivalent forms, and then evaluating the forms with the data from a further 250 test takers. This study also makes an initial attempt to validate the new forms using Messick’s (1989, 1995) validity framework.

Developing and exploring the behaviour of two new versions of the Vocabulary Levels Test

Language Testing, 2001

The Vocabulary Levels Test has been widely used in language assessment and vocabulary research despite never having been properly validated. This article reports on a study which uses a range of analysis techniques to present validity evidence, and to explore the equivalence of two revised and expanded versions of the Vocabulary Levels Test.