Investigating Content and Face Validity of English Language Placement Test Designed by Colleges of Applied Sciences
Related papers
Developing an Academic English Program Placement Test: A Pilot Study
Placement tests play an important part in language programs by matching students to instruction at an appropriate level, but optimally should be developed to closely reflect the objectives and students of specific institutions. When such targeting is not possible, general tests of lexico-grammar can be used as generic placement measures, but matching of test difficulty and person ability is still necessary to ensure that measurement is precise enough to statistically separate groups of students. A fifty-item multiple-choice cloze test was developed and piloted, then analyzed with Rasch measurement to investigate its suitability as a placement instrument for the Fukuoka Women's University Academic English Program. The preliminary analysis indicates that the test will adequately separate high and low groups of students and identify students requiring remediation, but it is recommended that listening, reading, and vocabulary synonymy items also be developed for a longer test battery in order to better evaluate higher proficiency students and provide improved diagnostic feedback.
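To make the Rasch approach mentioned above concrete, here is a minimal sketch (not the authors' actual analysis) of how dichotomous responses to a 50-item multiple-choice cloze test can be fitted with a simple joint maximum likelihood routine and summarised with a person separation index; all data and values are simulated, illustrative assumptions.

```python
# Minimal Rasch (1PL) sketch on simulated placement-test data.
import numpy as np

rng = np.random.default_rng(0)

n_persons, n_items = 200, 50               # e.g. a 50-item multiple-choice cloze test
true_theta = rng.normal(0, 1, n_persons)   # person abilities (logits)
true_beta = np.linspace(-2, 2, n_items)    # item difficulties (logits)

# Simulate responses: P(correct) = 1 / (1 + exp(-(theta - beta)))
prob = 1 / (1 + np.exp(-(true_theta[:, None] - true_beta[None, :])))
X = (rng.random((n_persons, n_items)) < prob).astype(int)

# Crude joint maximum likelihood estimation by alternating Newton steps
theta = np.zeros(n_persons)
beta = np.zeros(n_items)
for _ in range(50):
    p = 1 / (1 + np.exp(-(theta[:, None] - beta[None, :])))
    theta += (X - p).sum(axis=1) / np.clip((p * (1 - p)).sum(axis=1), 1e-9, None)
    p = 1 / (1 + np.exp(-(theta[:, None] - beta[None, :])))
    beta -= (X - p).sum(axis=0) / np.clip((p * (1 - p)).sum(axis=0), 1e-9, None)
    beta -= beta.mean()                    # identify the scale by centring difficulties

# Person separation: how well the test statistically separates ability groups
p = 1 / (1 + np.exp(-(theta[:, None] - beta[None, :])))
se_theta = 1 / np.sqrt(np.clip((p * (1 - p)).sum(axis=1), 1e-9, None))
true_var = max(theta.var(ddof=1) - np.mean(se_theta**2), 0)
separation = np.sqrt(true_var) / np.sqrt(np.mean(se_theta**2))
print(f"Estimated person separation index: {separation:.2f}")
```

A separation index well above 1 would suggest, as in the pilot study, that the test can distinguish high and low groups of students.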
Developing a self-assessment of English use as a tool to validate the English Placement Test
This study aimed to develop and use a contextualized self-assessment of English proficiency as a tool to validate an English Placement Test (MEPT) at a large Midwestern university in the U.S. More specifically, the self-assessment tool was expected to provide evidence for the extrapolation inference within an argument-based validity framework. 217 English as a second language (ESL) students participated in this study in the 2014 spring semester and 181 of them provided valid responses to the self-assessment. The results of a Rasch model-based item analysis indicated that the self-assessment items exhibited acceptable reliabilities and good item discrimination. There were no misfitting items in the self-assessment and the Likert scale used in the self-assessment functioned well. The results from confirmatory factor analysis indicated that a hypothesized correlated four-factor model fitted the self-assessment data. However, the multitrait-multimethod analyses revealed weak to moderate correlation coefficients between participants' self-assessment and their performances on both the MEPT and the TOEFL iBT. Possible factors contributing to this relationship were discussed. Nonetheless, given the acceptable psychometric quality and a clear factor structure of the self-assessment, this could be a promising tool in providing evidence for the extrapolation inference of the placement test score interpretation and use.
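As a rough illustration of two of the analyses described above, the sketch below computes internal-consistency reliability for Likert-type self-assessment items and the correlations between self-assessment totals and external test scores, in the spirit of a multitrait-multimethod comparison. The data, item counts, and "MEPT"/"TOEFL" scores are simulated stand-ins, not the study's dataset.

```python
# Cronbach's alpha and self-assessment vs. test-score correlations on simulated data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 181                                    # respondents with valid self-assessments

# Simulated 6-point Likert responses to 20 self-assessment items
ability = rng.normal(0, 1, n)
items = np.clip(np.round(3.5 + ability[:, None] + rng.normal(0, 1, (n, 20))), 1, 6)

def cronbach_alpha(responses: np.ndarray) -> float:
    """Cronbach's alpha for an (n_persons, n_items) response matrix."""
    k = responses.shape[1]
    item_vars = responses.var(axis=0, ddof=1).sum()
    total_var = responses.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

self_total = items.sum(axis=1)
mept = 50 + 10 * (0.5 * ability + rng.normal(0, 1, n))    # hypothetical MEPT scores
toefl = 80 + 15 * (0.5 * ability + rng.normal(0, 1, n))   # hypothetical TOEFL iBT scores

print(f"alpha = {cronbach_alpha(items):.2f}")
r1, p1 = stats.pearsonr(self_total, mept)
r2, p2 = stats.pearsonr(self_total, toefl)
print(f"self-assessment vs MEPT:  r = {r1:.2f} (p = {p1:.3f})")
print(f"self-assessment vs TOEFL: r = {r2:.2f} (p = {p2:.3f})")
```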
International Journal of Language Testing, 2023
Language placement tests (LPTs) are used to assess students' proficiency in the target language in a progressive manner. Based on their performance, students are assigned to stepped language courses. These tests are usually considered low stakes because they do not have significant consequences in students' lives, which is perhaps the reason why studies conducted with LPTs are scarce. Nevertheless, tests should be regularly examined, and statistical analysis should be conducted to assess their functioning, particularly when they have a medium- or high-stakes impact. In the case of LPTs administered on a large scale, the logistical and administrative consequences of an ill-defined test may lead to an economic burden and unnecessary use of human resources, which can also affect students negatively. This study was undertaken at one of the largest public institutions in Latin America, where nearly 1,700 students sit an English LPT every academic semester. A diagnostic statistical analysis revealed a need for revision. To retrofit the test, a new test architecture and blueprints were designed in adherence to the new curriculum created following the Common European Framework of Reference for Languages. After the institution gave language instructors two courses in language assessment, new items were developed and tried out gradually in several pilot studies conducted with samples of actual examinees. Then, Item Response Theory (IRT) was used to examine the functioning of the new test items. The aim of this study is to show how the test was retrofitted and to compare the functioning of the retrofitted version of the English LPT with the previous one. The results show that the quality of the items was higher than that of the former English LPT. This study has implications for the design of large-scale language tests in higher education, particularly in (semi-)periphery countries that decide to design and administer their own LPTs.
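Before a full IRT calibration of the kind described above, item functioning is often screened with classical statistics. The sketch below computes item facility (proportion correct) and point-biserial discrimination on simulated responses and flags items outside common rule-of-thumb ranges for revision; the cohort size echoes the roughly 1,700 examinees per semester mentioned in the abstract, while the item count, item parameters, and thresholds are illustrative assumptions.

```python
# Classical item screening (facility and point-biserial discrimination) on simulated data.
import numpy as np

rng = np.random.default_rng(2)
n_examinees, n_items = 1700, 60           # cohort size from the abstract; item count assumed
theta = rng.normal(0, 1, n_examinees)
a = rng.uniform(0.3, 1.8, n_items)        # item discriminations (2PL, for simulation only)
b = rng.normal(0, 1, n_items)             # item difficulties
p = 1 / (1 + np.exp(-a * (theta[:, None] - b[None, :])))
X = (rng.random(p.shape) < p).astype(int)

total = X.sum(axis=1)
for j in range(n_items):
    facility = X[:, j].mean()
    rest = total - X[:, j]                           # rest-score avoids part-whole inflation
    r_pb = np.corrcoef(X[:, j], rest)[0, 1]          # point-biserial discrimination
    if facility < 0.2 or facility > 0.9 or r_pb < 0.2:
        print(f"item {j:2d}: facility={facility:.2f}, r_pb={r_pb:.2f} -> review")
```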
Use of a single, standardised instrument to make high-stakes decisions about test-takers is pervasive in higher education. Contrary to longstanding best practices encouraged by researchers, professional organizations, test publishers, and many accrediting bodies, however, few, if any, such institutions have endeavoured to meaningfully validate the instrument(s) they use for their specific context and purposes. The current study attempted to address this void by developing and applying an argument-based validation framework for two widely adopted placement assessment methods – a standardised placement test (Accuplacer) and a locally developed and marked writing sample – utilised by a 2-year higher education institution in the Pacific. A hybrid of two validation structures – Kane's interpretive model and Bachman's assessment use argument – was implemented in order to assure a balanced focus on both test score interpretation and test utilization. Various types and sources of evidence informed the study, including instrument outcomes, student course results, institutional practices and policies, test publisher data, and the opinions of stakeholders gathered via focus group interviews and questionnaires. The results are argued to provide insights regarding a number of current issues in the literature, including: i) debates regarding the relative strengths and weaknesses of standardised tests and locally developed and marked writing samples for informing placement decisions; ii) the value of locally conducted validation efforts to evaluate the performance and impact of an institution's chosen assessment instruments and to identify opportunities for improvement; and iii) the need for further argument-based validation studies, particularly those which attend to both test score interpretation and the long-neglected area of test utilization, to be carried out wherever assessments are used to make decisions which impact stakeholders.
Wistner, B., Sakai, H., & Abe, M. (2009). Bulletin of the Faculty of Letters, Hosei University, 58, 33-44.
Language tests vary in their purposes, functions, and characteristics. The categories of language tests include, for example, proficiency, placement, achievement, and diagnostic tests (Alderson, Clapham, & Wall; Brown) and progress tests (Alderson, Clapham, & Wall). This study focuses on one type of language test: proficiency tests.
Journal of Ismailia Faculty of Education, Suez Canal University, 2(4), 81-107, 2003
Tests developed by newly appointed teachers of English at the preparatory stage: analysis and suggestions for improvement. By Dr. Ayman Sabry Daif-Allah (Professor of English Education at Suez Canal University) & Dr. Mohamed Ismail Abu-Rahmah (Professor of English Education at Suez Canal University, Egypt). (Abstract): This study investigated the validity of the process of student achievement test development in English as a foreign language in the Egyptian context and its impact on test performance. Data were gathered using task-based self-assessment of classroom assessment knowledge and skills, a think-aloud protocol, and an assessment training needs questionnaire. In addition, thirty language quizzes and assessment activities were analyzed in the light of a language test evaluation scale. The results provide some external support for the assumption that student achievement test development in English as a foreign language in the cases analyzed was invalid, which consequently affects test performance.
2008
This study aims (a) to find out the students' and the instructors' perceptions of the Compulsory English Language Course exams used to assess language performance at Çanakkale Onsekiz Mart University (COMU). It further aims (b) to determine what other objective test techniques can also be used in these exams in addition to the multiple-choice test technique by taking the students' and the instructors' opinions into consideration. Quantitative research methodology was used in this descriptive study. In the light of the literature, and in order to achieve the aims stated above, two questionnaires were designed by the researcher and administered to 367 students and 33 instructors. After analyzing the internal consistency of the items in the questionnaires, the researcher found acceptable alpha reliability values both for the students' questionnaire and for the instructors' questionnaire. Data from the students and instructors were collected using these questionnaires. The instructors' questionnaire was administered to instructors who had worked or were still working as instructors of the 'Compulsory English Language Course' at COMU. The students involved in the study were all in their second year at the university and had all taken the 'Compulsory English Language Course' the year before the study was conducted. The data obtained through the questionnaires were analyzed via descriptive statistics, one-way ANOVA, independent-samples t-tests, Cronbach alpha reliability tests, and the nonparametric Kruskal-Wallis test using SPSS (Statistical Package for the Social Sciences) 13.0 for Windows. The findings of the descriptive statistics showed that students expect the instructors to attach more importance to activities improving their speaking, listening, and writing skills. Furthermore, the results displayed that nearly 73 percent of the instructors prefer the exams to be prepared by a testing office, while more than half of the students expect them to be prepared by the instructor of the course. The results also revealed that both the students and the instructors believed it was necessary to use other test techniques in addition to the multiple-choice test technique commonly used in the exams. According to the results of the one-way ANOVA, the more successful the students are, the more satisfied they are with the exams' different characteristics. As for the instructors, the nonparametric Kruskal-Wallis test results indicated that there were no significant differences between instructors' educational background and the objective test techniques they use in their classrooms. Additionally, it was found that there were no significant differences between instructors' educational background and their ideas on the objective test techniques that can be used in the exams. However, the more experienced the instructors are, the more efficient they find the exams prepared by the testing office. Another important finding was that, although their order of preference for objective test techniques differs slightly, the first eight test techniques that the students and instructors preferred for the exams were exactly the same. The study concludes that both the students and the instructors have some doubts about the efficiency of the testing office's current practices.
Therefore, for more efficient exams, test constructors can include the eight objective test techniques [(1) multiple-choice questions, (2) matching, (3) ordering tasks, (4) completion, (5) true-false questions, (6) short-answer questions, (7) error correction, and (8) word changing], which were commonly preferred by the instructors and the students, in the Compulsory English Language Course exams. In addition to the centrally administered achievement tests of this course, instructors should use teacher-made achievement tests and take the scores that students get from these tests into consideration while assessing their learners' language performance. Moreover, having a testing office with test constructors specialized in testing would be a good idea for preparing better and more efficient tests.
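For readers less familiar with the statistical procedures named in this abstract, the sketch below shows how the same family of tests (one-way ANOVA, independent-samples t-test, Kruskal-Wallis) can be run in Python; the study itself used SPSS, and the group sizes and scores here are simulated, illustrative assumptions rather than the study's data.

```python
# One-way ANOVA, independent-samples t-test, and Kruskal-Wallis on simulated scores.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Simulated satisfaction scores for three student achievement groups
low = rng.normal(3.0, 0.7, 120)
mid = rng.normal(3.4, 0.7, 150)
high = rng.normal(3.8, 0.7, 97)

f, p_anova = stats.f_oneway(low, mid, high)          # one-way ANOVA
print(f"ANOVA: F = {f:.2f}, p = {p_anova:.4f}")

# Independent-samples t-test, e.g. students vs. instructors on one questionnaire item
t, p_t = stats.ttest_ind(rng.normal(3.2, 0.8, 367), rng.normal(3.6, 0.8, 33),
                         equal_var=False)
print(f"t-test: t = {t:.2f}, p = {p_t:.4f}")

# Kruskal-Wallis, e.g. instructors grouped by educational background
h, p_kw = stats.kruskal(rng.normal(3.5, 0.8, 10),
                        rng.normal(3.5, 0.8, 12),
                        rng.normal(3.6, 0.8, 11))
print(f"Kruskal-Wallis: H = {h:.2f}, p = {p_kw:.4f}")
```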
1998
Language testing, especially testing of English as the lingua franca of this era, has been an important part of education in Turkey as well as in other countries. In parallel with developments in teaching, language testing has used different testing methods, each having its advantages and disadvantages. A good test may be defined as a test that is both reliable and valid, serving its aim properly. This study sought to examine one of the qualities of a good test: the validity of the placement test prepared and administered by the Foreign Languages Department of Osmangazi University. The hypotheses were that the placement test had some deficiencies in terms of validity, especially content and predictive validity. The writing section of the test did not seem to measure what it was intended to measure, since the questions in the test did not match the course content and objectives. The aim of this research was to describe the situation of the placement test in terms of validity. In this study, the opinions of prep school and engineering faculty students as well as prep school and engineering faculty instructors were investigated. The materials used in this study were questionnaires, interviews, and twenty-six students' placement test scores and first-term grades at the prep school. Different questionnaires and interviews were given to prep school students, faculty students, prep school instructors, and faculty professors. To analyze the data, the frequencies of the questionnaire results were first determined, their percentages were calculated, and finally the results were transformed into figures. Interview results were put in narrative form under each group's questionnaire results. The correlation coefficient related to predictive validity is displayed at the end of Chapter 4. According to the students and instructors, there seems to be no consistency between the writing section in the placement test and the writing activities held in class. This result was expected by the researcher since, in the placement test, students are not required to write anything; instead, they are supposed to answer multiple-choice questions such as finding relevant or irrelevant sentences in a paragraph. The results of the investigation of predictive validity show that there is a large gap between the grades achieved in the test and the first-term results.
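The predictive-validity check described above boils down to correlating placement-test scores with later course grades. The sketch below does this for 26 score pairs, mirroring the sample size in the abstract; the numbers themselves are simulated stand-ins, not the actual student records.

```python
# Predictive validity: correlation between placement scores and first-term grades.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
placement = rng.uniform(40, 95, 26)                       # placement test scores
first_term = 0.3 * placement + rng.normal(40, 12, 26)     # weakly related term grades

r, p = stats.pearsonr(placement, first_term)
print(f"Predictive validity: r = {r:.2f}, p = {p:.3f}")
# A low correlation would echo the reported gap between placement scores
# and first-term results.
```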
Rupkatha Journal on Interdisciplinary Studies in Humanities, 2022
APA citation: Sheerah, H. A. H., & Yadav, M. S. (2022). The Use of English Placement Test (EPT) in Assessing the EFL Students' Language Proficiency Level at a Saudi University. Rupkatha Journal on Interdisciplinary Studies in Humanities, 14(3). DOI: https://doi.org/10.21659/rupkatha.v14n3.24 Purpose: In order to ascertain EFL students' characteristics (English proficiency, fluency, critical thinking, and communication), educational context, level of competence, professional goals, and pursuits for future endeavors, English Placement Tests (EPTs) are conducted in several academic contexts (Lamb, 2017; Taşpinar & Külekçi, 2018; Stehle & Peters-Burton, 2019; Alrabai, 2021; Yuan, 2022). An EPT is a standard test used to determine students' levels and abilities in English. It assesses students' English skills before they register for English language courses in schools, universities, and companies. This research lends credence to the EPT's reliability and validity in determining students' course enrollment in university education. Design/methodology/approach: This study implemented a hybrid research design. At the start of the second semester, in December 2021, 136 students took the placement test. A t-test was used to compare the students' pre- and post-test results in order to assess the efficacy and effectiveness of the EPT. Five instructors also took part in semi-structured interviews to discuss their thoughts, beliefs, and experiences related to the teaching and learning of the English programmes during the period in which the EPT was administered. Findings: The EPT results show students' proficiency levels in three main areas: grammar, reading, and listening. Once the EPT results and the students' performance were known, the weak areas were worked on. After one semester's intervention, the test scores improved, showing the students' progress. Since the results were statistically positive and significant, the study strongly suggested that the EPT be conducted at the beginning of the semester at the university level. Furthermore, based on the qualitative analysis and the comments and suggestions of the instructors, the idea of having an EPT for English as a foreign language (EFL) first-year students who want to take English language courses at universities was also strongly favored. The study supports the validity of the EPT for placing EFL students, at college enrollment, into English language courses according to their English skills competency levels.
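The core quantitative step in this study is a pre/post comparison of test scores. The sketch below illustrates that comparison with a paired t-test on simulated scores for 136 students (the cohort size reported above); the score distributions and gain are illustrative assumptions, not the study's EPT data.

```python
# Paired (dependent-samples) t-test on simulated pre/post placement scores.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
pre = rng.normal(55, 12, 136)              # placement-test scores at entry
post = pre + rng.normal(6, 8, 136)         # scores after one semester's intervention

t, p = stats.ttest_rel(post, pre)          # paired-samples t-test
print(f"paired t-test: t = {t:.2f}, p = {p:.4f}, "
      f"mean gain = {np.mean(post - pre):.1f} points")
```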