
An investigation of a nationwide exam from a critical language testing perspective

Cogent Social Sciences

The present study investigates the viewpoints of two main test parties, university teachers and TEFL MA students, regarding a nationwide exam: their opinions about the current method of the TEFL MA University Entrance Exam, the aspects of the test that could be improved, and the amount of power or control that these two parties hold at different phases of the exam. The survey also examined the washback effect of this exam on university teachers' methodologies and instruction. To this end, seven university professors and sixty TEFL students who had passed the exam were selected using convenience random sampling. Subsequently, a validated researcher-made questionnaire was administered to the students. In order to collect more reliable data, some students were randomly selected for interviews so as to cross-check the data collected through the questionnaire. The results of this study indicated that all the university professors and the majority of the students demanded control over the content, the time of administration, and other issues related to the TEFL MA Entrance Exam. They claimed that they had no power in the test development and administration processes. They also believed that the test should serve as an indicator of language ability or knowledge rather than of test-taking skills. In addition, the results indicated that the exam had a negligible effect on professors' methodologies and instruction.

Needs of Higher Education students as regards language examinations

Culturas, Identidades e Litero-Línguas Estrangeiras; atas do I Colóquio Internacional de Línguas Estrangeiras, 2018

Establishing the goals and needs of potential candidates is of the utmost importance when designing a Proficiency exam for language certification purposes. Having a clearly defined testing goal will help us better define our construct and thus allow for a solid validity argument. In our case, as a Language Centre within a higher education institution, our goal when designing such a test is to meet the demands of the university and its members in the European Higher Education Area. Our study starts by analysing our target population, for which we needed to define the profile of our candidates and determine the sample that was going to be used for research purposes. We decided to use students from our language centre as the research population, since it was large enough to give us a representative sample and it enabled us to use computer tools that allowed for the automatic processing of the data. This paper presents the results of our research, which have allowed us to define the construct of our test based on the specific needs of our environment. Keywords: higher education, students’ needs, language examination.

A Content Analysis of the TEFL M.A. Entrance Examinations (Case Study: Majors Courses)

Journal of Pan Pacific Association of Applied Linguistics, 2010

The MA Entrance Examinations (MAEE) held in Iran since 1990 are frequently criticized as invalid, unstandardized exams with many problems in terms of the principles of testing in general and test construction in particular (for instance, Jafarpur, 1996). To make sound judgments about such objections, the present study carried out a content analysis of the TEFL MAEE held in 2007. The purpose of the study was twofold. First, it aimed at analyzing the content of the MAEEs in order to see whether any pattern is at work in the process of devising such exams; through such an analysis, some problems of these exams were also identified. Thus, the second aim of the present study was to pinpoint and describe the problems with these exams and to offer some suggestions to remedy them. To do so, a coding system comprising a checklist of possible content categories was developed by the researchers themselves, and its validity and reliability were established. The findings of this analysis, especially the unequal distribution of the content categories, supported the idea that the validity of the exam is not strongly established, owing to the exclusion of, or de-emphasis on, content categories given significant credit in the B.A. program. The problems found during the analysis showed that the exam is not a standard one, since some of the basic principles of language testing are not observed in the process of constructing it.
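As a toy illustration of how such a category tally can be checked for unequal distribution, the Python sketch below counts coded exam items per content category and compares the counts against an equal-credit baseline with a chi-square test. The category names, item counts, and the equal-credit assumption are invented for the example; they are not the coding system or figures reported in the study.

# Hypothetical sketch: tally exam items by content category and test whether
# the distribution departs from an equal-credit baseline. All labels and
# counts below are illustrative, not the study's data.
from collections import Counter
from scipy.stats import chisquare

item_codes = (
    ["linguistics"] * 28 + ["teaching_methodology"] * 22 +
    ["testing"] * 8 + ["research_methods"] * 2
)

observed = Counter(item_codes)
categories = sorted(observed)
obs_counts = [observed[c] for c in categories]

# Expected counts if items mirrored equal credit across categories.
expected = [len(item_codes) / len(categories)] * len(categories)

chi2, p = chisquare(obs_counts, f_exp=expected)
for c, n in zip(categories, obs_counts):
    print(f"{c:22s} {n:3d} items ({100 * n / len(item_codes):.1f}%)")
print(f"chi-square = {chi2:.2f}, p = {p:.4f}")

A markedly small p-value in a tally like this would point to the kind of unequal coverage the abstract describes, though the study itself argues from the distribution rather than from a specific significance test.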

STUDENTS' AND TEACHERS' PERCEPTIONS OF THE TEST TECHNIQUES USED TO ASSESS LANGUAGE SKILLS AT UNIVERSITY LEVEL

2008

This study aims (a) to find out the students’ and the instructors’ perceptions of the Compulsory English Language Course exams used to assess language performance at Çanakkale Onsekiz Mart University (COMU). It further aims (b) to determine what other objective test techniques can also be used in these exams in addition to the multiple-choice test technique by taking all the students’ and the instructors’ opinions into consideration. Quantitative research methodology was used in this descriptive study. In light of the literature, and in order to achieve the aims stated above, two questionnaires were designed by the researcher and administered to 367 students and 33 instructors. After analyzing the internal consistency of the items in the questionnaires, the researcher found acceptable alpha reliability values both for the students’ questionnaire and for the instructors’ questionnaire. Data from the students and instructors were collected using these questionnaires. The instructors’ questionnaire was administered to instructors who had worked or were still working as instructors of the ‘Compulsory English Language Course’ at COMU. The students involved in the study were all in their second year at the university and had all taken the “Compulsory English Language Course” the year before the study was conducted. The data obtained through the questionnaires were analyzed via descriptive statistics, one-way ANOVA, independent-samples t-tests, Cronbach alpha reliability tests and the nonparametric Kruskal-Wallis test using SPSS (Statistical Package for the Social Sciences) 13.0 for Windows. The descriptive statistics showed that students expect the instructors to attach more importance to activities improving their speaking, listening and writing skills. Furthermore, the results showed that nearly 73 percent of the instructors prefer the exams to be prepared by a testing office, while more than half of the students expect them to be prepared by the instructor of the course. The results also revealed that both the students and the instructors believed it was necessary to use other test techniques in addition to the multiple-choice test technique commonly used in the exams. According to the results of the one-way ANOVA, the more successful the students are, the more satisfied they are with the exams’ different characteristics. As for the instructors, the nonparametric Kruskal-Wallis test results indicated no significant differences between instructors’ educational background and the objective test techniques that they use in their classrooms. Additionally, it was found that there were no significant differences between instructors’ educational background and their ideas on the objective test techniques that can be used in the exams. However, the more experienced the instructors are, the more efficient they find the exams prepared by the testing office. Another important finding was that, although their order of preference for objective test techniques differed slightly, the first eight test techniques that the students and instructors preferred in the exams were exactly the same. The study concludes that both the students and the instructors have some doubts about the efficiency of the testing office’s current practices.
Therefore, for more efficient exams, test constructors can incorporate the eight objective test techniques [(1) multiple-choice questions, (2) matching, (3) ordering tasks, (4) completion, (5) true-false questions, (6) short-answer questions, (7) error correction and (8) word changing], which were commonly preferred by the instructors and the students, into the Compulsory English Language Course exams. In addition to the centrally administered achievement tests of this course, instructors should use teacher-made achievement tests and take the scores that students get from these tests into consideration when assessing their learners’ language performance. Moreover, a testing office staffed with test constructors who specialize solely in testing would help produce better and more efficient tests.
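As a rough illustration of two of the analyses named in this abstract, the Python sketch below computes Cronbach's alpha for a questionnaire and runs a Kruskal-Wallis test across groups. The data, item counts, and group split are fabricated for the example, and the sketch is not the authors' SPSS procedure, only a generic equivalent of those two statistics.

# A rough Python analogue (not the authors' SPSS workflow) of two analyses
# named above: Cronbach's alpha for questionnaire reliability and a
# Kruskal-Wallis test across groups. All data are fabricated.
import numpy as np
from scipy.stats import kruskal

def cronbach_alpha(items):
    """Cronbach's alpha for a respondents-by-items score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)          # per-item variances
    total_var = items.sum(axis=1).var(ddof=1)      # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(0)
# Fabricated 5-point Likert responses: 367 respondents, 20 items sharing a latent trait.
latent = rng.normal(size=(367, 1))
responses = np.clip(np.round(3 + latent + rng.normal(size=(367, 20))), 1, 5)
print(f"Cronbach's alpha = {cronbach_alpha(responses):.2f}")

# Kruskal-Wallis: do ratings on one item differ across three (fabricated) groups?
g1, g2, g3 = responses[:120, 0], responses[120:240, 0], responses[240:, 0]
H, p = kruskal(g1, g2, g3)
print(f"Kruskal-Wallis H = {H:.2f}, p = {p:.3f}")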

A STUDY ON THE ENGLISH LANGUAGE TEACHERS' PREPARATION OF TESTS

efdergi.hacettepe.edu.tr

In this article the researcher examines the current situation in foreign language test (a) construction (designing, structuring, and developing), (b) administration, and (c) assessment, to see whether we are still at the same (traditional) point, and offers some suggestions on this indispensable issue. To collect the necessary data, the 4th-year students doing their practicum at a state high school in Ankara under the supervision of the researcher were asked to collect one sample of each test (written or oral) that their mentors had been using to assess their foreign language students. The common characteristics of the test samples were scrutinized in terms of validity and reliability, language skills and areas, spelling, contextualization, time, typing, students' foreign language level (simple or complex structures), instructions, and backwash effect. Based on the findings of the study, some recommendations are made for foreign language teachers.

Lecturers' Consideration in Developing Language Tests

EDUVELOP

This research aims to find out (1) which characteristics of a good test are neglected by lecturers in developing their tests, and (2) the ways the lecturers develop their own tests in relation to the neglected variables. The research follows an explanatory mixed-methods design with two variables: lecturers' considerations and test development. The population is 33 lecturers, of whom 18 are from IAIN Parepare and 15 from UMPAR; a sample of 30 was drawn using random sampling. The instruments are a questionnaire, interviews, and documentation, and percentages were used to analyze the data. The results show that (1) the neglected variables in developing language tests, in order, are reliability, appropriateness of difficulty, clarity, comprehensiveness, validity, transparency, appropriateness of time, and economy, and (2) across the three interview sessions, the research found differences in the ways lecturers develop their tests, namely through challenging, creative, and spontaneous approaches.

High-Stakes Testing Washback: A Survey on the Effect of Iranian MA Entrance Examination on Teaching

faculty.ksu.edu.sa

High-stakes tests work efficiently to bring about change. They affect the participants as well as the process and product of an educational system. The MA Entrance Examination in Iran is a case in point. It is primarily designed to screen candidates for postgraduate studies. Nevertheless, its effects in the classroom, generally known as “washback” in applied linguistics, often go beyond what the designers expect. This paper reports a survey of the washback effect of the MA Entrance Examination on teachers’ methodology and testing. Forty-five subjects, all of them university professors, were selected using convenience random sampling. Then, a validated researcher-made questionnaire was administered. To obtain more reliable data, some subjects were randomly selected for interviews so as to cross-check the data collected through the questionnaire. The data analysis revealed that the majority of the subjects were, as they claimed, positively affected by the examination. Moreover, they were fully aware that their methodology and attitudes were gradually adjusted to the demands of the examination.

Do Exam Aims and Content Reflect those of the Curriculum? An Evaluative Study

Research Paper, 2024

Language tests, particularly high-stakes language tests, are a powerful tool for evaluating educational outcomes, but their effectiveness hinges on how they are constructed. Failure to construct valid, sound tests can negatively impact both teaching and learning. Therefore, continuous research is needed to investigate and evaluate such tests. This study examines the congruency between the aims and content of a high-stakes public EFL examination and the prescribed curriculum in Libyan schools. Document analysis was conducted on a sample of the studied test, focusing on its intended objectives and how these reflect the curriculum's goals and content. While the exam offered some practical advantages, the findings revealed a mismatch between the administered exam and the curriculum, and between the stated aims and the actual content of the exam itself. The assessment focused solely on grammar points and reading comprehension based on information cloned verbatim from the prescribed textbook. Writing was assessed indirectly through true-false responses to prompts involving structured or unstructured sentences. Notably, the exam entirely neglected listening and speaking skills. These findings suggest that the current examination system could hinder curriculum implementation and EFL education in Libyan schools. Therefore, the study calls for examination reform to ensure alignment between the tests and the English curriculum, ultimately promoting effective second language learning.

Test Qualities and Washback: An Analytical Study of Teacher-Made Achievement Tests at the Faculty of Languages and Translation University of Aden

A number of studies in the field of language testing have emphasized the vital role, positive or negative, that achievement tests play in English language teaching and learning (washback). The study sets out to analyze the test papers of teachers at the Faculty of Languages and Translation, University of Aden, Yemen; to collect data on teachers' attitudes toward designing and administering tests; to investigate the kind of washback teacher-made achievement tests have on students and on teachers; and to investigate the extent to which teacher-related factors play a role in washback. The study adopted a quantitative research approach. Triangulation was achieved through three data collection instruments: document analysis, a teacher questionnaire, and a student questionnaire. An evaluation checklist adapted from previous studies was used to analyze 32 teachers' test papers. In addition, the researcher sampled the wider population: 42 questionnaires were distributed to 42 teachers. The total number of third-year students in the two departments is 477; therefore, using the attendance sheets, 50 students were selected randomly from each group of the two departments, English (Business) and Translation. Inter-rater reliability of the document analysis was estimated using the Spearman-Brown coefficient, while the internal reliability of the questionnaire items was estimated using Cronbach's alpha. The results of the test paper analysis (document analysis) revealed that teacher-made achievement tests met the criterion of test reliability but not the criteria of test validity and test authenticity. The results of the teacher questionnaire revealed that, in designing tests, teachers always consider identifying the target language skills and abilities, determining the number of test items, medium, and time, determining test techniques and types of questions, and determining scoring procedures, whereas teachers only sometimes consider describing test purposes, determining test topics, and specifying criteria for success. In administering tests, the majority of teachers strongly agreed that they consider preparing the place of testing, arranging testing materials and equipment, and determining the time of testing; similarly, other teachers agreed that they consider the preparation of the physical conditions. However, giving clear written or oral instructions and reading the instructions of test questions had lower percentages. The results of the student and teacher questionnaires revealed a positive washback of teacher-made tests on the development of students' language skills and abilities and on students' motivation. However, most of the students disagreed that teachers write notes on their mid-semester test papers. Moreover, the student questionnaire revealed that the texts used for listening and reading tests were not authentic in a way that meets their needs as Translation and Business English students. The teacher questionnaire also revealed a positive washback of teacher-made tests on teachers' teaching techniques, teaching materials, and the use of test-taking strategies.
After conducting a test, however, more than half of the teachers rarely gave the students feedback on areas of weakness to help them improve in future. The teacher questionnaire further revealed that teachers' beliefs played a role in washback. More than half of the teachers disagreed that they have enough background knowledge of various testing techniques, while a high majority of the teachers (38 out of 39) disagreed that they have enough background knowledge of the essential requirements for designing good tests, namely test reliability, test validity, and test authenticity. With regard to the role of teachers' teaching experience in washback, the teacher questionnaire and document analysis revealed that teaching experience did not play a vital role. Based on the results of the study, it is recommended that a language testing and assessment course be incorporated into the syllabus of the teacher training programme for undergraduate B.A. students at English language departments and Faculties of Education, University of Aden; that the Faculty of Languages run remedial workshops to train teachers in English language testing and assessment; that teachers use test-taking strategies to promote a positive washback of achievement tests; and that more focus be placed on test content validity, with heads of departments carrying out remedial work for teachers through continuous and systematic evaluation, including the distribution of an execution sheet of the course contents at the prescribed time as well as observation of classroom practices.
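For reference, the standard textbook forms of the two reliability statistics named in this abstract are given below; these are generic formulas, not expressions quoted from the thesis itself. Here r is the correlation between the two raters' scores (or two test halves) and k the number of questionnaire items.

% Standard forms of the Spearman-Brown (two-part) coefficient and Cronbach's alpha;
% generic textbook formulas, assumed rather than taken from the thesis.
\[
  r_{SB} = \frac{2r}{1 + r},
  \qquad
  \alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k}\sigma^{2}_{Y_i}}{\sigma^{2}_{X}}\right),
\]
where $\sigma^{2}_{Y_i}$ is the variance of item $i$ and $\sigma^{2}_{X}$ is the variance of the total score.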

Teachers in High-stakes Language Tests – Blessing or Curse

CASALC Review

The article deals with the involvement of language teachers in high-stakes testing from different perspectives. The theoretical part discusses the problems of language teachers as testers in a broader context, in connection with the importance of language assessment literacy as a part of professional development. In addition, it reviews the results of some studies focusing on teachers involved in testing. The main part of the article is devoted to the results of two surveys intended to gather and interpret teachers’ opinions on a standardized language examination used in the framework of NATO, which were conducted among the language teachers of the authors’ institution as well as among language teachers at military language institutions in several foreign countries. Furthermore, the main part presents the results of the questionnaire concerning the opinions and approaches of the teachers towards the item-writing training they received to be able to create test items. The obj...