Topic: Purposes and Types of Language Test
Date: 7 December 2006 7:53 PM
Subject: Pre-packaged test
Author: Mondy, Steven
Recently, I became interested in the effect of a pre-made, packaged test on students at our college. I have a class of Chinese students in a basic reading course. I'm using the Oxford Dominoes starter series, which has downloadable book tests (multiple-choice items) for each of the books. I initially assumed that tests put out by the publisher had been analyzed and trialled (although here in Japan that is a somewhat dangerous assumption to make, as there seem to be frequent mistakes in textbooks). In any case, after marking the tests, a definite pattern emerged in the students' overall results: what appeared to be a well-formed test, directly related to the material we were studying, produced shockingly low scores. I tried to work out why this was so.
Looking at the test, I couldn't see any particularly strange or misleading items. The test was divided into sections: setting, characters, dialogue within the book, vocabulary, and plot. All multiple-choice items had four alternatives, as recommended in many of our testing textbooks, with fairly straightforward distractors. I was fairly confident that it was a reliable test.
However, after getting the results back, I did start to wonder about its validity in the context of this class, and about the test's suitability for our students' purposes in studying English. The reliability of this test might have been quite high, yet for some reason my students did not do well.
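For what it's worth, internal-consistency reliability can be estimated rather than guessed at, if item-level scores are kept. Below is a minimal sketch, in Python, of the standard KR-20 (Kuder-Richardson 20) calculation for right/wrong items. The score matrix is entirely invented for illustration; I have no idea whether the publisher ran anything like this.

    # Minimal KR-20 sketch for dichotomous (right/wrong) items.
    # 'responses' is a hypothetical matrix: one row per student,
    # one column per item, 1 = correct, 0 = incorrect.
    responses = [
        [1, 1, 1, 1, 1],
        [1, 1, 1, 0, 0],
        [1, 0, 0, 0, 0],
        [0, 0, 0, 0, 0],
    ]

    n = len(responses)       # number of students
    k = len(responses[0])    # number of items
    totals = [sum(row) for row in responses]

    mean_total = sum(totals) / n
    var_total = sum((t - mean_total) ** 2 for t in totals) / n  # population variance

    # Sum of p*q over items, where p = proportion answering the item correctly.
    pq_sum = 0.0
    for i in range(k):
        p = sum(row[i] for row in responses) / n
        pq_sum += p * (1 - p)

    kr20 = (k / (k - 1)) * (1 - pq_sum / var_total)
    print(f"KR-20 estimate: {kr20:.2f}")  # about 0.89 for this toy data

The catch, of course, is that a high KR-20 only tells you the test ranks a group of students consistently; it says nothing about whether the test is measuring anything my particular students care about.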
It may have something to do with why the students are studying this course in the first place. Chinese students come to Japan hoping to move on from our course to a university place here. While with us, they are required to study in (1) programs that encourage general English learning and (2) activities that prepare them for university entrance examinations and other standardized tests.
The reading course above comes under the first heading and is a required course for them.
Whether right or wrong, I made the following observations:
~ The test may have lacked validity, in that the objectives of the school, the teacher (me), and the test itself may not match the goals of the students. These students may not see the purpose of this kind of course, and may be highly resistant to both the class and the test, invalidating any test even before it is given.
~ A reliable, well thought-out test may not produce the same results for different groups of students.
~ Motivation: I observed that many of the students simply gave up and finished the test early, either because they were not ready for it or because it was beyond their ability. Either way, they did not feel compelled to continue with the test.
~ Test format: the test prepared for these lower-level students consisted solely of written multiple-choice items. There were no pictures and no variety of techniques, such as cloze or matching, that would give students of differing abilities more chances. It was also quite clear that although the length of the test did not disadvantage the students (50 items in 50 minutes, in line with the rough one-minute-per-item rule of thumb), having 50 questions of the same kind did. A simple item analysis, sketched just after this list, is the sort of check that would show such problems up.
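As a rough illustration of the kind of item analysis I had assumed the publisher had done, here is a minimal sketch (again with an invented 0/1 score matrix) that computes each item's facility value (how easy it was for this group) and a simple upper-minus-lower discrimination index. A run of items with near-zero or negative discrimination for a particular group would be one concrete sign that the test, however well it behaves elsewhere, is not working for them.

    # Minimal item-analysis sketch: facility and a simple
    # upper-minus-lower discrimination index. Data are hypothetical;
    # in practice each row would be one student's 0/1 scores on the 50 items.
    responses = [
        [1, 1, 1, 1, 0],
        [1, 1, 1, 0, 0],
        [1, 1, 0, 0, 0],
        [1, 0, 0, 0, 0],
    ]

    n = len(responses)
    k = len(responses[0])

    # Rank students by total score, then split into upper and lower halves.
    ranked = sorted(responses, key=sum, reverse=True)
    upper, lower = ranked[: n // 2], ranked[n - n // 2:]

    for i in range(k):
        facility = sum(row[i] for row in responses) / n
        p_upper = sum(row[i] for row in upper) / len(upper)
        p_lower = sum(row[i] for row in lower) / len(lower)
        discrimination = p_upper - p_lower  # near 0 or negative = suspect item
        print(f"item {i + 1}: facility={facility:.2f}, "
              f"discrimination={discrimination:+.2f}")

With real data from my class, I would expect the giving-up effect to show itself as a block of late items with very low facility and flat discrimination, which is a different diagnosis from the items simply being badly written.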
Having given this test and collated the results, I can see that there are some fundamental problems with validity. In the future, I will be warier of using tests that are not closely related to the specific needs of my students. That also means being careful with pre-packaged tests that look fine on the surface but can end up costing dearly in harmful effects. I will also be careful to provide a greater variety of techniques within a test, to suit differing learning styles.
Steven Mondy