Friday, December 03, 2004

An Inside Look at Standardized Tests...

I was released from teaching today to attend a training session on how to administer the performance section of the state's intermediate level science exam. This section - which counts for 15% of the total score - includes three 15-minute stations. Each requires the students to use scientific instruments, like the microscope and triple beam balance, along with math and reasoning skills, to answer a series of questions. I will be test coordinator for my school, and this is the first year we have to give the test, since it is our first year with an eighth-grade class.

The training started out well but got a little slow by the end of the day. We practiced setting up the three stations and then actually took the test. Then we broke for lunch, and when we returned we practiced scoring the exam.

I like this part of the ILS exam. I think the questions are reasonable and encourage schools to include real science instruction in their curricula. Yet again, hearing other teachers talk about the ways their principals disregard science instruction, I realized how lucky I am to be in a school that values and prioritizes science, and how appalling it is that we need to be "activist" about promoting our discipline. Good science instruction should be a given, not something we have to fight for!

Scoring the exam was a real eye-opener. On the one hand, it reconfirmed my opinion that the exam is fairly sensible and grade-appropriate, not the standardized-test monster we often hear about... On the other hand, some aspects of it are not at all sensible or reasonable!

The scoring guide allows for a fair amount of error on the part of students, which is both good and bad. Building in room for measurement, rounding, and estimation error seems sensible to me. The way the exam works is that the test coordinator sets up a room with five copies of each of the three stations, so fifteen students test at a time, rotating through the three labs. Since the students are measuring organisms, finding the mass of objects, and so on, the test coordinator must do the lab at each station: the correct answers will vary from station to station because of small differences in the specimens, objects, and instruments. So it is clearly fair to allow for some variation between the student's answer and the answer measured by the test coordinator.

On the other hand, some questions build in so much room for error that the standards seem absurdly low, or allow latitude where students should simply be marked right or wrong, since they are doing basic arithmetic. Perhaps a small amount of leeway could be allowed for differences in rounding, but the exam scoring guide allows for much more than that.

And then there are questions where the detail expected in the answer is much greater than on any other question on the test, but for no apparent reason. I can't go into more detail here without divulging specific questions, unfortunately.

The scoring is going to take a long time: if a student gets the first part of a question wrong, you have to score the rest of that station's questions based on the student's (incorrect) answer, so that one mistake doesn't cost them the entire value of the station. My school is small, so it will probably take three of us only a couple of hours to score the exams, but heaven help people in large schools! Schools have the option of paying teachers to do the scoring after school. I am going to push for this in my school, since it seems extremely unfair for the state to mandate a test and then expect it to be marked during teachers' prep periods. Really, the state should be paying us to score the exams, but the money is supposed to come out of the school's existing budget. Inevitably, some schools will skimp and demand that teachers do the scoring during their free time during the school day; others release teachers for several days to score exams while their students have subs. It is this sort of thing that infuriates educators when different levels of government mandate more tests: too often, they do not take into account the enormous amount of work that goes into setting up, administering, and then scoring the exams.

Someone in the room who gave the test last year mentioned that he still had not received the results for the written portion of the test. That part of the test is given in June; note that it is now December! The woman running the training told him that his school's general test coordinator should be able to access the results. Poor communication of this sort is all too common. Conscientious teachers want to look at and analyze their students' test results, but the results become mired in bureaucracy and never reach us, or reach us halfway into the following school year, when we have already lost precious months of preparation time. It turns out that for the ILS exam, the state does not provide item analysis, so a science teacher or department chair has no way of knowing whether students are doing better on some types of questions than others, or in some fields of science than others. Clearly, this makes it hard to use the exams as feedback for improving our instruction!

I write all of this hoping that it will give people a slightly more nuanced view of how standardized testing actually works in the schools, at least in a large system like New York. Standardized tests are not always a bad thing, but there are many details that need to be done right to make testing a good thing!
