I want to say, up front, that the staff here tried to take these field tests very seriously. We tried to get the students "excited" (if that's even possible) about taking them. We went through every step required by law for administering the test. The one thing we could not control was how the students would act during the testing.
Student conduct during field testing drives me crazy and makes me do things like write this post. As someone with a degree in mathematics (specifically statistics), I question the validity of data obtained from this "random sample" of students taking the tests. Yes, you can randomly select schools to do the field testing. But the data will be heavily skewed by the attitudes of the test takers at those schools. And high schoolers WILL skew the data if they decide to.
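To make that skew concrete, here is a minimal simulation sketch. Everything in it is assumed for illustration only: a 50-question multiple-choice test scored as percent correct, a made-up "true" accuracy for students who try, and a christmas-tree rate that differs by school. It is not a model of any real exam, just a picture of how attitude alone can separate two schools with identical ability.

```python
import random

def simulate_school(n_students, true_accuracy, tree_rate, n_questions=50):
    """Return the mean percent-correct score for one simulated school.

    Illustrative assumptions (not from any real test):
    - true_accuracy: chance a serious student answers an item correctly
    - tree_rate: fraction of students who "christmas tree" (random guessing
      on four-choice items, i.e. about 25% accuracy)
    """
    scores = []
    for _ in range(n_students):
        accuracy = 0.25 if random.random() < tree_rate else true_accuracy
        correct = sum(random.random() < accuracy for _ in range(n_questions))
        scores.append(100 * correct / n_questions)
    return sum(scores) / len(scores)

random.seed(1)
# Two schools with identical "real" ability but different attitudes:
print(simulate_school(200, true_accuracy=0.70, tree_rate=0.05))  # roughly 67-68
print(simulate_school(200, true_accuracy=0.70, tree_rate=0.40))  # roughly 52
```

With the same underlying ability, a 40% christmas-tree rate drags the school average down by about fifteen points in this sketch. That gap reflects attitude, not learning, which is exactly the validity problem.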
Anyone who has seen Freshmen take a standardized test knows that a certain percentage of students will "christmas tree" the test. Christmas treeing is quite fascinating. Some students will pick one letter, say A, and bubble it in for the entire test. Most just bubble in A, B, C, D, A, ... so there is a beautiful pattern of diagonal dots. Others who are more imaginative will turn the answer sheet sideways and try to spell out a message such as THIS TEST SUCKS with the dots.
In Texas, students get it in their heads that the only tests that "matter" are the ones they take as Juniors. That leads to a sizeable (though still under 10%?) percentage of students christmas treeing as Freshmen. From experience, not official data, that number is cut in half on the sophomore tests. But that is still a LOT of students, and it affects the scores schools receive for the tests. Because this happens at similar rates throughout the state, the school-to-school results remain statistically comparable, and the scoring the state uses to grade schools can be justified.
Now add the fact that a percentage of high schoolers WILL NOT take a test seriously if it "doesn't count." Then make that test a four-hour test where they have to read passages and write essays, and the "give-a-darn" (GAD) factor drops to a minimum. When the GAD factor reaches a critically low level, some interesting things can happen.
Take, for example, the student who christmas treed the entire answer book even though the book covered two different tests and there was a line that said "STOP and turn your test in to the administrator" after the first test. What was he thinking? He was thinking, "I want to finish this test as soon as possible so I can get out of here." Or the students who were discussing (after the test had ended) what they had written. As I was telling them that they could not discuss the test, I heard one say that they had written Taylor Swift lyrics as their essay. What they were doing was imaginatively writing essays that meant something to them - no matter what the prompt was about. (But it DOES make you wonder about the prompt - was it really that bad, or did they never even read it?)
We have pretty darn good students. They perform at a high level when they are asked to and when the result of their effort is meaningful to them. If our school had been graded on the results of tests in every subject except math, we would have received the highest rating given to schools in Texas in 2011. These kids know how to take standardized tests and they know how to do well on them.
So, how much did it "cost" to take the field test? Well, tests and answer booklets were printed for our entire Junior class. The Juniors lost about 5 or 6 hours of instructional time over two separate days. Teachers were out of their classrooms monitoring the testing, and administrators were tied up with testing instead of being available for students, parents, and teachers. Now that the tests have been given, they will have to be graded - how much of the contract does that cost take up?
The total cost can only be estimated. But the bottom line is that whatever dollar amount you want to assign to this field test has to be weighed against the validity of the test results. I would estimate that between 20 and 40% of our students did not complete the test in a serious fashion. I would also be bold enough to say that another 10 to 20% just wanted to get it over with as fast as possible. The end result is that I can safely say at least 50% of the students gave less than their full effort on that field test. So how relevant will the data be from this test? Not relevant at all, and that is money that has been totally wasted.
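A quick back-of-envelope sketch shows why the effort rate matters as much as the dollar figure. The class size and per-student cost below are placeholders I made up, since the post gives no actual contract numbers; the 50% serious-effort rate is my own estimate from the paragraph above.

```python
# Hypothetical figures only - the post gives no real contract numbers.
students_tested = 400          # assumed size of the Junior class
cost_per_student = 30.00       # assumed: printing, shipping, scoring, etc.
serious_effort_rate = 0.50     # the estimate argued for above

total_cost = students_tested * cost_per_student
usable_responses = students_tested * serious_effort_rate

print(f"Total field-test cost:     ${total_cost:,.2f}")
print(f"Usable responses:          {usable_responses:.0f}")
print(f"Cost per usable response:  ${total_cost / usable_responses:,.2f}")
# Halve the serious-effort rate and the cost per usable data point doubles,
# and that is before counting the lost instructional time.
```

Whatever the real numbers turn out to be, the arithmetic runs the same direction: every low-effort answer sheet makes each usable data point more expensive.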