Also missing from the model is any independent evidence that the test-effective teachers are perceived as generally effective by parents, administrators and other teachers. Everything hangs on test scores.
Where such tests are important in accountability schemes, teaching to the test will be even more prevalent. Important learning that is not and often cannot be measured with multiple-choice tests will not be counted in the determination of value gained. For schools that focus on more expansive and richer areas of learning, "value added" cannot represent what the school is trying to accomplish. In short, while ostensibly a means to assess progress in learning, "value added" reinforces the most narrowing aspect of testing - thereby reducing, not increasing, real value.
There are also possible problems with the model itself. We must say "possible" because Sanders has refused to tell anyone how it works. Indeed, he has contracted with a private firm to provide his analysis to school systems for a fee. This has greatly angered assessment experts who would like to know how the model works in order to improve on it, debunk it, or simply explore its possibilities and limits, as is customary in the open world of research.
For instance, Sanders claims that because his model rests on prior test scores, it removes the impact of socio-economic status. That is, because it calculates changes in scores from, say, grade three to grade four, he argues it is measuring changes independent of where the child started. This is debatable - some would argue that the effects of socio-economic status are ongoing. Without access to the model, though, no debate is possible.
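The gain-score idea described above can be sketched crudely in code. What follows is a minimal, hypothetical illustration of averaging grade-to-grade score changes by teacher; it is emphatically not Sanders' proprietary model, whose internals remain undisclosed, and the teachers and scores are invented.

```python
# A deliberately simplified gain-score illustration of the "value added" idea:
# compute each student's change in score from grade three to grade four, then
# average those gains by teacher. This is NOT Sanders' actual model, which has
# never been publicly disclosed; all names and numbers are hypothetical.

# Hypothetical records: (teacher, grade-3 score, grade-4 score)
records = [
    ("Teacher A", 410, 445),
    ("Teacher A", 380, 402),
    ("Teacher B", 500, 505),
    ("Teacher B", 470, 490),
]

gains = {}
for teacher, grade3, grade4 in records:
    gains.setdefault(teacher, []).append(grade4 - grade3)

# Average gain per teacher: the naive "value added" estimate
value_added = {t: sum(g) / len(g) for t, g in gains.items()}
print(value_added)  # {'Teacher A': 28.5, 'Teacher B': 12.5}
```

Even this toy version shows why critics want the real model opened up: whether such gains are truly independent of socio-economic status depends entirely on statistical choices hidden inside the proprietary analysis.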
Suspicions about the utility of this particular value-added model are increased by reports that crucial teacher quality statistics are unstable: a teacher who is very effective one year might not be the next. This raises fundamental and vexing questions about the model's accuracy. Again, in the absence of open and scholarly debate, these questions cannot be addressed.