SB736/HB7019 intends to apply value-added measurement (VAM) to tie a student's achievement on a test to the performance of the teacher; in effect, a poor student score = poor teaching. VAM uses statistical tools to estimate the contribution a teacher makes to student achievement gains. The formula includes whatever factors its developer chooses to include. What do experts have to say about the value of value-added measurement?
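What does such a formula look like? A common generic form in the research literature is a covariate-adjustment model; the sketch below is illustrative only, not the specific model in SB736/HB7019, whose details the bill leaves to be developed:

A_{it} = \lambda A_{i,t-1} + \beta X_{it} + \theta_{j(i,t)} + \varepsilon_{it}

Here A_{it} is student i's test score in year t, A_{i,t-1} is the same student's prior-year score, X_{it} holds whatever student and classroom characteristics the developer chooses to include, \theta_{j(i,t)} is the estimated "value added" of the student's teacher, and \varepsilon_{it} is everything the model leaves out. Much of the expert skepticism below is about what belongs in X and how much ends up in \varepsilon.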
1) Jim Angermeyr, Director of Research, Evaluation & Testing for Bloomington Public Schools and "one of the designers of a widely respected value-added test lots of Minnesota schoolchildren take two or three times a year," was interviewed by Beth Hawkins for the article "Do 'value-added' teacher data really add value?" In it, Angermeyr is described as "something of a standardized testing skeptic. He believes that economists tend to believe in using value-added data in evaluation. Educators and psychometricians, not so much."
“It’s not necessarily that the methodologies are wrong,” he said. “It’s that the inferences we’re drawing can be wrong.”
"The kids are the greatest of the variables, of course. The tests may tell you a student is reading better or sliding in math, but they don’t tell you whether she spent the summer with a tutor or he is so young the test isn’t as accurate as it would be in an older child.
Nor is the same test used from year to year. A particular student or teacher may fare better on a test closely normed with curriculum vs. one aligned with a set of knowledge-based standards."
“You leave out a lot of the potential variables,” Angermeyr said.
“They’re just not at the point where we should use them to make decisions about jobs.”
2) The National Research Council and the National Academy of Education gathered experts in this field for a workshop titled "Getting Value Out of Value Added." Participants generally agreed that there is no single accepted statistical model and that applying VAM to measure teacher quality is still a work in progress.
Discussion resulted in these additional conclusions:
- Results generated by existing models have a high degree of instability.
- Good results require good tests and good data from those tests.
- Experts find VAM useful for low-stakes purposes, such as identifying areas that need improvement.
- Experts recommend against using it for high-stakes decisions, such as teacher pay.
According to Henry Braun, the group determined the following:
"To nobody’s surprise, there is not one dominant VAM. Each major class of models has shortcomings, there is no consensus on the best approaches, and little work has been done on synthesizing the best aspects of each approach. There are questions about the accuracy and stability of value-added estimates of schools, teachers, or program effects. More needs to be learned about how these properties differ, using different value-added techniques and under different conditions. Most of the workshop participants argued that steps need to be taken to improve accuracy if the estimates are to be used as a primary indicator for high-stakes decisions; rather, value-added estimates should best be used in combination with other indicators. But most thought that the degree of precision and stability does seem sufficient to justify low-stakes uses of value-added results for research, evaluation, or improvement when there are no serious consequences for individual teachers, administrators, or students." (p.54)
While Florida plunges into creating dozens of new tests, North Carolina's legislature, in bipartisan agreement, is sending the Governor a bill to end some end-of-course tests, keeping only those needed to meet federal requirements and to measure student achievement. Two years ago, it voted to end a few others. The legislature wants to stop paying for so many tests that are expensive and failing to yield the returns once thought beneficial.
Florida senators and school boards are reportedly raising concerns over the cost of implementing SB736/HB7019. In the end, this is an unfunded mandate built on an experimental statistical model and an expensive test-development and implementation scheme that extends far beyond the reach of RT3 dollars. Legislators remain silent on the issue of costs and cost-benefits. The public has a right to know.
To read the "Getting Value Out of Value Added" report for free, go to the widget on the right side of this page, select the icon that looks like an open book with the word Read underneath it, and then choose Open Book in the small screen area. The document will open so you can read it easily.
The experts convened by the National Research Council and the National Academy of Education for this workshop included:
Rita Ahrens, Education Policy Studies
Joan Auchter, National Board for Professional Teaching Standards
Terri Baker, Center for Education, The National Academies
Dale Ballou, Vanderbilt University
Henry Braun, Boston College
Derek Briggs, University of Colorado at Boulder
Tom Broitman, National Board for Professional Teaching Standards
Alice Cain, House Committee on Education and Labor
Duncan Chaplin, Mathematica Policy Research
Naomi Chudowsky, Center for Education, The National Academies
Pat DeVito, AE Concepts
Beverly Donohue, New Visions for Public Schools
Karen Douglas, International Reading Association
Kelly Duncan, Center for Education, The National Academies
John Q. Easton, Consortium on Chicago School Research
Stuart W. Elliott, Center for Education, The National Academies
Maria Ferrão, Universidade da Beira Interior, Portugal
Rebecca Fitch, Office for Civil Rights
Shannon Fox, National Board for Professional Teaching Standards
Jianbin Fu, Educational Testing Service
Adam Gamoran, University of Wisconsin–Madison
Karen Golembeski, National Center for Learning Disabilities
Robert Gordon, Center for American Progress
Jeffrey Grigg, University of Wisconsin
Victoria Hammer, Department of Education
Jane Hannaway, Education Policy Center
Patricia Harvey, Center for Education, The National Academies
Lloyd Horwich, House Committee on Education and Labor
Lindsey Hunsicker, Committee on Health, Education, Labor, and Pensions
Ben Jensen, Organisation for Economic Co-operation and Development
Ashish Jha, Harvard School of Public Health
Moshe Justman, Ben Gurion University, Israel
Laura Kaloi, National Center for Learning Disabilities
Michael Kane, National Conference of Bar Examiners
Judith Koenig, Center for Education, The National Academies
Michael J. Kolen, University of Iowa
Adam Korobow, LMI Research Institute
Helen F. Ladd, Duke University
Kevin Lang, Boston University
Sharon Lewis, House Committee on Education and Labor
Valerie Link, Educational Testing Service
Dane Linn, National Governors Association
Robert L. Linn, University of Colorado at Boulder
J.R. Lockwood, RAND Corporation
Angela Minnici, American Federation of Teachers
Scott Marion, National Center for the Improvement of Educational Assessment
Daniel F. McCaffrey, RAND Corporation
Alexis Miller, LMI Research Institute
Raegen Miller, Center for American Progress
John Papay, Harvard University
Liz Potamites, Mathematica Policy Research
Ali Protik, Mathematica Policy Research
Sean Reardon, Stanford University
Mark D. Reckase, Michigan State University
Andre Rupp, University of Maryland
Sheila Schultz, HumRRO
Lorrie Shepard, University of Colorado at Boulder
Judith Singer, Harvard University
Andrea Solarz, Director of Research Initiatives, National Academy of Education
Gerald Sroufe, American Educational Research Association
Brian Stecher, RAND Corporation
Justin Stone, American Federation of Teachers
David Wakelyn, National Governors Association
Greg White, Executive Director, National Academy of Education
J. Douglas Willms, University of New Brunswick
Mark Wilson, University of California, Berkeley
Laurie Wise, HumRRO