
Tuesday, February 21, 2012

Value Added Measurement: Wanted Professional Palm Reader

Want to bet on a horse? Put your trust in a bookie, whose job is to predict winners by studying breeding, training, and standings in recent races. Want to invest in the stock market? Put your trust in a stockbroker, whose job is to spot companies with growth potential and a stable financial structure.

Want to know about a child's school achievement? Put your trust in an algorithm: a formula that predicts how much progress a student should make based on a complicated equation of 10 factors. But do not ask how it works. Florida determined that socio-economic status would not be included as one of the 10 predicting factors.

StateImpact Florida and the Miami Herald went looking for an explanation of how it will work, and this is the answer they got:
"No lay person, teacher or reporter can understand it. So just trust us."

This formula will be used in Florida as the basis for merit pay this way:
The formula is designed to predict how students will score on the state’s standardized exam—the FCAT. And then it adjusts teachers’ pay depending on how well their students measure up against that predicted score.
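Florida has not published a plain-language account of its model, but value-added formulas of this general kind predict each student's score from prior performance and other factors, then credit or debit the teacher for the gap between actual and predicted scores. Here is a minimal sketch in Python; the factors, weights, and data are invented for illustration and are not Florida's actual 10-factor model:

    # Hypothetical sketch of a generic value-added calculation.
    # The factors, weights, and data are invented for illustration;
    # Florida's actual 10-factor formula has not been published in this form.
    from statistics import mean

    def predicted_score(student):
        """Predict this year's FCAT score from last year's data."""
        # A real model fits weights by regression over many factors;
        # these two factors and their weights are made up.
        return 0.9 * student["prior_score"] + 5.0 * student["attendance_rate"]

    def teacher_value_added(roster):
        """Average gap between actual and predicted scores for a class."""
        return mean(s["actual_score"] - predicted_score(s) for s in roster)

    roster = [
        {"prior_score": 300, "attendance_rate": 0.95, "actual_score": 280},
        {"prior_score": 310, "attendance_rate": 0.90, "actual_score": 290},
    ]
    print(teacher_value_added(roster))  # > 0 beats prediction; < 0 falls short

Merit pay then rises or falls with that average gap, which is why errors and instability in the prediction matter so much.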

Until recently, Chinese parents could pay $190 to sign their children up for "palm-reading tests that could allegedly tell a child's intelligence and professional aptitude." The tests have been determined to be pseudoscience, and Chinese educational authorities banned the practice.

"Predicting is not an equation."

Related posts:
SB736/HB7019: The trouble with value-added measurement
NUT Report: "We have to do something."

Student Data Collection: Purpose, Costs, Risks?

Wednesday, June 1, 2011

NUT Report: "We have to do something."

Ask reasonable questions about education reform initiatives, or ask for explanations of them, and the replies are predictable. When it comes to merit pay based on value-added measurement (VAM), the rationale boils down to two claims:
1) We need to know the truth.
2) We have to do something.

In addition to the two recent studies reported here and here, another group of assessment experts from the Economic Policy Institute sent a letter to the NY Board of Regents showing evidence that VAM is unstable and prone to error. Among the concerns expressed, this team of experts noted the following:
Teachers’ ratings are affected by differences in the students who are assigned to them. Students are not randomly assigned to teachers – and statistical models cannot fully adjust for the fact that some teachers will have a disproportionate number of students who may be exceptionally difficult to teach (students with poor attendance, who are homeless, who have severe problems at home, etc.) and whose scores on traditional tests have unacceptably low validity (e.g. those who have special education needs or who are English language learners). All of these factors can create both misestimates of teachers’ effectiveness and disincentives for teachers to want to teach the neediest students, creating incentives for teachers to seek to teach those students expected to make the most rapid gains and to avoid schools and classrooms serving struggling students.

These experts also cited a RAND Corporation study that concludes:
... the research base is currently insufficient to support the use of VAM for high-stakes decisions about individual teachers or schools. It is important that policymakers, practitioners, and VAM researchers work together, so that research is informed by the practical needs and constraints facing users of VAM and that implementation of the models is informed by an understanding of what inferences and decisions the research currently supports.

Nevertheless, the NY Board of Regents ignored the evidence and joined the wave of national experimentation without reasonable rationale or justification.

How many expert studies are required before education reformers respond to the legitimate concerns over this massive implementation effort? Unstable, unreliable data will not reveal the "truth," and "We have to do something" is not an adequate explanation. Decades of doing "something" that does not work is irresponsible policymaking.

The following experts signed the letter:
Eva Baker, Distinguished Professor, UCLA Graduate School of Education
Director, National Center for Research on Evaluation, Standards and Student Testing (CRESST)
President, World Educational Research Association, 2010-2012
Past President, American Educational Research Association

Linda Darling-Hammond, Charles E. Ducommun Professor of Education, Stanford University
Past President, American Educational Research Association
Executive Board Member, National Academy of Education

Edward Haertel, Vida Jacks Professor of Education, Stanford University
Chair, Board on Testing and Assessment, National Research Council
Vice-President, National Academy of Education
Past President, National Council on Measurement in Education

Helen F. Ladd, Edgar Thompson Professor of Public Policy and Professor of Economics, Sanford School of Public Policy, Duke University
President, Association for Public Policy Analysis and Management

Henry M. Levin, William Heard Kilpatrick Professor of Economics and Education, Teachers College, Columbia University
Past President, Evaluation Research Society
Past President, Comparative and International Education Society

Robert L. Linn, Professor Emeritus, University of Colorado at Boulder
Past President, American Educational Research Association
Past President, National Council on Measurement in Education

Aaron Pallas, Professor of Sociology and Education, Teachers College, Columbia University
Fellow, American Educational Research Association

Richard Shavelson, Dean Emeritus and Margaret Jacks Professor Emeritus, Stanford University
Past President, American Educational Research Association

Lorrie A. Shepard, Dean & Distinguished Professor, University of Colorado at Boulder
Past President, American Educational Research Association
Past President, National Academy of Education
Past President, National Council on Measurement in Education

Lee S. Shulman, Charles E. Ducommun Professor Emeritus, Stanford University
President Emeritus, Carnegie Foundation for the Advancement of Teaching
Past President, American Educational Research Association

Monday, May 30, 2011

NUT Report: Decades of test-based incentive programs do not yield expected results

Grumpy Educators reported here on an earlier study conducted by the National Research Council; the widget with the full text remains available on the right side of the screen. Last week, the National Research Council released another study, this one examining decades of high-stakes standardized testing, and concluded that the practice has not achieved the expected impact on student achievement.

Isn't it time to ask why we persist in pouring time, money, and effort into test-centric policies?

To read the entire report on this study, select the widget on the right at the top.

The following press release describes the study and its findings:

WASHINGTON — Despite being used for several decades, test-based incentives have not consistently generated positive effects on student achievement, says a new report from the National Research Council. The report examines evidence on incentive programs, which impose sanctions or offer rewards for students, teachers, or schools on the basis of students' test performance. Federal and state governments have increasingly relied on incentives in recent decades as a way to raise accountability in public education and in the hope of driving improvements in achievement.

School-level incentives -- like those of No Child Left Behind -- produce some of the larger effects among the programs studied, but the gains are concentrated in elementary grade mathematics and are small in comparison with the improvements the nation hopes to achieve, the report says. Evidence also suggests that high school exit exam programs, as currently implemented in many states, decrease the rate of high school graduation without increasing student achievement.

Policymakers should support the development and evaluation of promising new models that use test-based incentives in more sophisticated ways as one aspect of a richer accountability and improvement process, said the committee that wrote the report.

Incentives' Effects on Student Achievement

Attaching incentives to test scores can encourage teachers to focus narrowly on the material tested -- in other words, to "teach to the test" -- the report says. As a result, students' knowledge of the part of the subject matter that appears on the test may increase while their understanding of the untested portion may stay the same or even decrease, and the test scores may give an inflated picture of what students actually know with respect to the full range of content standards.

To control for any score inflation caused by teaching to the test, it is important to evaluate the effects of incentive programs not by looking at changes in the test scores tied to the incentives, but by looking at students' scores on "low stakes" tests -- such as the National Assessment of Educational Progress -- that are not linked to incentives and are therefore less likely to be inflated, the report says.

When evaluated using low-stakes tests, the overall effects on achievement tend to be small and are effectively zero for a number of incentives programs, the committee concluded. Even when evaluated using the tests attached to the incentives, a number of programs show only small effects.

Some incentives hold teachers or students accountable, while others affect whole schools. School-level incentives like those used in No Child Left Behind produce some of the larger achievement gains, the report says, but even these have an effect size of only around .08 standard deviations – the equivalent of moving a student currently performing at the 50th percentile to the 53rd percentile. For comparison, raising student performance in the U.S. to the level of the highest-performing nations would require a gain equivalent to a student climbing from the 50th to the 84th percentile. The committee noted, however, that although a .08 effect size is small, few other education interventions have shown greater gains.
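The percentile arithmetic can be checked by hand, assuming test scores are normally distributed: a gain of d standard deviations moves a median student to the percentile given by the standard normal CDF at d. A quick sketch in Python:

    # Check the press release's percentile arithmetic, assuming scores
    # follow a normal distribution: a gain of d standard deviations moves
    # a median (50th-percentile) student to the phi(d) percentile.
    from math import erf, sqrt

    def phi(d):
        """Standard normal CDF."""
        return 0.5 * (1 + erf(d / sqrt(2)))

    print(phi(0.08))  # ~0.532 -> about the 53rd percentile
    print(phi(1.00))  # ~0.841 -> about the 84th percentile

So the .08 effect lands near the 53rd percentile, while the 50th-to-84th jump cited for the highest-performing nations corresponds to a full standard deviation, more than twelve times larger.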

Effects of High School Exit Exams

The study also examined evidence on the effects of high school exit exams, which are currently used by 25 states and typically involve tests in multiple subjects, all of which students must pass in order to graduate. This research suggests that such exams decrease the rate of high school graduation without improvements in student achievement as measured by low-stakes tests.

Broader Measures of Performance Needed

It is unreasonable to implement incentives tied to tests on a narrow range of content and then criticize teachers for narrowing their instruction to match the tests, said the committee. When incentives are used, the performance measures need to be broad enough to align with desired student outcomes. This means not only expanding the range of content covered by tests but also considering other student outcomes beyond a single test.

Policymakers and researchers should design and evaluate alternate approaches using test-based incentives, the committee said. Among the approaches proposed during current policy debates are those that would deny tenure to teachers whose students fail to meet a minimal level of test performance. Another proposal is to use the narrow information from tests to trigger a more intensive school evaluation that would consider a broader range of information and then provide support to help schools improve. The modest and variable benefits shown by incentive programs so far, however, mean that all use of incentives should be rigorously evaluated to determine what works and what does not, said the committee.

In addition, it is important that research on and development of new incentive-based approaches do not displace investment in the development of other aspects of the education system -- such as improvements in curricula and instructional methods -- that are important complements to the incentives themselves, the report cautions.

###

The study was sponsored by Carnegie Corporation of New York and the William and Flora Hewlett Foundation. The National Academy of Sciences, National Academy of Engineering, Institute of Medicine, and National Research Council make up the National Academies. They are private, nonprofit institutions that provide science, technology, and health policy advice under a congressional charter. The Research Council is the principal operating agency of the National Academy of Sciences and the National Academy of Engineering. For more information, visit http://national-academies.org. A committee roster follows.

Contacts:
Sara Frueh, Media Relations Officer
Shaquanna Shields, Media Relations Assistant
Office of News and Public Information
202-334-2138; e-mail news@nas.edu

Pre-publication copies of Incentives and Test-Based Accountability in Education are available from the National Academies Press; tel. 202-334-3313 or 1-800-624-6242 or on the Internet at http://www.nap.edu. Reporters may obtain a copy from the Office of News and Public Information (contacts listed above).

NATIONAL RESEARCH COUNCIL
Division of Behavioral and Social Sciences and Education
Committee on Incentives and Test-Based Accountability

Michael Hout (chair)*
Professor and Natalie Cohen Sociology Chair
Department of Sociology
University of California
Berkeley

Dan Ariely
James B. Duke Professor of Psychology and Behavioral Economics
Fuqua School of Business
Duke University
Durham, N.C.

George P. Baker III
Herman C. Krannert Professor of Business Administration
Harvard Business School
Boston

Henry Braun
Boisi Professor of Education and Public Policy
Boston College
Chestnut Hill, Mass.

Anthony S. Bryk (until 2008)
President
Carnegie Foundation for the Advancement of Teaching
Stanford, Calif.

Edward L. Deci
Professor of Psychology;
Gowan Professor of Social Sciences; and
Director
Human Motivation Program
University of Rochester
Rochester, N.Y.

Christopher F. Edley Jr.
Professor and Dean
School of Law
University of California
Berkeley

Geno Flores
Deputy Superintendent
San Diego City Schools
San Diego

Carolyn J. Heinrich
Professor
LaFollette School of Public Affairs
College of Letters and Science
University of Wisconsin
Madison

Paul Hill
Director
Center on Reinventing Public Education, and
Professor
Daniel J. Evans School of Public Affairs
University of Washington
Seattle

Thomas J. Kane**
Professor of Education and Economics
Graduate School of Education
Harvard University, and
Deputy Director for Research and Data
Education Program
Bill and Melinda Gates Foundation
Seattle

Daniel M. Koretz
Professor
Graduate School of Education
Harvard University
Cambridge, Mass.

Kevin Lang
Professor
Department of Economics
Boston University
Boston

Susanna Loeb
Professor of Education and Business
Graduate School of Education
Stanford University
Stanford, Calif.

Michael Lovaglia
Professor of Sociology, and
Director
Center for the Study of Group Processes
Department of Sociology
University of Iowa
Iowa City

Lorrie A. Shepard
Dean and Distinguished Professor
School of Education
University of Colorado
Boulder

Brian Stecher
Senior Social Scientist and Associate Director
RAND Education
RAND Corp.
Santa Monica, Calif.

STAFF

Stuart W. Elliott
Study Director

* Member, National Academy of Sciences
** Was not able to participate in the final committee deliberations