Center for Greater Philadelphia
Operation Public Education
Theodore Hershberg

Value-Added Assessment

Used by a growing number of states, value-added assessment is a new way to measure teaching and learning. This method allows researchers to identify not only the progress made by individual students but also the extent to which individual teachers, schools, and districts have contributed to that progress.

Value-added assessment gives educators a powerful diagnostic tool for measuring the effect of pedagogy, curricula, and professional development on academic achievement, and provides all K-12 stakeholders a fair and accurate foundation on which to build a new system of accountability.


What is value-added assessment?
Value-added assessment is a new way of analyzing test data that can measure teaching and learning. Based on a review of students' test score gains from previous grades, researchers can predict the amount of growth those students are likely to make in a given year. Thus, value-added assessment can show whether particular students - those taking a certain algebra class, say - have made the expected amount of progress, have made less progress than expected, or have been stretched beyond what they could reasonably be expected to achieve. Using the same methods, one can look back over several years to measure the long-term impact that a particular teacher or school had on student achievement.
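The gain-projection idea described above can be sketched in a few lines of code. This is an illustrative simplification with hypothetical scores, not the actual statistical model used by any state; real value-added systems rely on far more sophisticated mixed-model statistics computed over many students and years.

```python
# Illustrative sketch of the value-added idea (hypothetical scores; real
# systems use mixed-model statistics, not this simple averaging).

def projected_score(past_scores):
    """Project next year's score: last score plus the mean of past gains."""
    gains = [later - earlier for earlier, later in zip(past_scores, past_scores[1:])]
    return past_scores[-1] + sum(gains) / len(gains)

# One student's scale scores from grades 3-5.
history = [420, 440, 455]                 # gains of 20 and 15 points
projection = projected_score(history)     # 455 + 17.5 = 472.5
actual_score = 480

# A positive difference means the student grew more than expected this year.
value_added = actual_score - projection
print(projection, value_added)            # 472.5 7.5
```

The same comparison of projected versus actual scores, accumulated over several years, is what allows the long-term look-back at a teacher's or school's impact.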


How is value-added assessment different from traditional measures of student performance?
Student performance on assessments can be measured in two very different ways, both of which are important. Achievement describes the absolute levels attained by students in their end-of-year tests. Growth, in contrast, describes the progress in test scores made over the school year.

In the past, students and schools have been ranked solely according to achievement. The problem with this method is that achievement is highly correlated with the socioeconomic status of a student's family. For example, according to the Educational Testing Service, SAT scores rise with every $10,000 of family income. This should not be surprising, since all the variables that contribute to high test scores correlate strongly with family income: good jobs, years of schooling, positive attitudes about education, the capacity to expose one's children to books and travel, and the considerable social and intellectual capital that wealthy students bring with them when they enter school.

In contrast, value-added assessment measures growth and answers the question: how much value did the school staff add to the students who live in its community? How, in effect, did they do with the hand society dealt them? If schools are to be judged fairly, it is important to understand this significant difference.


How does value-added assessment sort out the teachers' contributions from the students' contributions?
Because individual students rather than cohorts are traced over time, each student serves as his or her own "baseline" or control, which removes virtually all of the influence of the unvarying characteristics of the student, such as race or socioeconomic factors.

Test scores are projected for students and then compared to the scores they actually achieve at the end of the school year. Classroom scores that equal or exceed projected values suggest that instruction was highly effective. Conversely, scores that are mostly below projections suggest that the instruction was ineffective.
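The classroom-level comparison amounts to averaging each student's actual-minus-projected difference. The sketch below uses hypothetical numbers and a bare average; it is not the actual EVAAS computation, which applies statistical safeguards before drawing conclusions.

```python
# Hypothetical sketch: average each student's (actual - projected) score
# difference to gauge a classroom's instructional effect.

def classroom_effect(students):
    """Mean of (actual - projected) across a classroom's students."""
    return sum(actual - projected for projected, actual in students) / len(students)

# Each tuple is (projected score, actual end-of-year score).
classroom = [(470, 478), (455, 460), (490, 486), (465, 472)]

# A positive mean suggests effective instruction; a mean well below zero
# suggests students on average fell short of their expected growth.
print(classroom_effect(classroom))  # 4.0
```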

At the same time, this approach does recognize student-related factors and other extenuating circumstances. For instance, imagine that a student's performance falls far below projected scores, while other students in the same class, with comparable academic records, do make the progress they were expected to make. This would be taken as evidence of an external effect, related to the student's home environment or some other variable lying outside the range of a teacher's influence.


Does value-added assessment raise student achievement?
Not by itself. Value-added assessment is just a tool with which to measure progress. However, that tool can certainly be useful to people working to raise student achievement. Think of it as a stopwatch - it doesn't make people run any faster, but you can use it to time members of the track team, in order to decide how to maximize the strengths of each runner - determining who should run the anchor leg of the relay, how fast a miler should run the first lap, and what training regimens to implement - to achieve the team's overall goals.

Likewise, value-added assessment provides school leaders with rich diagnostic information, which they can use in many ways such as assigning personnel, allocating resources and identifying mentor teachers and coaches. Further, this tool can help states and school districts to design comprehensive accountability systems that can assess the impact that particular kinds of teaching, curriculum, and professional development have on academic achievement.


What diagnostic information can value-added provide educators?
With a value-added analysis, educators have a tool that shows them their instructional results, the focus of their instruction (which students have benefited most), and its impact (how effective it has been in providing students with a year's worth of growth from where they began the year). Student achievement by classroom, grade, subject, school, or district can be displayed, revealing distinct patterns of growth for students at different achievement levels.


How can value-added assessment complement and improve the measurement of AYP requirements in No Child Left Behind?

  • It tracks individual students over time. NCLB's Adequate Yearly Progress requirements don't follow the same student from, say, fourth grade to fifth grade; rather, they compare this year's fourth graders with last year's fourth graders, whether or not the new cohort resembles the one from the previous year. In short, AYP can amount to an apples-to-oranges comparison. It cannot show the progress made by particular students or groups of students over time, which is the only way to make valid comparisons of students' performance.
  • It encourages schools to raise the achievement of all students, not just the subset of students whose improvement will satisfy AYP goals.
  • It focuses attention on individual classrooms. Under NCLB, schools - rather than teachers and administrators - are held directly accountable for student achievement, and there are no rewards for success, only sanctions for failure. However, while struggling students are indeed found in classrooms of all types, data from Tennessee make unequivocally clear that they are not randomly distributed: they are found disproportionately in classrooms with ineffective instruction. If the focus is on struggling students rather than on the teachers who are providing ineffective instruction, scarce resources will be devoted to the symptoms rather than their underlying causes. When used at the classroom level, value-added assessment gives individual teachers and administrators specific data describing two key patterns - the focus and impact - of their instruction, allowing them to target interventions where they are needed.
  • It is a better measure of school improvement. Under NCLB, school progress is an all-or-nothing affair - either the school makes AYP or it doesn't. However, value-added assessment shows any amount of progress that a school has made, even if it falls short of the AYP threshold. It does not sugarcoat low achievement, but it does acknowledge the actual steps - both small and large - that schools take.
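The cohort-versus-longitudinal distinction in the first bullet can be made concrete with a toy example. All scores below are hypothetical and chosen only to show how the two comparisons can point in opposite directions.

```python
# Hypothetical toy example: AYP-style cohort comparison versus
# longitudinal tracking of the same students.

# AYP compares this year's 4th graders with last year's 4th graders,
# who are different children entirely.
grade4_last_year = [450, 460, 440]
grade4_this_year = [445, 455, 435]
ayp_change = sum(grade4_this_year) / 3 - sum(grade4_last_year) / 3

# Longitudinal tracking follows the same students from grade 4 to grade 5.
grade4_scores = {"s1": 450, "s2": 460, "s3": 440}
grade5_scores = {"s1": 462, "s2": 472, "s3": 452}
mean_gain = sum(grade5_scores[s] - grade4_scores[s] for s in grade4_scores) / 3

# The cohort comparison shows an apparent 5-point decline even though
# every tracked student actually gained 12 points.
print(ayp_change, mean_gain)  # -5.0 12.0
```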


By recognizing incremental progress, rather than insisting that all students reach a certain threshold, doesn't the value-added approach let educators off the hook, allowing them to paint a rosy picture of low-performing students?

Making a year's worth of progress is significant, and it is a target worth shooting for. In fact, if all we did was ensure that every child achieved a year of growth every year, our students would make dramatic learning gains of the magnitude NCLB is meant to encourage. However, growth by itself is not enough, for it denies too many students the chance to reach proficiency; to be successful, a school must ensure that all students reach an absolute standard. Without such a guarantee, the fear is that too many adults will continue to offer excuses as to why many students - mostly those with disabilities, from poor families, with limited English proficiency, or from racial minorities - fail to reach proficiency.

The challenge, then, is how to retain the current emphasis on ensuring that all students are able to perform at high levels while also allowing some flexibility and time for schools that must educate our lowest achieving and most poorly prepared students. One option would be to adopt a Growth to Standards model that maintains the requirement that students reach proficiency but allows schools flexibility in meeting those standards by incorporating a high-quality approach to measuring individual student growth.


What is the history of value-added?
Value-added assessment was invented by statistician Dr. William Sanders while he was working in the field of agricultural genetics at the University of Tennessee. In the early 1980s, when Lamar Alexander was Governor of Tennessee, Sanders learned that the administration was searching for an objective measure by which schools and educators could be held accountable for student learning. The notion that standardized test results could be used to determine the effectiveness of teachers was, and still is, a highly controversial idea. Sanders and his team at the University of Tennessee agriculture school thought they could do it based on theories applied in agricultural genetics. They wrote a proposal to the governor and were granted the rights to all of the test data for all of the students in Knox County schools.

Their initial studies gained little attention. It wasn't until 1992, when a Tennessee Supreme Court order demanded a more equitable funding system for schools, that a new interest in accountability surfaced and Sanders' formula began to attract notice. It became part of Tennessee's Educational Improvement Act that year and is still in use across the state today.

Dr. Sanders is now manager of value-added assessment and research for SAS Institute Inc. in Cary, N.C. He assumed the SAS position in June of 2000, upon retiring after more than 34 years as a professor and director of the University of Tennessee's Value-Added Research and Assessment Center. Since the introduction of his value-added model, similar models have been developed by others across the country.


How many states and school districts are using value-added assessment?
Value-added assessment has been used statewide in Tennessee since 1992; it has been mandated for use by all school districts in Pennsylvania and Ohio, and it is used by several hundred school districts in 21 states.

New legislation in Arkansas and Minnesota calls for implementing a form of value-added measurement in those states, and the School Boards Associations in Iowa and New York are currently piloting a value-added program. Dallas and Seattle are the most prominent urban districts that use the value-added approach, along with a number of districts in Colorado, North Carolina, and Florida.



How can value-added assessment, combined with the sort of accountability system that OPE recommends, help my state or school district?
Value-added assessment and a comprehensive accountability framework can serve as a model for states and districts to:

  • Establish a politically viable and economically feasible system for evaluating and compensating individual teachers and administrators. By following individual students over time, value-added accounts for student background characteristics over which schools have no control and that tend to bias test results. This enables educators to identify not only the progress made by students but also the extent to which individual teachers, schools, and districts have contributed to it. OPE's accountability system uses a teacher's value-added score for half of his or her evaluation; the other half is done through observation conducted in a peer review process. By combining empirical value-added measures with observation, this comprehensive accountability plan offers a fair and effective method for evaluating and compensating educators based on performance.
  • Provide much needed professional development and support for educators. The OPE accountability system expands professional development to 12 days per year so teachers have ample time to gain new knowledge and to develop new skills. It also provides new teachers with mentors for several years, a much-needed reform given how little value new teachers currently add, and creates a new category of teacher coaches to work with their colleagues to improve their craft. Mentors and coaches will be drawn from the ranks of Advanced and Distinguished educators, but in the start-up years of the OPE system recruits will be drawn from the Career category until enough teachers move into the Advanced and Distinguished ranks.
  • Bolster the morale of effective teachers working in low-income schools. Because value-added accounts for socioeconomic and demographic differences, its outcome measures reveal the extent to which educators have succeeded in helping their students move forward, regardless of where they started. When schools are ranked by their value-added scores rather than on the basis of raw test scores, which are greatly influenced by family income, teachers can be defined as successful by virtue of having "stretched" their students beyond what could be reasonably expected based on their past academic achievement.
  • Strengthen school leadership. Because value-added helps identify outstanding teachers, it makes possible the recruitment of teacher coaches and mentors who can play a vitally important role in improving the quality of classroom instruction.

OPE's accountability plan is symmetrical - it treats administrators the same way it treats teachers. The OPE system evaluates administrators according to how effectively they promote high achievement for all students, use student-learning data to make decisions, and build a school culture of high standards and continuous professional development.

Under OPE's accountability system, administrators will be compensated according to a career ladder that recognizes their skills and accomplishments, their success in meeting Adequate Yearly Progress goals, and the value-added scores of their school or district. School leaders begin as Interns with a mentor administrator, then progress to Career stage and, if they demonstrate excellence, reach Distinguished status.


What is the downside? What are the risks of implementing value-added assessment as part of an accountability system?
There are a number of statistical safeguards built into the value-added methodology to ensure that educators will be judged fairly. Most important is the use of multiple years of data, so that educators are not penalized because they've had a personal crisis or temporary difficulty. Additionally, test score data are coupled with observation of their classroom skills and knowledge, further reducing the possibility of an unfair assessment.

As a final layer of protection, when a teacher's evaluation shows that he or she is below proficient in either the value-added or the observation portion of an evaluation, that teacher is referred for Peer Assistance and Review (PAR) to check for inconsistencies or consider extenuating circumstances to ensure that the teacher is an appropriate candidate for remediation.


What have value-added studies shown about the impact a single teacher can have on student performance?
By implementing a value-added approach, states can ensure that their students receive the quality teaching they need to increase their odds for success. William Sanders' research in Tennessee indicates that fifth grade students who had three very effective teachers in a row gained 50 percentile points more on the state's assessment than students who had three ineffective teachers. A similar study in Dallas, using a different test and a different value-added methodology, found identical results. Additional research conducted by Dr. June Rivers, associate director at EVAAS®, distributor of the Sanders value-added model, indicates that fourth-graders in the bottom quartile of performance had less than a 15 percent chance of passing the state's high-stakes exit exam in ninth grade if their fifth, sixth, seventh, and eighth grade teachers were drawn from the bottom 25 percent of the teacher pool (as measured by value-added), but a 60 percent chance of passing if they had four teachers drawn from the top 25 percent.


How much can we expect improvement to accelerate if there are several high performing teachers in the same school? Are good teachers penalized because they don't have other good teachers around them?
Based on Tennessee's experience, where value-added has been used in all schools for over a decade, peer influences have not been shown to have a significant effect on a teacher's value-added score. In other words, good teachers excel regardless of where or with whom they teach.

We do know that good teachers are not penalized for having ineffective colleagues, because the past effects of those teachers are considered in calculating current value-added scores. In addition, our plan calls for extensive professional development, assistance from mentors and coaches, and other support to help all teachers improve their craft.

And remember, value-added is only part of an evaluation in the OPE system. Because multiple measures are used to evaluate educators, factors such as peer influences that could have some impact on value-added scores will not significantly affect a teacher's or an administrator's overall evaluation.


How does student mobility affect the ability to determine value-added progress?
High mobility is not a problem for a state-based value-added system, because students can be traced from district to district within the state; their scores are then attributed to each teacher only in proportion to the share of the school year the student spent in that teacher's class.
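This proportional attribution can be sketched as follows. The teacher names and fractions are hypothetical, and actual systems handle the split inside the statistical model rather than by simple multiplication.

```python
# Hypothetical sketch: credit each teacher with a share of a mobile
# student's gain proportional to the fraction of the year taught.

def attribute_gain(gain, fractions):
    """Split a student's gain across teachers by fraction of year taught."""
    return {teacher: gain * frac for teacher, frac in fractions.items()}

# A student who gained 10 scale-score points spent 60% of the year in
# Teacher A's class and 40% in Teacher B's after moving districts.
shares = attribute_gain(10.0, {"Teacher A": 0.6, "Teacher B": 0.4})
print(shares)  # Teacher A credited 6.0 points, Teacher B 4.0
```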


There is a presumption that good teachers are doing something that can be replicated, and therefore it is fair to assume that other teachers can do those things too. However, if only a few teachers are making substantial gains, does this suggest that the value-added measure will merely identify superstars rather than raise everyone's performance across the board?
If ordinary students can learn to achieve at much higher levels, then it is reasonable to expect that all teachers can as well. Value-added provides rich diagnostic data that allow all teachers to see the impact and the focus of their individual instruction. Once teachers understand where and why they have been effective, they can begin sharing best practices with their colleagues. Understanding areas of strength can help correct areas of weakness and help all educators improve their craft. Ability is differentially distributed among adults just as it is among children, but everyone can perform at higher levels if provided appropriate resources and instruction.


If a teacher has 25 students and a number of low-performing students drop out at the beginning of the year, won't the teacher's ratings be raised?
The level of student performance is not an issue in an individual educator's value-added score because what is being measured is growth, not absolute achievement. Value-added assessment measures the difference between a student's projected score - which is based on past performance - and his or her actual score. Therefore, it doesn't matter what the mix of students is at the start of the year or if any specific students (whether they are previously low achievers or high achievers) drop out.
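A two-student illustration (with hypothetical scores) shows why absolute level drops out of the measure: only growth against each student's own projection counts.

```python
# Hypothetical illustration: the growth measure ignores absolute level.

def growth_vs_projection(projected, actual):
    """A student's contribution to value-added: actual minus projected."""
    return actual - projected

# A high achiever and a low achiever who each beat their own projection
# by 8 points contribute identically, so the classroom mix - and any
# dropout - does not bias the measure.
high_achiever = growth_vs_projection(480, 488)
low_achiever = growth_vs_projection(400, 408)
print(high_achiever, low_achiever)  # 8 8
```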

© 2004 Center for Greater Philadelphia