In an earlier post, I argued for a more enlightened approach to teacher evaluation, one that recognizes the value of each teacher and aims at feedback that is both useful and conducive to further professional development. Such an approach would include observations, teacher collaboration, and systematic feedback, as well as artifacts of instruction, including pupil progress indicators.
In this post, I would like to add some detail to that proposal. Specifically, I would like to look at which student progress indicators make sense as part of a teacher evaluation.
The corporate reformers of education are in love with Value-Added Measures (VAMs) as a central pupil progress indicator. VAMs attempt to determine a teacher's effectiveness through student performance on standardized tests. The Gates Foundation's ambitious and flawed study of teacher effectiveness called for VAMs to account for one-third of a teacher's effectiveness rating.
Research, however, does not support the use of VAMs for any significant part of a teacher evaluation. A report prepared for the Governor's Task Force on Teacher Evaluation in New Jersey by EQuATE (2011) surveyed the research on VAMs and concluded the following:
Research studies show that the teacher’s
effect on value-added scores, based on [standardized] tests, accounts for only
3-4 percent of the variation. Fully 90 percent of the variation in VAMs is
attributable to student characteristics and the interaction of
learning/test-taking styles with the instruments used to measure achievement. To ascribe a weight to
this measure that exceeds its explanatory power would be malpractice at best.
Linda Darling-Hammond et al. (2011) cited the following concerns about VAMs:
1. Value-added models of teacher effectiveness are highly unstable.
2. Teachers' value-added ratings are significantly affected by differences in the students who are assigned to them.
3. Value-added ratings cannot disentangle the many influences on student progress.
If VAMs are a notably unreliable measure of student progress,
what can we suggest that would provide a better picture of student progress?
Here is one person’s list of possible artifacts of student growth.
· Samples of student work showing growth over time
· Common or locally developed assessments
· Student work samples scored on common or locally developed rubrics
· Questionnaires, checklists, or rating scales to measure non-cognitive growth
Examples of student work showing growth over time can include
student writing, student journals, classwork or homework assignments, pre- and
post-performance on teacher developed quizzes and tests, even sample pages from
books read at the beginning of the year and the end of the year.
Examples of common assessments could include establishing a reading level using a recognized format such as the DRA or Benchmark system, or assessments developed at the grade or department level for all students in a particular grade or course.
Rubric-scored work may employ national or state rubrics used for determining writing quality, or locally developed rubrics designed to assess student ability in a range of performance tasks.
Ideally, measures of non-cognitive factors related to student growth are developed locally, but many commercially developed questionnaires and checklists are also available.
In this vision of an evaluation model, the teacher takes the lead in gathering materials over time that demonstrate student growth in the classroom. This material is then shared in a collegial give-and-take with supervisors. The teacher has the opportunity to demonstrate effectiveness, and the supervisor has the opportunity to provide feedback and make recommendations for continued growth.
Does this model demand time of the teacher and the supervisor? Absolutely. Does it demand time for teacher colleagues to develop common assessments? Yes. Is it idealistic? Perhaps. More importantly, though, it values the classroom teacher as the best observer of pupil progress, treats the local supervisor as the key audience for this information, and provides a more reliable measure of teacher effectiveness than that darling of the reformers, the VAM. Most importantly, these suggestions have the potential to impact student learning in a positive way.
References
EQuATE (2011). Creating a Better System: Recommendations for a Systemic Approach to Improving Educator Effectiveness. Report delivered to the New Jersey Governor's Task Force on Teacher Evaluation.
Darling-Hammond, L., et al. (2011). Getting Teacher Evaluation Right: A Background Paper for Policy Makers. Research Briefing, AERA.