Monday, March 9, 2015

Dump DIBELS

DIBELS (Dynamic Indicators of Basic Early Literacy Skills) is an early reading assessment that is widely used in schools. According to its website, DIBELS

Are a set of procedures and measures for assessing the acquisition of early literacy skills from kindergarten through sixth grade. They are designed to be short (one minute) fluency measures used to regularly monitor the development of early literacy and early reading skills.

In practice, DIBELS is a set of one-minute tests of a student’s ability to name letters, segment phonemes, identify initial sounds in words, read nonsense words, read fluently, and retell what was read. The creators of DIBELS argue that students’ ability to perform these tasks under strictly timed conditions predicts their future reading success or struggles.

DIBELS came to be widely used because it was closely tied to the Reading First and NCLB initiatives of the last 15 years. DIBELS fit nicely into the Reading First push for “scientifically researched” practices. The creators of DIBELS, a group of researchers out of the University of Oregon, were able to generate lots of experimental data showing DIBELS was a reliable instrument. Many school districts were forced to adopt DIBELS assessments in order to qualify for government funding.

But from the start, DIBELS has generated controversy. Ed Kame‘enui, a special education commissioner for the U.S. Department of Education, resigned after a Congressional investigation found that he had “gained significant financial benefit” by promoting DIBELS from his government position. Two other Department of Education employees were also implicated in the investigation. Perhaps more importantly, many, many highly respected literacy researchers have found that DIBELS has moved instruction away from what we know works for children.

P. David Pearson, one of the leading literacy experts in the country and a man known for avoiding hyperbole and taking a centrist view on issues related to literacy instruction, had this to say about DIBELS:

I have decided to join that group of scholars and teachers and parents who are convinced that DIBELS is the worst thing to happen to the teaching of reading since the development of flash cards (Goodman, K., et al. (2007). The Truth About DIBELS).

In the same volume, literacy researcher Sandra Wilde found that while DIBELS claims “to strongly predict whether individual children are likely to fail to learn to read,” it just doesn’t.

Also in The Truth About DIBELS, University of Arizona professor emeritus and long-time reading theorist Kenneth Goodman posits that

DIBELS is based upon a flawed view of the nature of the reading process and, because of this fundamental flaw, provides all who use it with a misrepresentation of reading development. It digs too deeply into the infrastructure of reading skill and process and comes up with a lot of bits and pieces but not the orchestrated whole of reading as a skilled human process.

In a technical report from the Literacy Achievement Research Center, Pressley et al. (2005) found that DIBELS

mis-predicts reading performance on other assessments much of the time, and at best is a measure of who reads quickly without regard to whether the reader comprehends what is read.

What is it that makes DIBELS the “worst thing to happen to the teaching of reading” since flash cards? As Pearson sees it, the use of DIBELS in schools exerts an undue influence on the curriculum, driving reading instruction toward the little bits of reading and away from the whole of literacy instruction. Students are held accountable to indicators of reading progress rather than actual reading progress, and teachers are forced to instruct in ways that violate well-documented theories of development and broader curricular goals. In other words, DIBELS becomes the driver of the curriculum, and the curriculum is narrowed in unproductive ways as a result.

Ultimately, Pearson says, DIBELS fails the test of consequential validity: the widespread use of DIBELS has had dire consequences for the actual teaching of reading. Teachers have been forced by this test to focus on a narrow definition of the “stuff” of learning to read, rather than on the broader context of what reading actually is, the ability to make sense of the squiggles an author puts on a page. These consequences make DIBELS unworthy of use as an assessment tool.

If DIBELS has become a scourge in your school or district, I suggest you gather up the research cited here and question those who are foisting this highly flawed, and ultimately counterproductive, assessment practice on your students and fellow teachers.
