Tuesday, March 24, 2015

Do Readers Shift from Learning to Read to Reading to Learn?

My brief and mostly relaxing mini-vacation in Florida was interrupted last Monday when the hotel slid a complimentary copy of USA Today under my door. Contained therein was an article by the Thomas B. Fordham Institute’s Robert Pondiscio entitled “Shifting from Learning to Read to Reading to Learn.”

As education reform cheerleaders go, Pondiscio is far from the worst. He recognizes some of the educational realities that others ignore, including that high-stakes tests have a greater impact on what gets taught than do standards. He also recognizes that any standardized test of reading comprehension is not so much a reading test as a test of background knowledge. He says:

But a reading comprehension test is a de facto test of background knowledge and vocabulary acquired in school and out. It doesn’t take very many missing bits of background knowledge and vocabulary to rob a reading passage of meaning.

Pondiscio even speculates that we should do away with high-stakes reading tests beyond grade three. I can certainly agree with him there, although I would do away with them altogether.

When it comes to a discussion of literacy instruction, however, Pondiscio and I part company. In the USA Today article, Pondiscio says that he wants to clear up some “common misconceptions about reading.” He then perpetuates some common misconceptions about reading.

Pondiscio sees reading as two distinct processes: decoding, which he defines as the skill of matching sounds to letters and learning to blend them, and reading comprehension, which is making sense of the text and which he sees as “intimately entwined with background knowledge and vocabulary.”

Pondiscio sets up a false dichotomy. From the very earliest stages of reading, children lean on their background knowledge and vocabulary, not only to make sense of text, but to decode. I explained the limits of “sounding it out” as a decoding strategy in a previous post. Beginning readers coordinate their phonics knowledge, their oral language knowledge, and their efforts to make sense of what they are reading in order to decode. Here is the example I used in the previous post:

            How would you complete this sentence?

            The boy studied for the big test all ___________.

            Chances are you have generated words like the following: day, night, evening, afternoon, morning, week.

            Notice that all the words generated were nouns. All native and proficient speakers of English know that a noun will come in this place in the sentence because this is Standard English syntax. Only a noun will "sound right."

            Notice also that all the words you generated to end this sentence are nouns of time. Because we expect English to "make sense" we use our semantic understanding of the language to predict a meaningful word for the context.

            Now suppose that I showed the sentence this way:

            The boy studied for the big test all n__________.

            Immediately you are likely to say "night", because it looks right, sounds right and makes sense. Notice also that if you tried "sounding out" this word, you would run into trouble because the "gh" is silent. 

From this more complete view of decoding, we see the reader as a problem solver drawing on many pieces of information, including comprehension of the text up to this point, to decode. So, even in learning to decode, children who have rich and broad background knowledge and who are native speakers of English have an advantage that is not unlike the advantage that they would have in a test of comprehension.

When it comes to the second part of this dichotomy, reading comprehension, Pondiscio argues that it is not a skill.

To understand Pondiscio’s stance on reading comprehension, it is important to have some background knowledge on him. Before joining the Fordham Institute, he worked as Director of Communications for the Core Knowledge Foundation. The Core Knowledge Foundation, of course, is the organization founded by E. D. Hirsch and dedicated to the proposition that what is missing from American education is lots of knowledge of “stuff”; in this view, learning lots of “stuff” is the essence of education.

So it is not surprising that Pondiscio’s view of reading comprehension is dominated by the idea that you need to know lots of stuff to read and comprehend well. He is not entirely wrong about this, although I suspect we could all have some good arguments about what “stuff” we should know, and about whether, in the age of the internet, it is more important to acquire knowledge of stuff or knowledge of how to find and critically analyze all the “stuff” that is out there.

Where he is wrong is in asserting that reading comprehension is not a skill, because it most certainly is. Pondiscio is partly right when he asserts that reading comprehension is greatly impacted by background knowledge, something that was not acknowledged by the “chief architects” of the Common Core. But reading comprehension is also partly a skill and it is a skill that can and should be directly taught (See Fielding and Pearson here and Duke and Pearson here).

What are the skills of reading comprehension that we should be directly teaching? According to Duke and Pearson they include the following.

·         preview and prediction
·         think aloud (monitoring for understanding and clarifying)
·         visualization
·         text structure
·         summarization
·         questioning
·         determining vocabulary from context

All of our good direct instruction in reading comprehension strategies cannot take the place of lots of opportunities for children to read widely in a variety of texts. On this point Pondiscio and I agree. Reading widely is critical to building the background knowledge for further reading. I think this wide reading is likely to be more productive if we also help students do it more skillfully through informed instruction in the skills related to comprehending text.

It is more useful to think of reading not as a dichotomy divided into the “skill” of decoding and a content knowledge driven comprehension, but rather as a unified and active search for meaning practiced at various levels of proficiency by children who are developing both the skill and the will to read. Children don’t shift from learning to read to reading to learn as Pondiscio suggests; they actively read to make sense of what they read from the first time they pick up a book. This effort to make sense drives the development of decoding skills, comprehension strategies and content knowledge.


Saturday, March 21, 2015

The Tyranny of the One Right Answer

My son was a bit of an outside-the-box thinker, a type that has never been very welcome in the American public high school. So, we were resigned to the fact that he would often struggle in school, particularly in math, which he hated. Our concern came to a head, though, when it became apparent that the boy might not graduate from high school because he could not pass algebra. Dutifully, I sat down with him and asked what we could possibly do to make sure he passed this time. My son, never one to answer a direct question with a direct answer, responded, “You know what the problem with math is, Dad?”

“No, what is it?”

“It’s this obsession with the one right answer.”

I laughed in spite of myself. I, of course, explained that sometimes in life you need that one right answer. If you are building a bridge or balancing a checkbook or trying to locate a place on a map, you need that one right answer. But I also took some pride in the response. After all, I had become a social studies teacher, in part, because I enjoyed the give and take of a good discussion about issues that had many possible, and no absolutely correct, answers.

I am reminded of this story today, because I am reading about the impact of the country's current obsession with standardized tests on creativity, innovation and divergent thinking. By far the most important book on this topic is Who's Afraid of the Big Bad Dragon? Why China Has the Best (and Worst) Education System in the World by Yong Zhao. Zhao was born and educated in China. He came to the United States to attend graduate school and is currently a professor at the University of Oregon. He says that China has achieved preeminence in the world in student performance on standardized tests, but has done so at the cost of creativity, originality and individualism.

Zhao believes the United States government's obsession with test scores and international comparisons like the Programme for International Student Assessment (PISA), which show United States students lagging behind students in East Asian countries and Finland, is misplaced. In an interview with the New York Times, he says:

[E]xcessive focus on test scores hinders a real education, which is more about helping each and every child grow rather than forcing them to achieve high test scores. In other words, PISA and other tests measure something very different from the quality of education...

What are the possible costs to a country like China that relies on a standardized test driven educational system focused on accountability and higher test scores?

[A] narrow education experience that is centrally dictated, uniformly programmed and constantly monitored by standardized tests is unlikely to value individual talents, respect students’ interest and passion, cultivate creativity or entrepreneurial thinking, or foster the development of non cognitive capacities. But it is the diversity of talents, passion-driven creativity and entrepreneurship, and social-emotional well-being of individuals that are needed for the future economy.

And what kind of education should we be focused on in the United States?

The education we need is actually quite simply “follow the child.” We need an education that enhances individual strengths, follows children’s passions and fosters their social-emotional development. We do not need an authoritarian education that aims to fix children’s deficits according to externally prescribed standards

The danger that the American educational system faces is palpable and real. By focusing on standardized test score comparisons with countries that do not match our culture or our children we risk destroying all that is good in this country's educational system. A world class educational system is one that focuses on the academic, social, emotional and physical development of every individual child. 

As I talk to teachers, parents and students in our schools today, I hear their concern. Many are aware that something is being lost. For teachers it is often the opportunity to follow students' interests when an interesting question arises. For parents it is concern over time devoted to test preparation that could be spent on art, music or physical education. For students it is the anxiety produced by having to take tests with ill-defined consequences and heightened expectations.

When students take a standardized test, they usually face a multiple choice question with four choices. Often several choices could be correct, but students know that only one answer will be considered correct. Standardized tests do not leave room for alternatives. They embody the tyranny of the one right answer.

Educational policy makers on the national and state level have bought into standardized tests as the one right answer. They are wrong and the tyranny of that one right answer may very well come to haunt us in the future when we begin to ask, "Where has American innovation and creativity gone?"

This is not a multiple choice question. We may find that American innovation and creative thought has gone the way of the dodo bird, driven to extinction by an environment that holds test scores to be the route to improved learning.

Last week, 70% of the students at Princeton High School in New Jersey opted out of taking the Common Core aligned PARCC standardized test. Apparently, they determined that after weighing all the variables, they were better off not taking it. Now that is in the grand tradition of American innovation, creativity and independence.

Thursday, March 12, 2015

Standardized Tests: Truth and Consequences

What responsibility do standardized test advocates have to the tested?

In recent posts I have been addressing issues related to student testing on the PARCC, SBAC and DIBELS. My research for these posts has led me down many dark alleys and more than a few rabbit holes, but one piece of information I came across created a real "Eureka!" moment in my standardized-test-addled brain. The discovery, which had been roaming fuzzily around in my mind for a long time, finally came into focus when I encountered the concept of consequential validity.

Most of you who are certified teachers probably remember something about test validity from that ed psych course you took as an undergraduate. Simply stated, a standardized test is said to be valid (at least statistically) if it measures what it claims to measure. So, a test of reading comprehension would be required to demonstrate that it indeed measured reading comprehension and not something else, like, say, the relative wealth of the people taking the test. But in 1989 a psychologist named Samuel Messick posited that tests had a higher calling to answer to than just statistical validity. A test also needed to be valid in the way it was used and interpreted. Messick called this new take on testing consequential validity.

Consequential validity requires test makers and test givers and test interpreters to ask, "What are the risks if the tests are invalid or incorrectly interpreted?" and "Is the test worthwhile given the risks?"

The recent history of the test and punish movement in America would suggest that we are coming up very short in the consequential validity department. A recent report from Fair Test chronicles the failure of the No Child Left Behind (NCLB) law of 2002 to meet any of its stated goals. NCLB, of course, brought on yearly standardized testing in grades 3-8 with the promise of narrowing the achievement gap in America's schools. Fair Test found that NCLB has been notably unsuccessful in narrowing gaps and that in many cases (such as for English Language Learners and students with disabilities) the gaps are wider than they were in 1998.

Clearly, the NCLB testing regime has failed to narrow achievement gaps, but that is not the worst news. Just what have been the consequences for children of this move to more standardized testing? According to Fair Test the consequences include widespread evidence of curriculum narrowing, extensive teaching to the test, pushing low-scorers out of school and widespread cheating scandals.

I could add a few more consequences to this list. Since these tests were used to label schools erroneously as "failing," they have undermined the morale of teaching staffs and demonized schools in urban areas struggling with myriad issues ranging from student poverty to lack of textbooks to crumbling infrastructure. Many elementary schools have done away with recess to cram in more test prep. New elementary schools have been built without playgrounds, because the test trumped active play. Less time is allotted for arts instruction, so that students can focus on tested subjects. Could future researchers point back to NCLB and find other consequences, like increased obesity and declining participation in the arts?

In 2009, the Obama administration doubled down on NCLB with its program entitled Race to the Top (RTTT). RTTT called for new tests tied to the Common Core State Standards and for using the scores from those tests, not only to rate schools and children, but also to evaluate teachers. What are the likely consequences of rating teachers based on these tests? According to the Economic Policy Institute we can expect the following:

Tying teacher evaluation and sanctions to test score results can discourage teachers from wanting to work in schools with the neediest students, while the large, unpredictable variation in the results and their perceived unfairness can undermine teacher morale. Surveys have found that teacher attrition and demoralization have been associated with test-based accountability efforts, particularly in high-need schools.


So, one consequence of the new testing regime is likely to be to make it even harder for urban schools to recruit the best, brightest, most dedicated teachers. In New York, where the new tests have already been instituted, State Department of Education officials predicted that as a consequence of the new test only 30% of children would be found proficient. Lo and behold, this prophecy came true, perhaps because those same officials were responsible for determining the "cut scores" after the test results were in.

What were the consequences? Further humiliation of children, teachers and schools, and a general outcry from concerned parents. The parental concern led the federal Secretary of Education to declare that these "white suburban moms" were surprised to find their kids were not as smart as they thought. No officials seemed to consider that the tests were not as smart as they might be.

As the new tests spread across the country we can predict that students’ scores will fall. Testing advocates will cheer and say the new tougher standards have been validated and they will use the scores to push for more school choice, more charter schools, more teacher union bashing and more tests.

These are the consequences we can look forward to as the push for more standardized testing continues. These tests have already proven that they have no validity as a tool for narrowing achievement gaps or for improving the lives of the vast majority of the 25% of American children living in poverty. 

When we look at the consequences of standardized testing (low student and teacher morale, narrowed curriculum, cheating scandals, test prep parading as learning), it is also clear that this level of high-stakes standardized testing in schools fails the test of consequential validity.

To return to Messick again I would ask, "Are these tests worth the risks?" The clear answer is absolutely not.

As Lily Tomlin's wise little girl character, Edith Ann would say, "And that's the truth!"






Monday, March 9, 2015

Dump DIBELS

DIBELS (Dynamic Indicators of Basic Early Literacy Skills) is an early reading assessment measure that is widely used in schools. According to its web site, DIBELS

Are a set of procedures and measures for assessing the acquisition of early literacy skills from kindergarten through sixth grade. They are designed to be short (one minute) fluency measures used to regularly monitor the development of early literacy and early reading skills.

In practice DIBELS is a set of one-minute tests of a student’s ability to name letters, segment phonemes, identify initial sounds in words, read nonsense words, read fluently and retell. The creators of DIBELS argue that student ability to perform these tasks in strictly timed situations predicts their future reading success or struggles.

DIBELS came to be widely used because it was closely tied to the Reading First and NCLB initiatives of the last 15 years. DIBELS fit nicely into the Reading First push for “scientifically researched” practices. The creators of DIBELS, a group of researchers out of the University of Oregon, were able to generate lots of experimental data showing DIBELS was a reliable instrument. Many school districts were forced to adopt DIBELS assessments in order to qualify for government funding.

But from the start DIBELS has generated controversy. A special education commissioner for the U.S. Department of Education named Ed Kame'enui resigned after a Congressional investigation found that he had “gained significant financial benefit” by promoting DIBELS from his government position. Two other Department of Education employees were also implicated in the investigation. Perhaps more importantly, many, many highly respected literacy researchers have found that DIBELS has moved instruction away from what we know works for children.

P. David Pearson, one of the leading literacy experts in the country and a man known for avoiding hyperbole and taking a centrist view on issues related to literacy instruction, had this to say about DIBELS:

I have decided to join that group of scholars and teachers and parents who are convinced that DIBELS is the worst thing to happen to the teaching of reading since the development of flash cards (Goodman, K., et al. (2007). The Truth About DIBELS).

In the same volume, literacy researcher Sandra Wilde found that while DIBELS claims “to strongly predict whether individual children are likely to fail to learn to read,” in fact, “[i]t just doesn’t.”

Also in The Truth About DIBELS, University of Arizona professor emeritus and long-time reading theorist Kenneth Goodman posits that

DIBELS is based upon a flawed view of the nature of the reading process and, because of this fundamental flaw, provides all who use it with a misrepresentation of reading development. It digs too deeply into the infrastructure of reading skill and process and comes up with a lot of bits and pieces but not the orchestrated whole of reading as a skilled human process.

In a technical report out of the Literacy Achievement Research Center, Pressley et al. (2005) found that DIBELS

mis-predicts reading performance on other assessments much of the time, and at best is a measure of who reads quickly without regard to whether the reader comprehends what is read.

What is it that makes DIBELS the “worst thing to happen to reading instruction since flash cards?” As Pearson sees it, the use of DIBELS in the schools has an undue influence on the curriculum, driving reading instruction to a focus on the little bits of reading and away from a focus on the whole of literacy instruction. Students are held accountable to the indicators of reading progress rather than actual reading progress and teachers are forced to instruct in ways that violate well-documented theories of development and broader curricular goals. In other words, DIBELS becomes the driver of the curriculum and the curriculum is narrowed in unproductive ways as a result.

Ultimately, Pearson says, DIBELS fails the test of consequential validity. In other words, the widespread employment of DIBELS has had dire consequences for the actual teaching of reading. Teachers have been forced through this test to focus on a narrow definition of the “stuff” of learning to read, rather than on the broader context of what reading actually is – the ability to make sense of squiggles on a page made by an author. The consequences of DIBELS make it unworthy to use as an assessment tool.

If DIBELS has become a scourge in your school or school district, I suggest you gather up the research cited here and question those who are foisting this highly flawed, and ultimately counterproductive, assessment practice on your students and fellow teachers.





Thursday, March 5, 2015

PARCC Math Test Readability

Two weeks ago I ventured into the world of PARCC testing with several posts on the readability of the reading comprehension passages of the new Common Core aligned PARCC tests, which are being administered right now in many states. You can find those posts here, here and here. Some readers expressed concern about the readability of the PARCC math exams and asked me to take a look.

Background: Readability on a math exam matters. While we might assume that a math exam assesses a student's ability to perform various mathematical computations, all of the math questions on the PARCC require some literacy skills as well. A study by Abedi and Lord published in Applied Measurement in Education found that the linguistic complexity of math word problems can have a significant impact on the test scores of inexperienced problem solvers, English Language Learners and students with disabilities. The question that must be asked is simply, "Does the PARCC measure computation skills or a combination of literacy skills and computation skills?" And we might further ask, "Will students with on-grade-level reading skills be disadvantaged by the reading required on the math exam?"

Method: I will not rehash all my reservations about readability measures here. You can look at the posts on the reading comprehension part of the PARCC if you would like a fuller explanation. Suffice it to say here that readability measures can only give us an approximation of the difficulty of any one text on any one reader, so all results need to be taken with a grain of salt.

For the purposes of this post, I looked at the PARCC Mathematics Practice Tests. In order to get a sample of 300 words to run through a readability measure, I sampled word problems from the beginning, middle, and end of the test. I hoped this would give me a sense of the readability of the word problems. I ran the passages for each grade level 3-8 through several readability formulas: Lexile, Flesch-Kincaid (FK), Fry and the Flesch Reading Ease (FRE) scale. Lexile is the preferred readability formula of the Common Core architects, and its scores are expressed as grade ranges. These Lexile ranges were adjusted upward as a part of the Common Core's push for "college and career readiness." The other scales are commonly used readability measures. The Flesch Reading Ease Scale provides a number rating, with 90-100 indicating easy reading for 11-year-olds and 60-70 indicating easy reading for most 12- to 13-year-olds.
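For readers curious about what the formula-based measures actually compute, here is a minimal Python sketch of the published Flesch-Kincaid grade level and Flesch Reading Ease equations. Treat it as an illustration rather than the tool I used: the syllable counter is a rough heuristic, digits are ignored, and the sample word problem is invented for the example.

```python
import re

def count_syllables(word):
    # Rough heuristic: count vowel groups, then subtract a likely silent final e.
    word = word.lower()
    count = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and count > 1:
        count -= 1
    return max(count, 1)

def readability(text):
    # Sentences are approximated by terminal punctuation; digits are ignored.
    sentences = max(len(re.findall(r"[.!?]+", text)), 1)
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    w, s = len(words), sentences
    fk_grade = 0.39 * (w / s) + 11.8 * (syllables / w) - 15.59
    fre = 206.835 - 1.015 * (w / s) - 84.6 * (syllables / w)
    return round(fk_grade, 1), round(fre, 1)

# An invented word problem, for illustration only.
sample = ("Maria has 48 stickers. She shares them equally among 6 friends. "
          "How many stickers does each friend receive?")
print(readability(sample))
```

Notice that both formulas rely only on sentence length and syllable counts, which is one reason formula-based scores can only approximate the difficulty an actual reader will experience.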

Findings: 

  • 3rd Grade
    • Lexile 830 (3rd grade range is 520-820)
    • FK 4.6 grade level
    • Fry 4.0 grade level
    • FRE 81.6
  • 4th Grade
    • Lexile 890 (4th grade range is 740-940)
    • FK 5.1
    • Fry 5.3
    • FRE 80.3
  • 5th Grade
    • Lexile 820 (5th grade range is 830-1010)
    • FK 5.4
    • Fry 5.8
    • FRE 77.8
  • 6th Grade
    • Lexile 1000 (6th grade range is 925-1070)
    • FK 4.6
    • Fry 5.0
    • FRE 77.8
  • 7th Grade
    • Lexile 810 (7th grade range is 970-1120)
    • FK 4.8
    • Fry 5.2
    • FRE 80.5
  • 8th Grade
    • Lexile 1000 (8th grade range is 1010-1185)
    • FK 8.1
    • Fry 8.2
    • FRE 60.9
Discussion: 
  1. On-grade-level readers in grades 3 and 4 are going to find the reading required on the PARCC math tests very challenging. This will surely impact their scores on the test.
  2. On-grade-level readers in grades 5-8 should be able to handle the reading demands of the test.
  3. Below-grade-level readers, English Language Learners, and students with disabilities related to language processing will find the reading required for these tests very challenging. This will impact their scores on the PARCC math tests.
Conclusions: Because readability formulas are volatile and inexact, we must draw conclusions carefully. I have not examined the qualitative aspects of these texts (how readable they appear in light of the age of the children reading them); however, some tentative conclusions can be drawn from this initial look.
  1. Teachers, administrators and parents must treat the results of the PARCC math tests with extreme caution.  The math test scores will surely be influenced by the ability of the students to read the material. Separating out what is a computational weakness and what is a reading weakness will be left to the observation and intervention of the classroom teacher.
  2. Questions must be asked about the validity of the test scores in grades 3 and 4, based on the challenging level of the reading. In third grade, even by the revised, Common Core-championed Lexile ranges, the reading is very challenging and perhaps inappropriate for the age and grade level of the children.
  3. These tests are clearly too imprecise to be used for any kind of high stakes decisions, including student placement, teacher evaluation or school effectiveness. Any attempts to use these tests for such purposes will be fraught with error and would have potentially damaging results for children, teachers, parents and schools.


Wednesday, March 4, 2015

Grammar Police, Winston Churchill and Me

Instead of grammar rules, let's focus on grammar tools

Thanks to my friend and fellow blogger, Dave Raudenbush, for pointing out that today is National Grammar Day. I went to the web site for National Grammar Day and found it apparently dedicated, not so much to good grammar, as to good old American hucksterism. The site is there mostly to sell the sponsors' books and t-shirts. See what I did there, grammar fans? "Sponsors'" with the apostrophe after the "s" indicates more than one sponsor, and that is what I mean to say. The site did point me to some handy grammar tips from "Grammar Girl," so you might want to check it out.

My favorite grammar story comes from Winston Churchill. Besides being the Prime Minister of England and one of the great political leaders of the twentieth century, Churchill was a notable writer. His histories of World War II are still considered must reading for historians. The story goes that once some cheeky editor suggested changes to a Churchill manuscript, because it contained, horror of horrors, a preposition at the end of a sentence. Churchill responded to this red pen wielding upstart with the following: "Your suggestion that I edit this sentence is a bit of impertinence up with which I shall not put."

I have since learned that the story may be apocryphal, but true or not, the tale illustrates an important point. Many of the things we are taught as rules of grammar are simply not rules. There is no reason to avoid ending a sentence with a preposition, unless the preposition is redundant, as in "Where are you at?", which is incorrect because "Where are you?" carries the same meaning. However, as the Churchill story illustrates, straining to avoid a preposition at the end of a sentence can lead to awkward constructions. Prepositions at the end of a sentence are something we should all be able to put up with :).

Those eagle-eyed grammarians out there may have noticed that I used "however" to begin a sentence in the last paragraph. My seventh grade English teacher, Mrs. McGarry, would be appalled. But this is another of those grammar rules we have all been taught that simply are not true. Beginning a sentence with a conjunction, or with a conjunctive adverb like "however," is perfectly grammatical. Here is what The Chicago Manual of Style has to say on the subject.

There is a widespread belief—one with no historical or grammatical foundation—that it is an error to begin a sentence with a conjunction such as and, but or so. In fact, a substantial percentage (often as many as 10 percent) of the sentences in first-rate writing begin with conjunctions. It has been so for centuries, and even the most conservative grammarians have followed this practice.

Grammarians speculate that long ago teachers noticed that students tended to overuse "and" or "but" at the beginning of sentences and so they banned the practice. Apparently, teachers have repeated this false "rule" over the following years, decades and centuries.

There is one grammar error that I keep hearing, and it drives me batty. I am referring to the incorrect use of the word "myself" as a substitute for "me." For example, "The boss wants to meet with John and myself." The correct usage is, of course, "The boss wants to meet with John and me." "Myself" is a reflexive pronoun that is used only in conjunction with the pronoun "I." So it is correct to say, "I did it all by myself," or "I myself completed the task." "Myself" has become so ubiquitous as a substitute for "me" that when I use "me" correctly in a sentence, I get the distinct impression that people think I've gotten it wrong myself. Please, don't blame me.

In general, as teachers, I think we should avoid teaching grammar as a set of rules and start to teach it as a set of tools for the writer. Writers manipulate grammar for their own purposes all the time. Here is a paragraph from the Cynthia Rylant story, "My Grandmother's Hair."

We talked of many things as I combed her fine hair. Our talk was quiet, and it had to do with those things we both knew about: cats, baking-powder biscuits, Sunday school class.  Mrs. Epperly's big bull. Cherry picking. The striped red dress Aunt Violet sent me.

Wow, three sentence fragments in a row. Why does Rylant do this? I would speculate that Rylant liked the rhythm created here. It helps to create the tone of nostalgia and reminiscence that the story carries forward. Mrs. McGarry would have bled all over this paragraph with her red pen had I turned it in, but as we see, great writers manipulate grammar to suit their purposes.

I know what you are thinking, "It's OK for Rylant to break the rules, because she knows what the rules are, but kids need to learn the rules first." I am not so sure. What better way to learn the difference between a complete sentence and a fragment than to actually use fragments and complete sentences in our writing and then talk about them as choices a writer makes?

For a wonderful book on teaching grammar as a tool for writers, I recommend Image Grammar by Harry Noden. The book is out of print now, but still available at used book stores. There is also an online resource companion to the book that you can find here.

If we can engage our student writers in conversations about grammatical choices, rather than trying to inculcate them with grammar rules, I think we have a better chance of creating writers who learn the rules, and learn to bend them, along the way.




Sunday, March 1, 2015

Standardized Tests: Silly Incentives or Serious Instruction?

In today's guest post, Cindy Mershon, reading specialist and literacy consultant, asks us to keep the child at the center of our thinking about standardized tests. What is our responsibility as teachers when kids face high-stakes testing? Cindy's answer includes considering standardized testing as a genre to be taught.


by Cindy Mershon

“Educators are faced with a dilemma: our knowledge of reading processes and reading instruction is at odds with our assessment instruments.  As a result, we run the risk of misinterpreting assessment data.  If tests do not assess what we define as skilled reading, then they cannot adequately determine progress toward that goal.  Thus, if we equate high scores on existing tests with good reading we may be led to a false sense of security.  Conversely, low scores may lead us to believe that students are not reading well when, by a more valid set of criteria, they are.  Furthermore, tests have a powerful impact on curriculum and instruction; they influence classroom practice.  In short, tests may be insensitive to growth in the abilities we most want to foster and may be misguiding instruction.”

Valencia, S.W., Pearson, P.D., Peters, C.W., Wixson, K.K. (1989).   Theory and practice in statewide reading assessment: Closing the gap.  Educational Leadership, April, pp. 57-62.

It is my fault that I chose to read the local paper on Tuesday morning.  It is my fault that, when I spied this headline – “Schools Cancel Test Incentives” – I remained in my chair in the kitchen, cocker spaniels in attendance, read the headline, felt the migraine button in my head switch to “on,” and kept reading.

Seems a local school district had planned “to offer incentives, including $5 gift cards, intended to boost student participation and performance on the standardized PARCC exams.”  This district had used such incentives in the past, but had recently decided to cancel their plan due to increased “sensitivity” over the heavily debated, upcoming PARCC exams.  Reading on, I learned that under said plan, students would have been able to earn points for completing tasks before and after the exams, tasks such as arriving at school on time each morning of the test; preparing for the test by eating a healthy breakfast and getting a good night’s sleep; exerting effort during the test; attending school every day of the testing; and thoroughly checking work after finishing each day’s portion of the test.  At the end of the week, the five students earning the most points in each class would have received gift cards from their teacher.

I swear on the head of my favorite dog Phoebe I am not making this up.

The good news is that the shocked silence and throbbing migraine of Tuesday have disappeared, and I am now receiving stimuli from every book and article I have read on standardized testing.  I am now remembering nearly 30 years of studying standardized tests and helping fourth and fifth grade students understand and successfully manipulate the standardized tests that are, for now, a part of their school lives.  And I am angry.  Again.  Still.

I am not a fan of standardized tests.  I understand their hoped-for purpose but see clearly how they – and their derived scores – are easy to misread and misuse.  As a human being and a reading specialist, I, too, long for a quick, easy system of assessment that allows me to plan instruction and help all students to be successful in school - I became a reading specialist because I want all students to have a chance at literacy.  But I have learned that human behavior, and reading and writing acquisition in particular, is simply too complex to be measured in a single, paper-and-pencil assessment given on four days for 45 to 60 minutes at a time.  While these assessments – if they are valid and reliable assessments – can add puzzle pieces to the complete picture we hope to create of a reader or writer, they are simply too narrow a measure of reading and writing to provide a comprehensive and accurate picture of students as readers and writers, or to support judgments about instruction, placement, or the effectiveness of teachers, schools, and districts.

And – and this is a big and – if standardized tests are not valid (accurately measure what they say they measure) and reliable (produce stable and consistent results each time they are given), they cannot be used to draw conclusions about any of these issues, and so should not be inflicted on any child.  Many, many years ago, an article in The Reading Teacher drew the distinction between a “sow’s ear assessment” and “silk purse data,” and talked about why the first could not possibly produce the second.  Yet many people greet the data that results from these less-than-wonderful tests as if Moses had sent it down from the mountain.  As “correct” and “accurate.”  Test results are published in the newspaper, are used to place children, and are assumed to be “true.”  Why?  Just how, exactly, do you get good data from a bad test?  What possible reason could we have for using bad data to make important decisions about teaching and learning or the quality of schools and school districts?

Anya Kamenetz, in her new book, The Test: Why Our Schools are Obsessed With Standardized Testing – But You Don’t Have to Be (reviewed by Dana Goldstein in The New York Times of 8 February), raises even more questions regarding reliability and validity when she suggests standardized tests are a “20th-century technology in a 21st-century world,” that they “conceptualize proficiency as a fixed quantity in a world where what’s important is your capacity to learn and grow.” 

What angers me most about the idea of providing incentives to students for preparation and performance on standardized tests is the lack of respect for children that is clearly communicated by this “game.”  Because standardized tests are, for the foreseeable future, a part of students’ school lives, is it not more important to be honest and straightforward with them about what these tests are, why they are given, and how they work?  Don’t students need to be included in the conversation that leads to successful experiences with standardized tests rather than offered demeaning and artificial prizes? 

I believe students need to know they are likely to take these tests once each year, will take them as a part of their college admission process, and will take them yet again if they decide to go to graduate school, medical school, or law school.  They need to recognize that their scores on these tests will be recorded and shared with teachers and parents, and that these scores will play a part in painting a picture of them as learners and assessing their success as students.  No student equals his or her standardized test score, but those scores are kept in student folders and are typically part of conversations when that student’s school performance is discussed, for good or bad.  Students need to understand, too, that standardized tests have limitations, and that interested, responsible educators continue to work to see how (if?) these tests can play a meaningful role in assessing student performance.

Students need to know most children in the United States take similar standardized tests, and that standardized tests, especially those in reading and writing, are very similar in format.  Students need to know that good classroom instruction in reading and writing is always the best preparation for doing well on standardized tests of reading and writing, but being successful on something you do only once each year can require additional and deliberate study.  Preparing students to do well on standardized tests can be accomplished with perhaps 10 reading and writing periods devoted to deliberate instruction of test-taking skills, or with short lessons throughout the school year, but does not need to be the “teaching to the test” curricula discussed in Kamenetz’s book (some schools use up to 25% of their school year to prepare students for the tests, abandoning teaching of their regular curricula).  Just as high school students and college undergraduates, who can afford it, attend SAT and GRE preparation courses on weekends or for an hour each week for several weeks, younger students need explicit teaching in understanding the format and parameters of standardized tests without sacrificing their daily school studies and curriculum. 

What makes most sense is to teach students that reading and writing on standardized tests is simply another genre, or type, of reading and writing that has its own attributes.  Understanding these characteristics and how they work will prepare students for the work they are asked to do when taking these tests.  Students need to understand that the genre of standardized test reading and writing is significantly different from the daily experience of reading and writing instruction. Here are some examples:

·         On standardized tests, students work independently for a 45-60 minute prescribed period of time for four or five days.  In reading and writing workshop, students are accustomed to working in concert with their teacher and classmates; units cover an extended period of time, perhaps four to six weeks, and a series of units is studied throughout the entire school year.
·         Standardized tests in reading consist of short passages followed by several multiple choice and one or two short constructed-response questions with stress on a single, correct answer. Students in reading workshop select their own full-length books to read, have an opportunity to talk about their reading at length with the teacher and classmate in conferences and/or book clubs, may write responses to their reading several times each week, and are offered direct instruction in comprehension strategies each day.  (This conversation/strategy instruction can also take place during classroom read-alouds.)  Emphasis is placed on constructing meaning supported by evidence from the text and the possibility of varying points of view from varying readers: multiple interpretations are possible within the parameters of the text.  Students’ classroom reading is continually scaffolded in a variety of ways, while their reading on a standardized test is necessarily done in isolation. 
·         Standardized tests of writing give students prompts for writing and limit writing time to approximately 45 minutes.   Students in writing workshop, like reading workshop, often study a particular genre for four weeks or longer and choose their own topics.  They confer regularly with both teacher and fellow students and participate in daily direct instruction that supports their knowledge of writing strategies, crafting techniques, and the conventions of writing.  Again, writing on a standardized test is an independent task.
·         Completed work on standardized tests will not be available for examination and discussion by students and teachers working together to assess what was done well and what presented challenges that need to be explored in future work.  Work on standardized tests is sent away to someone from “out of town” to evaluate, and becomes lost to teacher and student for months until scores are returned.  When the test data do arrive, the scores are presented as derived numbers that can be difficult to interpret – and easy to misinterpret – and don’t always help teachers know how to help students improve as readers and writers.  The only reliable conclusion we can draw from standardized test data is how well students take standardized tests.

Test items in and of themselves present a challenge to students, also.  Multiple choice test answers contain “distractors,” or answers that are purposely constructed to distract students’ attention from the correct answer.  To be fair, this helps to guard against too many lucky-guess right answers.  Distractors include words or phrases pulled directly from the text but placed in the context of wrong answers, positives expressed as negatives (and vice versa), etc.  Even good readers are sometimes drawn to language that is familiar to them from the passage they have just read if they do not read the entire answer carefully and realize it is not a good choice.  And, test makers frequently put correct answers in position “c” or “d” rather than “a” or “b,” knowing that test takers often choose the first answer they read that looks correct, or almost correct.  One of the most useful strategies we can offer young test takers is to “read all four multiple choice answers before choosing the one you believe is the best answer.  The correct answer may be placed in any of the four ‘a, b, c, d’ positions, but test takers are counting on you to be anxious and in a hurry and choose the first one you read that seems right – this is a timed test and they know you want to keep moving!  Read each and every answer before you make a choice!”

Another important strategy that helps students manipulate multiple choice questions successfully is teaching them about the kinds of questions they will be asked to answer.  If students are not learning about Question-Answer Relationships (Raphael) during regular comprehension instruction in reading workshop (and they should be), they need to learn about QARs as part of their test preparation.  Raphael suggests students have difficulty answering questions about their reading because they cannot recognize the difference between literal and inferential questions, and therefore do not know how to return to a text to locate the information they need to construct an answer.  On standardized tests, as in independent reading, if students know a question is a literal question or an inferential question, they can learn how to search the text for an answer, or how to combine information supplied by the text with their prior knowledge to construct an answer.  When we say to students “Read the question carefully and think about your answer,” what we should be saying is “Let me show you the different kinds of questions you may encounter and how you might go about figuring out how to find and put together an answer from the text and from what you already know.  Let me tell you about question-answer relationships.”

Directions can also be confusing to young test-takers.  Standardized tests of reading frequently ask students to read “a passage,” when in classrooms we talk about reading “books” or “texts.” Many standardized writing tests ask students to write “compositions;” students in writing workshop are used to specific language that asks them to write “personal narratives,” “persuasive essays,” “feature articles,” etc.  When young students are anxiously navigating timed tests they take only once each year, unfamiliar vocabulary can confuse them, raise their level of concern, and possibly interfere with their ability to perform at their best.  Talking to them about new and different words they might encounter can lower their stress and prepare them for what might appear on the test.

The way in which our instruction is presented during these test-taking skills lessons is critical.  This is not the time for worksheets done in isolation.  This is the time for think-alouds, with the teacher and students talking out loud together, learning from each other, sharing their thinking about test items, test answers, rubrics, and scored writing prompts.  Research tells us the primary difference between good test takers and poor test takers, when taking a multiple choice exam, is that the good test takers can identify not only the right answer but know why the other three answers are wrong.
  
Reviewing individual sample test items, talking about which answers are right but also why other answers are not, identifying distractors and how they work – this thinking work can help students learn how standardized tests are constructed and how successful test-takers approach testing.  Familiarizing themselves with the rubrics that will be used to evaluate their writing and examining released samples of scored writing shows students exactly what other writers did to earn particular scores on the test.  This kind of practice and rehearsal lowers students’ test anxiety while it increases their familiarity with the items they will be asked to manipulate and produce (“I’ve done/seen this before!”).

These plans for teaching test taking skills – or test-wiseness – invite students to be a part of the conversation, respect students as stakeholders in the standardized test world, and offer students the best chance for successful performance on standardized tests.  While this preparation does not guarantee higher scores on standardized tests, it does provide us with some assurance that students are able to show us what they truly do know and are not hampered in revealing their understanding by unfamiliar formats. Gift card incentives for good preparation and performance on standardized tests skip over this important information and provide students with no strategies for managing standardized tests, be it the first or sixth time they encounter them.

The idea of the incentives does, however, make me think about Barbara Kingsolver’s essay “Somebody’s Baby,” included in her 1995 collection, High Tide in Tucson: Essays From Now or Never.  The thrust of this essay (I figured this out without answering a single multiple choice question) is that people in the United States do not like kids, and that we live in “an increasingly antichild climate.”

Extreme, I know.  But every time I come up against issues in education that seem to fly in the face of common sense as well as what research tells us about how children learn, develop, and live, I drift back to this essay.  How much of what is happening in education today might be traced back to the thesis of this essay?  Does our country, our culture, disrespect and dislike children enough to make decisions about testing, schools, and funding that shortchange students instead of supporting them?  

If our children are important to us, why not include them, in this case, in conversations and preparation for standardized tests in a way that respects their role in the task?  They are, after all, the people who will be sitting down to actually take the tests.  Yes, they are young, short, and naïve, but they are also intelligent, concerned, and contributing members of our educational community. They deserve to know what they are being asked to do and why, to understand what is at stake when they take part in this task, and to be prepared in the most productive, meaningful way available to them.

If we care about our children, why would we offer them anything less?  And let’s be clear – gift cards are less.  Test instruction is very different from test incentives, and we need to ask ourselves, even as we work to provide better standardized tests and data interpretation, what we believe about how best to handle our students’ experiences when taking standardized tests.  Maybe we need to read, and reread if necessary, the closing line of Kingsolver’s essay:  “Be careful what you give children, for sooner or later you are sure to get it back.”

Kamenetz, A. (2015). The test: Why our schools are obsessed with standardized testing – but you don’t have to be. PublicAffairs.
Kingsolver, B. (1995). High tide in Tucson: Essays from now or never. HarperCollins Publishers.
Raphael, T. (1986). Teaching question-answer relationships, revisited. The Reading Teacher, 39, 516-

Wednesday, February 25, 2015

From Text Complexity to Considerate Text

The Common Core State Standards call for kids to read lots of complex nonfiction text so they can be "college and career ready." As Appendix A of the English Language Arts section of the Common Core rather breathlessly puts it,

[T]he clear, alarming picture that emerges from the evidence... is that while the reading demands of college, workforce training programs, and citizenship have held steady or risen over the past fifty years or so, K–12 texts have, if anything, become less demanding. This finding is the impetus behind the Standards’ strong emphasis on increasing text complexity as a key requirement in reading.

As I have discussed in previous posts here, here and here, this Common Core call for employing more complex texts has led to much confusion and inappropriate instruction. The statement is also demonstrably wrong when it comes to readability on the K-3 level.

There is, however, another issue related to text complexity that I have yet to see anyone explore in the Common Core context. Text complexity is not an unqualified good. Indeed, it may be more reflective of the writer than of the reader. Just what is the responsibility of the author to the reader when writing any text?

Any act of reading is by definition an effort by a reader to comprehend, but it is also an attempt by a writer to be understood. There exists, in what Louise Rosenblatt has called the reading "transaction", an implicit contract between writer and reader. The writer promises to make every effort to be understood and the reader promises to make every effort to understand.  So, if a reader's comprehension breaks down when faced with a complex text, is that a failing of the reader or a failing of the writer or a little bit of both?

Nathaniel Hawthorne said, "Easy reading is damned hard writing." Shouldn't a reader expect the writer to put in the effort to write clearly, so that complexity is primarily a matter of the concepts discussed and not a product of the limitations of the writer? 

What makes a text complex? 

Zhihui Fang and Barbara G. Pace (2013) have identified five factors that make a text complex.
  • Vocabulary (high frequency of content specific words)
  • Cohesion (lack of skillful use of cohesive elements can make text complex)
  • Grammatical metaphors (discussed below)
  • Lexical density (packing lots of content words into individual clauses; see the sketch below)
  • Grammatical intricacy (lots of long sentences strung together with multiple clauses through coordination/subordination)
Grammatical metaphors are linguistic choices that a writer makes to communicate meaning in an atypical way. Instead of saying "the businesses failed and slowed down", the writer chooses to say "business failures and slowdowns." These atypical structures may make the text harder for a reader to comprehend.
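To make lexical density, mentioned in the list above, a little more concrete, here is a minimal sketch of one way to estimate it. The stopword list is a crude stand-in for real part-of-speech tagging, and both example sentences are invented.

```python
# Lexical density: the proportion of words that carry content.
# A short stopword list is a crude stand-in for part-of-speech tagging.
STOPWORDS = {"the", "a", "an", "and", "but", "or", "of", "to", "in", "on",
             "is", "are", "was", "were", "it", "that", "this", "with",
             "by", "for", "as", "at", "be", "has", "have"}

def lexical_density(text):
    words = [w.strip(".,;:!?").lower() for w in text.split()]
    words = [w for w in words if w]
    content = [w for w in words if w not in STOPWORDS]
    return len(content) / len(words)

simple = "The dog ran to the park and played in the grass."
dense = "Repeated business failures and slowdowns depressed regional employment figures."
print(round(lexical_density(simple), 2))  # lower: many function words
print(round(lexical_density(dense), 2))   # higher: nearly every word carries content
```

Note how the denser sentence also happens to contain a grammatical metaphor ("business failures and slowdowns"); the two features often travel together in academic prose.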

What I have tried to demonstrate here is that most of what makes a nonfiction text complex is rooted in choices the author makes, and those choices may present special challenges to the reader. The writer, I believe, has an obligation to consider the reader when making these decisions.

Musing on this issue took me back to some reading I had done long ago in graduate school about a concept called considerate text.

What is considerate text?

Considerate text was first explored by literacy researchers Bonnie Armbruster and Thomas Anderson (1985).  Essentially, Armbruster and Anderson posit that authors and editors can do several things in presenting information to make it easier for the reader to understand. They suggest that the following things make writing considerate:
  • Coherent structure (discussed below)
  • Introductory paragraph (sets up expectations for the reader)
  • Headings and sub-headings (guides reader's thinking)
  • Vocabulary defined in context (assists comprehension)
  • Clear use of graphic elements like tables, charts and graphic organizers (aids in developing understanding)
Coherence needs a bit of explanation. Armbruster and Anderson identify two types of coherence. First, there is global coherence, which describes the overall organizational structure of the text. Regular, discernible structures, where the main idea and supporting details are easily identified, make for considerate text. Local coherence allows the reader to integrate ideas within and between sentences. The skillful use of conjunctions, transition words, and clear pronoun referents makes a text locally coherent. Reading problems may arise when connections between sentences or between paragraphs are not clear.

As you can see, text complexity and considerate text have many intersections, and at the point of those intersections stands the writer. To what extent is the reader to be held accountable for the writer's limitations?

Implications
  • Textbook authors and editors have a responsibility to produce text that is considerate of the reader. This is not "dumbing down" readability, as Appendix A of the Common Core suggests; it is practicing skilled writing and targeting that writing to the correct audience.
  • Teachers and curriculum directors need to choose textbooks and supporting readings that are appropriately considerate of the target readers. Reading material can be both considerate and appropriately informative.
  • At times, of course, students will need to read complex text, because not all writers are as skilled or considerate as others. Teachers need to learn to recognize the elements of a text that make it complex and plan activities that will help students deal with the complexity. Such activities would include preteaching vocabulary, paraphrasing grammatical metaphors, and analyzing grammatically intricate sentences to unpack the meaning.
Forcing students to read more and more complex text under the pretext of college readiness is a mistaken idea. The best preparation for successful reading in college is lots of successful reading experiences in elementary, middle, and high school, and lots of good instruction in making meaning from a wide variety of texts. In the meantime, it might be a good idea to ask those who write textbooks for college students to do the hard work necessary to write considerate text.

Sunday, February 22, 2015

Readability of Sample SBAC Passages

In three earlier posts, I took a look at the readability of sample passages for the PARCC assessments, which are being used to measure student progress on the Common Core State Standards (CCSS) in some states. You can find those posts here, here and here. As I stated in those posts, the concept of readability is complicated and includes quantitative measures like readability formulas, task considerations, and qualitative considerations, including assessing how the text will match up with the reader.

In this post, I look at the same measures as they relate to the Smarter Balanced Assessment Consortium (SBAC) tests that are being used in other states. I looked at one reading passage at each grade level of the SBAC, from grades 3 through 8, and found that the readability of these passages differed significantly from what I found in the PARCC tests.

First, the quantitative measures of the SBAC passages. I used several different readability formulas. Both the SBAC and PARCC tests use Lexile measures to determine readability, and I added other commonly used measures of readability as a check against the Lexile levels. As I cautioned in previous posts, quantitative measures of readability are often imprecise, so I used several measures to see if I could get some sort of consensus on the passages.

For each passage below, the Lexile score is provided along with the range considered appropriate for the grade. Flesch-Kincaid and Fry measures are stated in terms of grade level. The Flesch Reading Ease score states the relative ease of reading a passage: a score of 90-100 should be relatively easy for an average 11-year-old to read, and a score of 60-70 should be easily understood by a 13-year-old.
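
For readers who want to see where two of these numbers come from, the Flesch Reading Ease and Flesch-Kincaid grade-level formulas are public and easy to compute. Here is a minimal sketch; the syllable counter is a crude vowel-group heuristic (commercial tools use pronunciation dictionaries), so its output will drift a bit from published scores. Lexile, Fry, and Raygor are proprietary or graph-based and are not reproduced here.

```python
import re

def count_syllables(word):
    # Crude heuristic: count runs of consecutive vowels, then drop one
    # for a silent trailing "e". Dictionary-based counters do better.
    word = word.lower()
    n = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and n > 1:
        n -= 1
    return max(n, 1)

def flesch_scores(text):
    sentences = max(len(re.findall(r"[.!?]+", text)), 1)
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    wps = len(words) / sentences   # average words per sentence
    spw = syllables / len(words)   # average syllables per word
    reading_ease = 206.835 - 1.015 * wps - 84.6 * spw  # Flesch Reading Ease
    grade_level = 0.39 * wps + 11.8 * spw - 15.59      # Flesch-Kincaid grade
    return round(reading_ease, 1), round(grade_level, 1)

sample = "Fish sometimes fall from the sky. Strong winds can lift them from the water."
print(flesch_scores(sample))  # short words and sentences yield a high ease score
```

Both formulas reward short sentences and short words, which is why they can only approximate difficulty; they know nothing about background knowledge or concept load.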

Quantitative Readability

3rd Grade Passage - A Few New Neighbors

  • Lexile Level: 510 (3rd grade range is 520 - 820)
  • Flesch-Kincaid Readability Measure (FK): grade level 1.8
  • Fry Readability Graph (Fry): 2.5
  • Raygor Readability Graph (RR): 2.5
  • Flesch Reading Ease (FRE): 94.9
Summary - This passage should be relatively easy to read for an average third grader.

4th Grade Passage - Coyote Tries to Steal Honey
  • Lexile: 900 (4th grade range is 740 - 940)
  • FK: 4.9
  • Fry: 5.2
  • RR: 4.8
  • FRE: 93.2
Summary - The consensus of the measures indicates that this passage falls in the upper range of readability for a fourth grader: challenging, but not overly so, by these measures.

5th Grade Passage - A Cure for Carlotta
  • Lexile: 660 (5th grade range is 830 - 1010)
  • FK: 5.8
  • Fry: 6.5
  • RR: 4.5
  • FRE: 76.8
Summary - The Lexile score seems out of step with the other measures on this passage. I will look more closely at the passage below.

6th Grade Passage - Fishy Weather Conditions
  • Lexile: 1040 (6th grade range is 925 - 1070)
  • FK: 7.5
  • Fry: 8.1
  • RR: 4.5
  • FRE: 70
Summary - The Raygor measure is out of step with all the other measures, which together suggest that this is a challenging text for 6th graders. Again, we will look at qualitative aspects of the passage below.

7th Grade Passage - Life on the Food Chain
  • Lexile: 900 (7th grade range is 970 - 1110)
  • FK: 6.9
  • Fry: 7.1
  • RR: 4.5
  • FRE: 68.3
Summary - Once again the Raygor measure is out of step with the others. The consensus is that this passage should be very readable for the average 7th grader.

8th Grade Passage - Ansel Adams, Painter with Light
  • Lexile: 1090 (8th grade range is 1010 - 1185)
  • FK: 8.3
  • Fry: 8.8
  • RR: 5.3
  • FRE: 65.8
Summary - Once again the Raygor is anomalous, but the consensus here would be that the passage is appropriately challenging for average 8th grade readers.
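
Before moving on to the questions, note that the band comparisons above follow one mechanical rule: is the measured Lexile inside the range published for that grade? A quick sketch, using the grade bands quoted above:

```python
# Checking a measured Lexile score against the grade bands quoted above.
LEXILE_BANDS = {
    3: (520, 820), 4: (740, 940), 5: (830, 1010),
    6: (925, 1070), 7: (970, 1110), 8: (1010, 1185),
}

def band_position(grade, lexile):
    low, high = LEXILE_BANDS[grade]
    if lexile < low:
        return "below band"
    if lexile > high:
        return "above band"
    return "within band"

print(band_position(5, 660))   # "below band"  -- A Cure for Carlotta
print(band_position(8, 1090))  # "within band" -- Ansel Adams, Painter with Light
```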

Task Analysis

The task of a reader taking a standardized test is, of course, to answer questions. I looked at all the questions attached to these passages to determine what tasks were being required of students. For my analysis, I used the question categorization scheme developed by Dr. Taffy Raphael, Question Answer Relationships (QARs). QARs divide questions by the type of work the reader must do to find the answer to a question. Questions are categorized as follows.
  • Right There: These are literal-level questions whose answers can be pointed to directly in the text.
  • Think and Search: These are comprehension-level questions, like main-idea questions, that require the reader to put together an answer from pieces of information throughout the reading.
  • Author and You: These are inferential questions, requiring the reader to use text evidence and his/her own background knowledge to answer the question.
  • On Your Own: These are questions that are unrelated to the reading of the text. These types of questions are rarely seen on standardized tests.
I looked at 46 questions attached to the passages described above. Here is the breakdown as described by QARs.
  • Right There:            1
  • Think and Search:       17
  • Author and You:         28
  • On Your Own:            0
As would be expected from a test tied to the CCSS, a number of questions asked students to cite evidence for their answers. In the PARCC test this accounted for almost 50% of the questions; on the SBAC the percentage was closer to 30%. Every grade level was asked a question requiring determining the meaning of a word from context, which is also aligned with skills emphasized in the CCSS. Every passage also included questions aimed at the understanding of key ideas in the text and at an overall understanding of the text. While some questions were aimed at text analysis, the balance on the SBAC appeared to me to be more in keeping with a focus on general comprehension of the text than the PARCC samples I looked at, which were more focused on passage analysis.
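
For transparency, here is the simple arithmetic behind that breakdown, converting my raw tally into percentages:

```python
# Converting the QAR tally above into percentages of the 46 questions.
qar_counts = {
    "Right There": 1,
    "Think and Search": 17,
    "Author and You": 28,
    "On Your Own": 0,
}

total = sum(qar_counts.values())  # 46 questions in all
for category, count in qar_counts.items():
    print(f"{category}: {count} ({count / total:.0%})")
```

The inferential Author and You questions dominate at roughly 61%, with Think and Search at about 37%, which supports the observation that literal-recall questions have all but disappeared from these assessments.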

Qualitative Analysis

Since quantitative measures of reading difficulty are notably unreliable, a third factor we must look at is qualitative: how we think the text will match up with the readers who will be reading it. Ideally, a test passage will not disadvantage students because of differences in background knowledge or culture. In reality, we know that standardized tests have difficulty meeting this ideal because they are targeted at such a broad audience. Here I look at each of these passages to determine, as best I can, how they will match with the target readers.

3rd Grade Passage - A Few New Neighbors

A straightforward and pleasant story that follows regular narrative structure. Vocabulary appears very appropriate for a third grade reader.

4th Grade Passage - Coyote Tries to Steal Honey

This passage is a folk tale that also follows a regular narrative structure. The trickster tale should be familiar to most fourth grade readers because so many folk tales are focused on a trickster, whether it is a rabbit, a raven or a coyote. Vocabulary in this tale appears to be well within the wheelhouse of most fourth grade readers. The use of figurative language may cause some readers minor issues in comprehension, but this passage appears to be appropriate for a fourth grade reader.

5th Grade Passage - A Cure for Carlotta

This story of a young boy's immigration to America on a ship from Italy is typical of many other stories aimed at elementary age students studying the story of immigration. The structure is a straightforward narrative with more descriptive detail than the passages for the younger students. Vocabulary load does not appear overwhelming for most fifth grade readers.

6th Grade Passage - Fishy Weather Conditions

This nonfiction passage is informative and entertaining. It explains the unusual phenomenon of fish falling from the sky in some areas of the world. The passage has a fairly high readability level for a sixth grade passage, likely due in part to the introduction of unfamiliar vocabulary. Words like "dissipate," "phenomenon," and "adaptation" might cause readers some challenges, but "dissipate" is directly defined in the passage, and skilled readers can probably deduce the other meanings from context. Some figurative language, like "connect the dots," may challenge some students. All in all, a challenging passage that will cause some grade 6 readers difficulty.

7th Grade Passage - Life on the Food Chain

This nonfiction passage provides a straightforward explanation of the food chain. The text is organized in such a way that it should be easy for 7th grade readers to follow. The vocabulary load is heavy, but almost all terms are clearly explained right in the text. Sentence structure is not overly complex. A fair passage to assess 7th grade readers.

8th Grade Passage - Ansel Adams, Painter with Light

This biographical piece is written in a narrative format, telling the story of how Ansel Adams came to be a great photographer who chronicled the beauty of the American West. The passage contains a good deal of fairly sophisticated sentence structures that may cause some readers difficulty, but in general the account is highly readable. There are few concerns with the level of vocabulary for an eighth grade reader. I think the passage is appropriate for an 8th grade assessment.

Conclusions:
  1. Unlike the passages I reviewed for the PARCC test, I think the passages I examined from the SBAC test are fair representations of what children in those grades can and should be able to read.
  2. The questions asked about these passages seemed to me to be a good mix of comprehension based questions and analysis based questions. In general the questions seemed appropriate to the text.
  3. The passages chosen for the assessment all appeared to me to be straightforward enough that most students could follow them. There were no passages using archaic language or structures, no stories written long ago. Vocabulary was generally reasonable and often defined in the context of the passage.
Cautions:
  1. I sampled only one passage for each grade level, so other passages may have problems I did not see here. Only by actually having large numbers of students taking the test will we be able to tell if the test meets industry and common sense standards of validity and reliability. 
  2. Just because I have judged this test to be a reasonable test does not mean that I think this test, or any standardized test, should be used for making high-stakes judgments about children, teachers, or schools. The failure of standardized tests to be helpful in these areas has been well established. True understanding of individual readers' strengths and weaknesses is best achieved by professional educators working with children over time.
Test passages that offer most students at grade level the opportunity to demonstrate their actual reading ability can give teachers data that can help to inform instruction. In this cursory look at the SBAC test, it looks like these tests could meet that standard. Time will tell.