Posted by: Bill | December 8, 2008

Teach for America (and Journalism in General)

I read a post from Eduwonk this morning that got me all fired up. My dedicated readers, colleagues and friends have seen this happen before when journalists don’t do their homework. I will offer a quick disclaimer that I am not, nor have I ever been, a journalist. I wish I had more friends who were journalists so they could yell at me and tell me why I am wrong about this. I become upset when paragraphs like this are thrown into an article:

Research into Teach for America’s effectiveness has been inconclusive, but at least three major studies in the past several years indicate that students taught by its teachers score significantly lower on standardized tests than do their peers. A small handful of other studies, and the organization’s own research, contradict that claim.

There are many things wrong with this paragraph and Eduwonk makes several good points about the entire article. I am going to stick to this paragraph for now and then make a comment about TFA.

  1. What is the difference between the number “3” (the major studies) and “a small handful”? I don’t know about you, but the latter sounds like more than 3.
  2. Obviously, we should not judge the strength of research based on volume alone. I doubt the writer spent any time looking at the studies themselves: their methodology, their authors and, most importantly, their funding sources. In my opinion, doing so is imperative for a journalist who wants to give readers a clear picture. As readers, we love hearing about studies and their praising/damning “conclusions”. I am guilty of this as well, but I have spent a fair amount of time investigating the so-called “evidence”. I would rather not have to do that; the journalist should be responsible for it, to some degree.
  3. I have not heard of these three negative studies, and I pay a fair amount of attention to the education policy world. I remember one such study from 2005; its author has been a vocal critic of TFA for many years. Here is TFA’s response and a study that found different results. Note that Mathematica is an independent research group and TFA did not pay for the study.

As you may have guessed by now, I am a fan of Teach for America. The impact on hundreds of thousands of students is no doubt tremendous. Principals and administrators sing the praises of the organization and its Corps. I would argue that the research does not have to be overwhelmingly positive, since corps members are filling difficult-to-staff positions. Nevertheless, it has been mostly positive with regard to their impact on student achievement.

An argument frequently advanced by their critics is that only 30-40% stay on past the initial two-year commitment. This is true, but many stay in the world of education in some fashion. I have had the pleasure of working with many of these people, and their contribution to ABCTE has been significant.

There is no silver bullet for tackling the issues many of us see with K-12 education in the U.S. TFA will only scale so far, but the positive impacts are difficult to argue against. The writer of this article was probably trying to “show both sides of the issue”. I applaud the attempt, but believe that there is a responsibility to do so with much more depth.

Responses

  1. […] Schimmel presents Teach for America (and Journalism in General) posted at No Cynics […]

  2. The Mathematica study has become very popular with school reformers who support NCLB and the use of standardized testing as a measure of success. The Mathematica study’s methodology reflects this shared assumption.

    “Our measures of student achievement were based on standardized mathematics and reading test scores.”

    For me, the bigger question when it comes to TFA and like-minded school reform movements is whether or not we can use standardized testing as a measure of quality instruction at all. I would say no, and cite the following blog post by Gerald Bracey at washingtonpost.com, which suggests that scores on standardized math and science tests have a questionable correlation to success on anything other than standardized tests.

    http://voices.washingtonpost.com/x-equals-why/2008/12/guest_blogger_intl_math_tests.html?sid=ST2008120901507&s_pos=list

    From my perspective, the post-NCLB discussion of test scores, and of how teaching can change to improve them, has been completely self-referential. It has distracted us, as educators, from looking for new approaches that measurably improve students’ communities and quality of life and decrease poverty, rather than merely raising test scores.

    For me to buy into this test-score rhetoric, someone would have to show me DATA demonstrating that higher test scores are positively related to meaningful life events. Does scoring in the 90th rather than the 80th percentile in math/science alone help decrease poverty, or predict future happiness or income? Is it related to longevity or incarceration rates? As far as I’m concerned, this test-score fetish is a cop-out: a way to ignore foundational problems of society and look instead at some numbers that we can control more easily.

  3. Kevin – I stopped listening to anything Gerald Bracey said a long time ago. For a while, I even tried to read him for a sense of balance, but eventually I was not able to do that either.

    That being said, I agree with you to a point regarding standardized tests. It’s not the ideal measure for student success, but presently it is what we have. I am confident that a fair number of researchers, corporations and others are looking for something better (but confess that I am not knowledgeable in that area).

    I hear about “authentic assessment” from time to time, as well as about moving away from selected-response items. I think the time will come when the assessment industry does develop a better measure. I can’t imagine that a profit-seeking corporation is not trying to develop something new it can sell to states and districts.

    Regarding your last paragraph about longitudinal data, I think there is some out there correlating success in school to later income, etc. It has been years since I read anything like that, though, and I do not know what measures it used for school performance; it may not have been standardized tests. On the whole, I agree with you. Perhaps NAGB (the maintainers of NAEP) or the people who run TIMSS are working on such a longitudinal study. If not, someone should suggest they do so.

