The release of Matt Nisbet’s Climate Shift report (and the opening of the Climate Shift Project website) has been surrounded by a storm of controversy, opened by Joe Romm’s critique of Nisbet’s financial analysis (follow-ups here and here; Chris Mooney on science ‘balance’; and Media Matters’ critique of Nisbet’s media analysis). I read a good deal of the report before reading anyone else’s responses, and my copy is covered with red-ink comments, questions, and challenges. A number of these red-lined items parallel others’ critiques. Honestly, this frustrates me, because the questions Nisbet asks, and their answers, are of real interest and importance. The error-prone nature of the work makes it difficult to assess the value of its conclusions and recommendations. Sadly, the report truly seems to require going through with a fine-tooth comb: there are major and minor problems, it seems, throughout the study.
To provide perspective, here is an example of one of those items. Table 3.1 from the report intrigued me. [UPDATE: Nisbet “randomly sampled within month one out of every four articles … resulting in a representative sample of 413 news and opinion articles.”]
Let us look at just one column [of this chart]: [the sample of] Washington Post opinion articles “pre-Copenhagen”. For 1 January through 30 November 2009, the table tells us that there were 25 opinion articles relevant to climate change/global warming, of which 96 percent (24) were “consensus” and 4 percent (1) were “dismissive”.
[Recognizing that this is a sampling …]
Just one Washington Post opinion piece [out of every twenty-five on or related to climate science], published from 1 January 2009 through 30 November 2009, was “dismissive” of climate science.
On 13 February 2009, The Washington Post published George Will’s “Dark Green Doomsayers”, a truthiness-laden set of misleading and simply false statements attacking climate science and scientists. Okay, does this count as that single “dismissive” piece? [Which means that there should be 24 ‘consensus’ pieces.]
Few phenomena generate as much heat as disputes about current orthodoxies concerning global warming. This column recently reported and commented on some developments pertinent to the debate about whether global warming is occurring and what can and should be done. That column, which expressed skepticism about some emphatic proclamations by the alarmed, took a stroll down memory lane, through the debris of 1970s predictions about the near certainty of calamitous global cooling.
No reasonable media studies analyst examining climate change science issues could classify George Will (using Nisbet’s categories) as anything other than “dismissive” of climate science. [Okay, now I am looking for another 24 consensus-supporting opinion pieces through the year.] Thus, just two months into Nisbet’s analytical period (Jan–Feb 2009), and with just one opinion writer (George Will), [I see] there is reason to question the quantitative basis for the assertion that 96 percent of the Washington Post opinion pieces during the ‘pre-Copenhagen’ period supported the “consensus view” on climate science.
[On 2 April 2009, The Washington Post published George Will’s “Climate Change’s Dim Lightbulbs”. The second paragraph from this ode to climate science consensus:
Reducing carbon emissions supposedly will reverse warming, which is allegedly occurring even though, according to statistics published by the World Meteorological Organization, there has not been a warmer year on record than 1998. Regarding the reversing, the U.N. Framework Convention on Climate Change has many ambitions, as outlined in a working group’s 16-page “information note” to “facilitate discussions.”
Hmmm … Still with just one opinion writer, and only in month four, we are now looking for 72 consensus opinion pieces to preserve the 96-to-4 percent ratio.]
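The arithmetic behind these bracketed updates can be sketched as a quick back-of-the-envelope check. This is my own illustrative sketch, not Nisbet’s method: the only inputs are the sample size and percentages quoted above and Nisbet’s stated 1-in-4 sampling rate.

```python
# Back-of-the-envelope check of the reported split for Washington Post
# opinion pieces, Jan-Nov 2009 ("pre-Copenhagen"), as quoted above.
# This is an illustrative sketch, not a reconstruction of Nisbet's code.

SAMPLE_SIZE = 25      # opinion articles in Nisbet's reported sample
SAMPLING_RATE = 4     # "one out of every four articles" (per Nisbet)
PCT_DISMISSIVE = 4    # reported: 4% dismissive, 96% consensus

# Implied size of the full population of relevant opinion pieces
implied_population = SAMPLE_SIZE * SAMPLING_RATE   # 100 articles

# A 96:4 split means each dismissive piece must be matched by
# 96/4 = 24 consensus pieces.
consensus_per_dismissive = (100 - PCT_DISMISSIVE) // PCT_DISMISSIVE  # 24

# Each additional clearly dismissive column found in the sampling frame
# raises the number of consensus pieces needed to preserve the ratio:
for dismissive_found in (1, 2, 3):
    needed = dismissive_found * consensus_per_dismissive
    print(f"{dismissive_found} dismissive -> {needed} consensus needed")
```

Each dismissive column found drives the required consensus count up by 24, which is why a third dismissive piece would demand 72 supporting ones.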
How many other Washington Post opinion pieces might reasonably be called “falsely balanced” or “dismissive” in the pre-Copenhagen (or post-Copenhagen) period? In this quite specific case, within moments material comes to light that calls into question a detailed data point used to support the report’s conclusions. Is the quantitative analysis of news reporting also poorly done? How about the analysis of other journalistic outlets? I, in contrast to Nisbet, do not have hundreds of thousands of dollars to support my research, nor dozens of unpaid journalism students to comb through articles and code them. I do, however, have enough knowledge at hand to know that this specific item does not stand up to my standards of accurate and truthful analysis.
Now, Professor Nisbet has confirmed in email correspondence that he would put serial misrepresenter Bjorn Lomborg in the “consensus” camp: “Lomborg’s op-eds assert that climate is real and human caused, so he does not fall into the falsely balanced or dismissive category as measured in the analysis.” This, even though Lomborg’s work misrepresents the science and implications of climate change (see this discussion of Lomborg’s September 2009 Washington Post op-ed).
One of my graduate advisors posed an interesting question when discussing a specific book:
I really like the thesis of this book across a wide range of historical cases, many of which I know little about. It reads well and makes sense to me. However, when I look at the specific items of my expertise, I find many errors and do not believe the author used the best secondary sources to support his work. Should I take my specific expertise, and the conclusion it leads to, that this is poorly done work in one section, to infer that this is likely the case with the rest of the book? Or should I follow my agreement with the thesis and embrace the work done on those periods outside my expertise?
This led to a serious set of discussions within the class that have continued to inform my thinking to this day. While I would like to focus a discussion on Nisbet’s conclusions and recommendations, the serial nature of the data and analytical issues makes any leap to taking those conclusions seriously a rather reckless one. Sadly, the shoddy nature of much of America’s media system (and the significant PR resources supporting Climate Shift) means that too many will be making that leap.
[UPDATE NOTE: As per the comment below from Matthew Nisbet, I should have emphasized the sampling nature of the work. Reviewing the post and glancing through other Washington Post opinion pieces for the January 2009 through November 2010 period, the question remains whether this table (this material) accurately represents The Washington Post‘s publication record.]
UPDATE TWO: Others have taken up the issue of whether Nisbet’s “sampling” of The Washington Post fits a reality-based analysis. Sadly, just as with the financial figures, one has to go back and do original research to place the work in context. Tim Lambert, at Deltoid, did this, as discussed in “There’s no fooling Bryan Walsh”:
I was intrigued by some of the other numbers in Nisbet’s paper. He found that in the Washington Post in the 11 months before Copenhagen 93% of the articles reflected the scientific consensus, 5% were falsely balanced and just 2% dismissive of the consensus.
This suggests that “false balance” was all but absent from the Washington Post during that period, when in fact the Washington Post was indulging in a pathological version of false balance, deciding that George Will was entitled to his own facts. In the Washington Post a statement from the Polar Research Group can be balanced by a falsehood from George Will about what the Polar Research Group said.
I decided to look at those articles myself. I selected the sample in the same way as Nisbet, except that I used Factiva rather than LexisNexis, and used all the articles rather than 1 in 4. I found that 110 (76%) reflected the consensus view, 28 (19%) were falsely balanced, and 7 (5%) were dismissive. Falsely balanced articles reporting on the science (like this one) were very rare. Instead, the falsely balanced articles were about politics, with the science being balanced by a statement from Inhofe that it was all a big hoax.
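As a quick sanity check on Lambert’s tallies (my own minimal sketch; the raw counts come from the quoted passage above):

```python
# Recompute the percentages from Tim Lambert's full-population recount
# of Washington Post pre-Copenhagen articles (counts from the quote above).
counts = {"consensus": 110, "falsely balanced": 28, "dismissive": 7}
total = sum(counts.values())   # 145 articles in the full population

for label, n in counts.items():
    print(f"{label}: {n}/{total} = {100 * n / total:.0f}%")
```

The recomputed shares (76%, 19%, 5%) match the figures Lambert reports, and they stand in stark contrast to the 96-to-4 split in Nisbet’s table.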