Get Energy Smart! NOW!

Blogging for a sustainable energy future.


Nisbet’s “Climate Shift” and where did they get these numbers (Item #374)

April 21st, 2011 · 4 Comments

The release of Matt Nisbet’s Climate Shift report (and the opening of the Climate Shift Project website) has been surrounded by a storm of controversy, opened by Joe Romm’s critique of Nisbet’s financial analysis (follow-ups here and here), Chris Mooney on science ‘balance’, and Media Matters’ critique of Nisbet’s media analysis. I read a good deal of the report before reading anyone else’s work, and my copy is covered with red-inked comments, questions, and challenges. A number of those red-lined items paralleled others’ critiques. Honestly, this frustrates me, because the questions Nisbet asks, and the answers to them, are of genuine interest and importance. The error-prone nature of the work makes it difficult to assess the value of its conclusions and recommendations. Sadly, the report truly seems to require a fine-tooth comb, as there appear to be problems, major and minor, throughout the study.

To provide some perspective, here is an example of one of those tiny items: Table 3.1 from the report intrigued me. [UPDATE: Nisbet “randomly sampled within month one out of every four articles … resulting in a representative sample of 413 news and opinion articles.”]


Let us look at just one column [of this table]: [the sample of] Washington Post opinion articles “pre-Copenhagen”. For 1 January through 30 November 2009, the table tells us that there were 25 opinion articles relevant to climate change/global warming, of which 96 percent (24) were “consensus” and 4 percent (1) were “dismissive”.

Hmmmm …

[Recognizing that this is a sampling …]

Hmmmm …

Just one Washington Post opinion piece [out of every twenty-five on or related to climate science], from 1 January 2009 through 30 November 2009 was “dismissive” of climate science.

This was the period of what the Columbia Journalism Review called The Will Affair.  (For a partial annotated bibliography, see: The Will Affair: Struggling to keep up.)

On 13 February 2009, The Washington Post published George Will’s “Dark Green Doomsayers“, a truthiness-laden set of misleading and simply false statements attacking climate science and scientists.  Okay, does this count as that single “dismissive” piece? [Which means that there should be 24 ‘consensus’ pieces.]

On 26 February 2009, The Washington Post published George Will’s “Climate Science in a Tornado” (in which Will called The New York Times prostitutes). This paean to climate science began:

Few phenomena generate as much heat as disputes about current orthodoxies concerning global warming. This column recently reported and commented on some developments pertinent to the debate about whether global warming is occurring and what can and should be done. That column, which expressed skepticism about some emphatic proclamations by the alarmed, took a stroll down memory lane, through the debris of 1970s predictions about the near certainty of calamitous global cooling.

No reasonable media-studies analyst examining climate change science coverage could classify George Will (using Nisbet’s categories) as anything other than “dismissive” of climate science. [Okay, now I am looking for another 24 climate-science-supporting opinion pieces through the year.] Thus, just two months into Nisbet’s analytical period (Jan-Feb 2009), and with just one opinion writer (George Will), [I see] there is already reason to question the quantitative basis for the assertion that 96 percent of the Washington Post opinion pieces during the ‘pre-Copenhagen’ period supported the “consensus view” on climate science.

[On 2 April 2009, The Washington Post published George Will’s “Climate Change’s Dim Lightbulbs”. The second paragraph of this ode to the climate science consensus:

Reducing carbon emissions supposedly will reverse warming, which is allegedly occurring even though, according to statistics published by the World Meteorological Organization, there has not been a warmer year on record than 1998. Regarding the reversing, the U.N. Framework Convention on Climate Change has many ambitions, as outlined in a working group’s 16-page “information note” to “facilitate discussions.”

Hmmm … Still with just one opinion writer, and only four months into the period, we are now looking for 72 “consensus” opinion pieces to preserve the 96-to-4 ratio.]
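The arithmetic behind that running tally is simple but worth making explicit. A minimal sketch (the function name and structure are mine, for illustration only):

```python
def consensus_needed(dismissive_count, consensus_pct=96, dismissive_pct=4):
    """Number of 'consensus' pieces required for the stated ratio to hold,
    given a known count of 'dismissive' pieces."""
    return dismissive_count * consensus_pct // dismissive_pct

# One dismissive Will column implies 24 consensus pieces at 96-to-4;
# three dismissive columns imply 72.
print(consensus_needed(1))  # 24
print(consensus_needed(3))  # 72
```

Every additional “dismissive” piece identified raises the number of “consensus” pieces the Post would need to have published for Nisbet’s ratio to survive.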

How many other Washington Post opinion pieces might reasonably be classified as “falsely balanced” or “dismissive” in the pre-Copenhagen (or post-Copenhagen) period? In this quite specific case, within moments, material comes to light that calls into question a detailed item used to support conclusions. Is the quantitative analysis of news reporting also poorly done? How about the analysis of other journalistic outlets? I, unlike Nisbet, do not have hundreds of thousands of dollars to support my research, nor tens of unpaid journalism students to comb through articles and code them. I do, however, have enough knowledge at hand to know that this specific item does not stand up to my standards of accurate and truthful analysis.

Now, Professor Nisbet has confirmed in email correspondence that he would put serial misrepresenter Bjorn Lomborg in the “consensus” camp: “Lomborg’s op-eds assert that climate is real and human caused, so he does not fall into the falsely balanced or dismissive category as measured in the analysis.” This is so even though Lomborg’s work misrepresents the science and implications of climate change (see this discussion of Lomborg’s September 2009 Washington Post op-ed).

One of my graduate advisors posed an interesting question when discussing a specific book:

I really like the thesis of this book across a wide range of historical cases, many of which I know little about. It reads well and makes sense to me. However, when I look at the specific items within my expertise, I find many errors and do not believe the author used the best secondary sources to support his work. Should I take my specific knowledge, which leads me to conclude that one section is poorly done, as evidence that the same is likely true of the rest of the book? Or should I follow my agreement with the thesis and embrace the work done on the periods outside my expertise?

This led to a serious set of discussions within the class that has continued to inform my thinking. While I would like to focus on Nisbet’s conclusions and recommendations, the serial nature of the data and analytical problems makes any leap to taking those conclusions seriously a rather reckless one. Sadly, the shoddy nature of much of America’s media system (and the significant PR resources supporting Climate Shift) means that too many will be making that leap.

[UPDATE NOTE: As per the comment below from Matthew Nisbet, I should have emphasized the sampling nature of the work. Reviewing the post, and beginning a look through other Washington Post opinion pieces for the January 2009 through November 2010 period, the question remains whether this table (this material) accurately represents The Washington Post‘s publication record.]

UPDATE TWO: Others have taken up the question of whether Nisbet’s “sampling” of The Washington Post fits a reality-based analysis. Sadly, just as with the financial figures, one has to go back and do original research to place the work in context. Tim Lambert, at Deltoid, did this, as discussed in There’s no fooling Bryan Walsh:

I was intrigued by some of the other numbers in Nisbet’s paper. He found that in the Washington Post in the 11 months before Copenhagen 93% of the articles reflected the scientific consensus, 5% were falsely balanced and just 2% dismissive of the consensus.

This suggests that “false balance” was all but absent from the Washington Post during that period, when in fact the Washington Post was indulging in a pathological version of false balance, deciding that George Will was entitled to his own facts. In the Washington Post a statement from the Polar Research Group can be balanced by a falsehood from George Will about what the Polar Research Group said.

I decided to look at those articles myself. I selected the sample in the same way as Nisbet, except that I used Factiva rather than LexisNexis, and used all the articles rather than 1 in 4. I found that 110 (76%) reflected the consensus view, 28 (19%) were falsely balanced, and 7 (5%) were dismissive. Falsely balanced articles reporting on the science (like this one) were very rare. Instead, the falsely balanced articles were about politics, with the science being balanced by a statement from Inhofe that it was all a big hoax.
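Lambert’s full-population counts make it possible to ask how likely a near-zero false-balance figure would be under genuinely random sampling. A minimal sketch using only the standard library (the sample size of 36, roughly one in four of Lambert’s 145 articles, is my assumption; neither analysis presents this computation):

```python
from math import comb

# Lambert's full-population counts for WaPo opinion pieces, Jan-Nov 2009:
# 110 consensus, 28 falsely balanced, 7 dismissive; 145 total.
POP = 145
FALSE_BALANCE = 28
SAMPLE = 36  # assumed: roughly 1 in 4 of 145 articles

def prob_at_most(k, successes=FALSE_BALANCE, pop=POP, n=SAMPLE):
    """Hypergeometric P(X <= k): the chance that a random sample of n
    articles contains at most k falsely balanced pieces."""
    total = comb(pop, n)
    return sum(comb(successes, i) * comb(pop - successes, n - i)
               for i in range(k + 1)) / total

# If false balance were really ~19% of the population, a random 1-in-4
# sample showing 5% or less (i.e., 2 or fewer of 36) would be unlikely.
print(prob_at_most(2))
```

Under these assumptions the probability comes out well under five percent, which is why the gap between the sampled and full-population figures invites scrutiny.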


Tags: analysis · climate change · environmental · George Will · journalism · Washington Post

4 responses so far ↓

  • 1 Matthew Nisbet // Apr 21, 2011 at 9:30 am

    Before writing about a technical analysis, you should talk to the author of the analysis.

What is the standard that should be used for this? A question to Professor Nisbet:

  • How many (and which) environmental organizations did you ask to read and comment on a prepublication version of your report?
  • Media content analysis is one of my academic specialties. Two studies I published on coverage of science-related policy debates using the same methods are among the most cited studies at those journals over the past decade, top journals in the field where I serve on the editorial boards.

    Max Boykoff, whose approach I replicate, was consulted on the design early on and reviewed the full results and has no problems with the methodology or findings.

    If you had called me and we had spoken, I would have told you that what you think is an error reflects analysis based on a representative sample of opinion coverage at the WPOST rather than the full population of opinion articles. Boykoff uses a similar sampling strategy in his studies, as do the great majority of studies by media content researchers.

    The post is being modified to discuss that this is a sampling … that misrepresents the tenor / tone of the Washington Post‘s published opinion pieces over this time period.

    Please call me so you can issue a correction to your post. The post is simply irresponsible and misleading to your readers.

    Similar mistakes relative to understanding sampling were made by Media Matters for America, who similarly never called me, the researcher, to talk about the technical analysis.

  • 2 TTT // Apr 21, 2011 at 6:13 pm

    Nisbet, you really just can’t deal with the Internet, can you? Your famous meltdown on Scienceblogs, about how you just can’t communicate via blogs the way everybody ought to hear you, has not been forgotten. Is that why you keep getting all sniffy that your critics didn’t give you a phone call first?

    Maybe YOU should have made some phone calls to actual environmental scientists before you tried to re-cast their identifications of Bush-era scientific obstructions as being mere “ideological views.”

    Maybe YOU should have called actual climate scientists before you made that ridiculously irresponsible and made-up graph that suggests environmental groups had access to the entire lobbying budget of Wal-Mart and General Electric.

    And maybe, no, definitely, you should not mistake your “media content analysis” skills, such as they are, for a real science that qualifies you to judge the particulars of a problem like this.

  • 3 Terrible climate messaging from a Google communications specialist+* // Apr 22, 2011 at 12:28 pm

    […] Nisbet’s “Climate Shift” and where did they get these numbers (Item #374) […]

  • 4 Washington Post advocated practicing journalism: Did they fulfill the mandate? // Jun 9, 2011 at 9:09 pm

    […] some discussions (with further links) about Washington Post climate journalism, see (for example) Nisbet’s “Climate Shift” and where did they get these numbers (Item #374) and Energy Bookshelf: The Lomborg Deception … leads to a question: “Does the Washington Post […]