Allegations of widespread “false balance” in mainstream news coverage of climate change – as well as in other mainstream media portrayals – have been voiced for more than a decade. Among the strongest voices asserting widespread false balance has been blogger Joe Romm, who has consistently voiced many similar claims.

Both Romm and advocacy organizations such as Media Matters for America raise their financial support and define their professional roles around watchdogging the mainstream media. They assert that consistent false balance at leading outlets such as the NY Times or the Washington Post remains a major barrier to political action on climate change, and that conservative media like Fox News have a powerful impact on wider public opinion.

The most recent statements by Romm about the analysis in the Climate Shift report offer an opportunity to elaborate on how we can accurately distinguish between the objective nature of patterns in mainstream media coverage and how this coverage might be subjectively and anecdotally perceived by professional bloggers and media watchdogs.

Below I address several of these questions as they apply specifically to the analysis in the Climate Shift report.  I also address several other assertions in blogger Joe Romm’s latest post.  See additional responses to Joe Romm and bloggers at Media Matters.

1. What constitutes “false balance” in news coverage of climate change?

As Chapter 3 in the Climate Shift report reviews, objectively defining what constitutes “false balance” in coverage of climate change is difficult, just as it is in coverage of politics generally.  As I return to in later sections, to validly and reliably measure false balance systematically across media coverage, researchers need to focus their measurement efforts on a clear, objective standard for assessing claims.

As researcher Max Boykoff has described, relative to the assertions that CO2 warms the planet or that humans contribute to climate change, there is overwhelming scientific agreement, and therefore a clear objective basis upon which to criticize the media if they fail to accurately convey this consensus.  However, other dimensions that still involve higher degrees of scientific uncertainty – such as the linkages between climate change and hurricane intensity, or matters of political disagreement, such as whether cap and trade legislation is an effective solution – remain subjects where journalists justifiably should emphasize a greater diversity of views. [See discussion in Climate Shift report]

[See also later discussion Q 11 on other forms of false balance that are more difficult to reliably and validly measure.]

2. Can you accurately assess systematic patterns in false balance through anecdotal observation, personal reading of articles or via blogging?

For a number of reasons, accurately assessing systematic patterns of bias across news coverage is not possible without turning to reliable and valid social scientific methods for measurement and analysis.  In fact, relying on the statements and judgments of bloggers or financially-invested advocates is probably the least reliable and valid method available for understanding patterns of portrayals across coverage.

Given a strong commitment to action on the issue and a strong political outlook, advocates like blogger Joe Romm tend to notice and highlight each instance of dismissive media commentary or falsely balanced coverage of climate change while tending to overlook (or go without mentioning) the many other instances of coverage where consensus views on climate change are strongly asserted.

As a psychological tendency, past research shows that among those most strongly committed to a position on an issue, even objectively favorable coverage is likely to be viewed as biased and hostile to their goals.  [See research on the hostile media effect.]

Every journalist has likely experienced this effect in covering the climate change debate or other intensely polarized debates, as a particular story is harshly criticized by commenters, phone callers, or letter writers from both tail ends of the spectrum of views on the issue.

These types of hostile media perceptions and claims are even more likely when the financial support for a professional blogger depends in part on maintaining the assertion that pervasive false balance in coverage exists.

Similarly, as mentioned by Romm in his latest post, Australian blogger Tim Lambert purports to have “redone” the media analysis in the Climate Shift report and to have arrived at different results for the Washington Post.

Yet Lambert’s lone reading of the articles reflects his own subjective interpretation, one colored by his many past blog posts asserting the widespread prevalence of false balance in news coverage of climate change.

For these reasons and others, it is not possible to reach valid and reliable judgments about systematic patterns in news coverage either through anecdotal blogging about specific articles or by a lone individual reading through a population of articles, especially when the individual has a deep and longstanding commitment to a belief in widespread false balance.

3. How do social scientists validly and reliably measure media content?

Over the past several decades, communication researchers and other social scientists have developed a set of quantitative content analysis procedures as a methodological tool for validly and reliably measuring trends and portrayals in media coverage.

These procedures usually start by first determining a population of relevant articles on an issue using a set of carefully chosen and tested key words.  From this population of coverage, researchers then usually draw a representative sample.  Drawing a representative sample is especially important when analyzing a large body of coverage, as in the Climate Shift report, which examines coverage across 2 years at 5 different major news organizations.
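As an illustration of this sampling step, here is a minimal sketch in Python (my own illustration; the report does not specify the software used for this step, and the article identifiers below are hypothetical) of drawing a simple random sample from a population of retrieved articles:

```python
import random

# Hypothetical population: one identifier per retrieved article, after
# duplicates and non-relevant items have been discarded.
population = [f"article_{i:04d}" for i in range(1862)]

random.seed(42)  # fixed seed so the draw can be reproduced

# Draw roughly 22 percent of the population as a simple random sample,
# comparable in scale to the sampling rate used in the report.
sample_size = round(len(population) * 0.22)
sample = random.sample(population, sample_size)

print(f"{len(sample)} of {len(population)} articles sampled "
      f"({len(sample) / len(population):.0%})")
```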

A valid coding frame is developed based on past research and further refined by pilot testing against a diversity of example articles.  Once the coding frame is finalized, a team of graduate students – usually 3-5 in number – is recruited.

Through a series of meetings, the graduate students review a diversity of example articles, rate the same articles using the coding frame, and then discuss among themselves and with the researchers where and why their coding judgments might differ.  As part of this process, additional decision criteria are developed and applied to help coders validly and reliably assess features of content relative to the coding frame.

Once the graduate student coders appear to be validly and reliably arriving at the same judgments consistently across sample articles, they are then formally and statistically tested.  In coding for content that is expected to be rare, such as falsely balanced coverage or dismissive coverage (see Q4 below for details), a purposive reliability sample should be chosen.

On this decision, consider the following analogy.  If you are testing whether or not an airport screener can reliably identify a gun or weapon in a passenger’s hand luggage, you would not randomly sample from a list of passengers and assign them to the screener’s line.  The chances that the airport screener would be presented with a “true” positive to detect would be quite low.

Instead, you would select (i.e. purposefully include) a diversity of types of weapons to conceal in the baggage as a way to test whether the screener can correctly identify luggage containing a weapon.

Similarly, in testing coders rating the appearance of consensus, falsely balanced, or dismissive media portrayals of climate science, you would want to purposively select a test sample that contains a significant number of each of these types of portrayals.

To formally test how reliably the coders are applying the coding frame to assess news coverage, each of the coders rates the same purposive test sample.  Their results for each article are entered into SPSS and compared using a standard statistical test called Krippendorff’s alpha, a measure of how often the coders agree, correcting for chance agreement.

Typically in coding complex media portrayals such as those of climate change, a Krippendorff’s alpha coefficient of .70 or higher is considered sufficient agreement.  Moreover, the more coders used in the test and the greater the number of articles tested, the higher the confidence in the results.
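For readers who want to experiment with this statistic outside of SPSS, here is a minimal sketch using the open-source Python krippendorff package (the package choice and the toy ratings below are my own illustration, not part of the report’s procedures):

```python
# pip install krippendorff numpy
import numpy as np
import krippendorff

# Toy reliability data: 3 coders x 10 articles, coded into nominal
# categories (1 = consensus view, 2 = falsely balanced, 3 = dismissive).
# np.nan would mark an article a coder did not rate.
ratings = np.array([
    [1, 1, 2, 3, 1, 1, 2, 3, 1, 1],  # coder A
    [1, 1, 2, 3, 1, 1, 2, 2, 1, 1],  # coder B
    [1, 1, 2, 3, 1, 2, 2, 3, 1, 1],  # coder C
])

# Nominal level of measurement, since the categories are unordered labels.
alpha = krippendorff.alpha(reliability_data=ratings,
                           level_of_measurement="nominal")
print(f"Krippendorff's alpha = {alpha:.2f}")
```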

The goal here is to ensure inter-subjectivity in assessing media portrayals.  With a carefully trained team of graduate students and a sufficient reliability score, the expectation is that if a separate team of 3-5 trained coders were asked to code a different probability sample drawn from the population of coverage at the same news organizations using the same coding frame, they would arrive at approximately similar results.

4. How does past research by Max Boykoff offer us a valid and reliable picture of trends relative to false balance in coverage of climate science?

Max and Jules Boykoff, in a now oft-cited 2004 paper, sampled approximately 1 out of 5 climate change-related articles (18%) appearing between 1988 and 2002 at the NYTimes, WPost, WSJ, and LA Times.  Using content analysis procedures to assess media portrayals based on a narrow and therefore measurable definition of bias (see question 1 above), they found that across these years, 52% of articles falsely balanced scientific consensus views on the reality and human causes of climate change with dismissive views.

As discussed at the opening of Chapter 3, the “balance as bias” study by Boykoff and Boykoff examining coverage through 2002 was famously featured in Al Gore’s An Inconvenient Truth and remains widely referenced today when bloggers such as Joe Romm assert that false balance remains a pervasive problem in mainstream media coverage.

Yet as Chapter 3 also notes, these same advocates overlook a 2007 follow-up study by Boykoff titled “Flogging a Dead Norm.”  In this study, Boykoff sampled 1 out of 6 articles appearing at the NY Times, WPost, WSJ, LA Times and USA Today between the years 2003 and 2006.

These papers were chosen by Boykoff because, as he described, previous research shows that they impact policy discourse and decision-making and because they also influence coverage decisions at other news outlets.  As he wrote: “News coverage in these papers therefore provides opportunities to track the dominant news frames associated with anthropogenic climate change.” (See page 2 of study.)

In his analysis, Boykoff found that across successive years the proportion of falsely balanced coverage decreased, so that by 2006, 97% of coverage across the five newspapers reflected the consensus scientific view on the reality and causes of climate change. (The findings across years were 61% of articles portraying the consensus view in 2003; 90% in 2004; 92% in 2005; and 97% in 2006.)  (See page 5 of study.)

5. In the Climate Shift analysis, how do you build on the research of Max Boykoff to validly and reliably assess trends in false balance for the years 2009 and 2010?

The goal of the Climate Shift report is to provide research and analysis that informs decision making by environmentalists, scientists, philanthropists and others as they make strategic decisions on how to move forward on the societal challenges related to climate change.

So in Chapter 3, drawing on standard social science content analysis procedures and the measures used by Boykoff, I provide the first reliable and valid data evaluating systematic patterns in mainstream coverage of the reality and causes of climate change for the key political period of 2009 and 2010.

The analysis examined the following central research questions, translated for readers at the opening of the chapter:

If false balance [in coverage of the reality and causes of climate change] had virtually disappeared from national coverage as of 2006, did this same tendency hold in 2009 and 2010, as cap and trade legislation was debated, meetings on a binding international emissions treaty were held in Copenhagen, and groups strongly dismissive of climate science were presumed to have gone into high gear, their communication efforts fueled by a controversy over the surreptitiously released emails of climate scientists?

6. How did you choose the news outlets and articles for your analysis?

Building on Boykoff’s work, I analyzed coverage at the 3 leading news organizations that he analyzed in the 2004 and 2007 studies (NYTimes, WPost, and WSJ).

Like Boykoff, I chose these 3 news organizations because previous research shows that these outlets have major impacts on policy discourse and decision-making and because these outlets also influence coverage decisions at other news outlets.

I also examined coverage at the Politico and CNN.com.  The Politico is among the leading trend-setting outlets for political reporting and is the “newspaper for Capitol Hill.”  CNN.com is the 4th most widely read news source online and also syndicates its coverage to other outlets.  Moreover, despite their prominence, coverage of climate change at Politico and CNN.com has not previously been analyzed.

[See Chapter 3 for more discussion as to why these 5 outlets were chosen.]

At these outlets, relevant climate change-related articles were identified by searching LexisNexis between Jan. 1, 2009, and Dec. 31, 2010, retrieving all articles that in the headline or lead paragraph included “climate change” or “global warming.”  The returned articles were individually reviewed by a team of graduate students; duplicates and non-relevant articles were discarded, resulting in a final population of 1,862 climate change-related articles from the five news organizations.  These articles included news stories, style features, magazine stories, film and book reviews, columns, in-house editorials, op-eds and letters to the editor.
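The LexisNexis search itself cannot be reproduced here, but the retrieval criterion is easy to illustrate. Below is a minimal sketch (my own illustration; the record structure and field names are hypothetical, not the report’s actual data format) of the equivalent filtering step:

```python
# Hypothetical records standing in for a LexisNexis export.
articles = [
    {"headline": "Senate debates climate change bill", "lead": "..."},
    {"headline": "Markets rally on strong jobs data", "lead": "..."},
    {"headline": "Copenhagen summit opens", "lead": "Global warming talks begin..."},
]

TERMS = ("climate change", "global warming")

def is_relevant(article):
    """Keep an article if 'climate change' or 'global warming' appears
    in the headline or in the lead paragraph (case-insensitive)."""
    return any(term in article[field].lower()
               for field in ("headline", "lead")
               for term in TERMS)

relevant = [a for a in articles if is_relevant(a)]
print(len(relevant), "relevant articles retained")
```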

Similar to Boykoff, a representative sample of articles was chosen for analysis.  Approximately one out of every four articles (22%) appearing at the five news organizations between Jan. 1, 2009, and Dec. 31, 2010, was randomly chosen, resulting in a representative sample of 413 news and opinion articles.

7.  How did graduate student coders validly and reliably evaluate coverage at these outlets?

The content analysis procedures followed the standard methods detailed in question #3 above.  Three graduate students were trained across multiple, several-hour meetings, reviewing and applying the coding frame to example articles and defining decision criteria for selecting among coding categories and rating judgments.

The three graduate students were asked to score each article using a measure similar to that used in the Boykoff studies, recording whether the article conveyed the “consensus view” that climate change is real and that humans are a cause; the “falsely balanced view” that it is uncertain whether climate change is real and/or that humans are a cause; or the “dismissive view” that either climate change is not occurring or, if so, humans are not a cause.

To test for inter-subjectivity and reliability in coding, the three graduate students were tested on a common, purposively chosen sample of 45 articles (see question #3 above).  As evidence of inter-subjectivity and reliability, the students agreed on coding decisions 72 percent of the time, with this test correcting for chance agreement (K-alpha = .72).

8. What were the main results of the quantitative content analysis assessing the portrayal of the reality and causes of climate change?

During the first nine months of 2009, at least 93 percent of all news and opinion articles published by the five news outlets reflected the consensus view that climate change is real and that humans are a cause.  Between October 2009 and March 2010, as the Copenhagen meetings took place and debate over Climategate occurred, 75 percent of all articles reflected the consensus view.  For the rest of 2010, as the Senate bill was debated and the mid-term elections took place, approximately 85 percent of coverage reflected scientific consensus.

I also examined each specific news organization’s coverage between Jan. 1, 2009, and Nov. 30, 2009 (before Copenhagen) and then between Dec. 1, 2009, and Dec. 31, 2010 (during and after Copenhagen).

Across the two periods, at The New York Times, The Washington Post and CNN.com, approximately nine out of 10 news and opinion articles reflected the consensus view on climate change.  At Politico during this period, at least seven out of 10 articles portrayed the consensus view.  Only at The Wall Street Journal did this trend not hold up, yet even in this case, the difference in portrayal was confined largely to the opinion pages. Across the two-year period, at least eight out of 10 news articles at the paper reflected the consensus view, but at the opinion pages, less than half of articles asserted that climate change was real and that humans were a cause.

[See the full discussion in the chapter.]

9.  Do columns published in the 2009 period leading up to Copenhagen at the Washington Post by George Will challenge the findings of the media analysis, as blogger Joe Romm claims?

This is a good example of why it is unreliable to base judgments about patterns in mainstream news coverage on anecdotal observation and/or statements made by bloggers.

The presence of these opinion articles by Will in the population of coverage at the Washington Post for 2009 is consistent with the findings from the analysis of the representative sample of coverage during this period.  Based on a sample of 25 editorials, op-eds, columns and combined letters drawn from approximately 100 that appeared at the paper between Jan. 1 and Nov. 30, 2009, the analysis estimates that approximately 4% of these editorials, columns, op-eds and combined letters to the editor presented a predominantly “dismissive” view of the reality and causes of climate change.

Keep in mind there are error bars around this estimate, just as there are in the previous studies by Boykoff and in every other previously published media analysis that uses sampling techniques.  This is the nature and limitation of representative sampling and content analysis, a technique necessary in order to analyze trends across 5 news organizations, two years, and more than 1,800 news and opinion articles.
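To make the point about error bars concrete, here is a minimal sketch (my own illustration, using the open-source statsmodels package; the report itself does not present confidence intervals this way) of the uncertainty around a proportion estimated from a sample as small as 25 articles:

```python
# pip install statsmodels
from statsmodels.stats.proportion import proportion_confint

# Illustrative case: 1 "dismissive" article out of a sample of 25 (~4%).
count, nobs = 1, 25

# Wilson score interval, which behaves better than the normal
# approximation for small samples and proportions near 0 or 1.
low, high = proportion_confint(count, nobs, alpha=0.05, method="wilson")
print(f"estimate = {count / nobs:.0%}, 95% CI = [{low:.1%}, {high:.1%}]")
```

For 1 article out of 25, the interval spans roughly 1% to 20%, which is why, as discussed below, the chapter’s conclusion does not hinge on the exact point estimate.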

Moreover, the value of using quantitative content analysis procedures like those in the Climate Shift report is to ensure inter-subjectivity in assessing media portrayals.  Given the careful training of graduate students and their reliability scores, the expectation is that if a separate team of 3-5 specially trained coders led by academic researchers were asked to code a different probability sample of coverage from the same news organizations using the same coding frame, they would arrive at approximately similar results.

Yet even given the margin of error for sampling and measurement, whether the true population parameter for the proportion of dismissive opinion page coverage is 4%, 10%, or 12%, the conclusion of the report would remain the same.  As an average tendency across articles, the opinion pages at the Washington Post – with few exceptions – consistently portrayed (i.e. in 9 out of 10 opinion page articles) the scientific consensus views on the reality and causes of climate change.

[See additionally Q 11 for discussion of other forms of false balance that should be examined and the possible reinforcing impact on reader opinion of op-eds such as Will’s.]

10. What are your qualifications for carrying out the quantitative content analysis procedures used in the Climate Shift report?

I have used similar content analysis procedures in several previously published peer-reviewed studies of media coverage of science-related policy debates.  Two of these articles are the most heavily cited studies published at their respective journals over the past decade.  These journals are leading outlets in the fields of science communication and political communication, respectively.  In recognition of my work on these studies and others, I serve on the editorial boards at these journals.

As a referee at these journals and many others, I regularly review similarly conducted content analysis studies of media coverage of science-related debates.  I also teach these same quantitative content analysis procedures in undergraduate and graduate research methods courses.

11.  How do you intend to build on the data collected and analyzed in the chapter on media coverage?

For the media analysis, I have also collected similar data specific to coverage at major regional newspapers in Michigan, West Virginia and South Carolina, data that, once analyzed, will provide indicators of what the local media environment was like in these politically important states and districts during the cap and trade legislative debate.  The national data analyzed in the report, along with this local analysis, will be combined into a single longer study focused on media trends and submitted to a relevant journal in the field of science communication or political communication.

12.  What about examining other areas of coverage and other forms of false balance?

The analysis conducted in the Climate Shift report is the first publicly released academic study to assess coverage of climate science during the key political period of 2009 and 2010.  I encourage other independent, academic researchers using standard content analysis procedures to similarly conduct valid and reliable measurement of trends in media portrayals of climate change during this period.

As I conclude in the chapter, other dimensions of false balance are also important to examine, though much more difficult to objectively assess.  In particular, it is important for academic researchers to examine how claims about the economic costs of climate action were portrayed in media coverage and what impact these claims might have had on public opinion.

Also, as I conclude in the chapter and discuss more in depth in Chapter 4, even a few falsely balanced or dismissive articles such as those by George Will can have important reinforcing effects on the views of readers already doubtful about climate change or cap and trade legislation, especially in the context of an economic recession and right-of-center political mood during the period 2009 to 2010.

13. How was Max Boykoff consulted on the media analysis and what role did he play as a reviewer?

Before beginning the media research, I spoke with Max Boykoff at length over the phone about the content analysis methods he used in the 2004 and 2007 studies and asked him to provide feedback on coding categories, how news organizations were chosen, the sampling strategy pursued, the training of coders, and other relevant studies that could be used to inform the analysis, interpretation of results and write-up.

Prior to beginning formal work on this project and consulting him as a reviewer, I had also spoken with him at length about the media issues addressed in the report, as part of our ongoing relationship as scholars examining dimensions of climate change communication.

Following data collection, I sent Max tables of preliminary findings and again spoke with him over the phone about their interpretation and the methods and procedures used.

Max then reviewed a full draft of the report as well as the specific media analysis chapter.  He provided written feedback as part of his review.  During the revision process, I incorporated his recommendations on the media chapter and other parts of the report.

14. Why were reviewers paid an honorarium?

Boykoff and the other 4 reviewers were asked to review a draft report that totaled more than 21,000 words, drew upon hundreds of citations to past studies and resources, and featured 23 tables and 12 figures worth of complex data and information.  Several of the reviewers were consulted early on about research design, several reviewed multiple revisions of chapters, and all were consulted over the phone or in person following the first round of review.  In light of the time and effort they dedicated to reviewing the report, they were offered an honorarium.

Scholars commonly receive honoraria for reviewing book-length manuscripts similar in scope to the Climate Shift report.  For example, I have been paid an honorarium for reviewing book and textbook manuscripts by a range of publishers.

15. In the report, as blogger Joe Romm asserts, do you downplay or dismiss the role of Fox News and conservatives on public perceptions, thereby creating a false narrative?

Romm falsely asserts that I misrepresent findings from recent papers examining the impacts of Fox News on viewer perceptions and the impact of Climategate on wider public opinion.  Romm also falsely asserts that “I dismiss” the impact of conservative media on public opinion, attempting to contrast this supposed dismissal with alleged statements from “other media experts he has talked to…”

Nowhere in the report do I dismiss the influence of Fox News or conservative media on public opinion. Instead I draw on studies and data to call attention to other influences that might be just as – if not more – influential than conservative media on perceptions.

It is important to recognize and understand these influences in order to effectively engage the public. On this dimension, see this discussion I did at Climate Central.

Consider the following summary of where I address in the report the role of Fox News, conservative media, or conservatives generally:

  • The media analysis in Chapter 3 is very clear in noting that the findings from the opinion pages at the Wall Street Journal are consistent with other studies showing that News Corp-owned outlets like Fox News tend to dismiss the reality and causes of climate change.
  • At the end of Chapter 3, I emphasize that even though few instances of false balance or dismissive coverage might be found in the mainstream outlets examined, even a few op-eds or instances of coverage might have an influence on audiences, especially those predisposed to already doubt climate change.
  • In Chapter 4, I discuss findings from a study led by my colleague at AU Lauren Feldman, who finds that Fox News (not surprisingly) features a higher proportion of dismissive statements about climate change than either MSNBC or CNN.

Instead of dismissing the impact of Fox News, conservative media, or Climategate, the report suggests that understanding the impact of these outlets is only one part of the much bigger puzzle of what has fueled a downturn in public belief and concern in climate change since 2007.

As an average tendency, ideologically slanted media are associated with limited and/or reinforcing effects on opinion, especially in today’s world of audience fragmentation.  This complex, interactive relationship has been tracked across more than 40 years of research in the field of media effects (a field in which I have published extensively), is suggested by the few studies conducted examining how cable news influences perceptions of climate change, and is accurately reviewed and referenced in the chapter.

Consider other findings from Jon Krosnick’s study of Fox News viewers that are discussed in the report but so far conveniently overlooked by Romm, since they speak to the relatively limited influence of Fox News in shaping views of climate change.  In the Krosnick study, despite the dismissive content frequently found at Fox News, a majority of even the heaviest Fox News viewers endorsed the views of mainstream scientists.

Here is what Krosnick writes:  “It is interesting to note that even among the heaviest Fox News viewers, about 50% or more endorsed the views of mainstream scientists. In no instance do we see a sizable majority of Fox News viewers disagreeing with most mainstream scientists or expressing little trust in scientists.”

Finally, in attempting to assert that I am creating a “false narrative” and “dismissing” the impact of Fox News or conservative media, Romm again tries to pit what I wrote in the chapter against the findings of research by two colleagues and collaborators – Ed Maibach and Lauren Feldman – who both reviewed the chapter, Maibach as a formal reviewer and Feldman informally as my neighbor across the hallway at American University.

16. Blogger Joe Romm cites a 2009 article at the New York Times quoting you as suggesting that some of the sharp responses from people like Romm to the George Will columns at the Washington Post might backfire in several ways.  What was the context for this analysis provided to the NY Times?

I had recently published a paper at the journal Environment: Science and Policy for Sustainable Development. I was asked by Andrew Revkin, based on this paper, to discuss the likely impact of An Inconvenient Truth on wider audiences and also the possible effects on public opinion of the type of elite debate (especially online) that was escalating over the columns by Will at the Washington Post, a debate consistent with the more general patterns of polarizing and reinforcing elite-driven controversy that I had addressed in the paper.

Moreover, the expert comments I provided to the New York Times were based not only on the recent paper at Environment, which directly addresses these questions, but also on the body of work I had done across more than a dozen peer-reviewed studies, articles and other scholarly works in which I have examined the framing of science-related policy debates and the influence on public opinion.

Here is the full set of quotes featured in the New York Times article:

In a paper being published in the March-April edition of the journal Environment, Matthew C. Nisbet, a professor of communications at American University, said Mr. Gore’s approach, focusing on language of crisis and catastrophe, could actually be serving the other side in the fight.

“There is little evidence to suggest that it is effective at building broad-based support for policy action,” Dr. Nisbet said. “Perhaps worse, his message is very easily countered by people such as Will as global-warming alarmism, shifting the focus back to their preferred emphasis on scientific uncertainty and dueling expert views.”

But Dr. Nisbet said that for Mr. Will, there was little downside in stretching the bounds of science to sow doubt.

Criticism of Mr. Will’s columns, Dr. Nisbet said, “only serves to draw attention to his claims while reinforcing a larger false narrative that liberals and the mainstream press are seeking to censor rival scientific evidence and views.”

[For an argument for the need to end the “war” mentality on climate change as exemplified by the reaction to George Will, see this essay by Jonathan Foley, director of the Institute on the Environment at the University of Minnesota.]

17.  Blogger Joe Romm asserts that the media materials related to the report misrepresent the content analysis findings.  Is this true?

Romm has pulled phrases from media materials – which by necessity have to condense a 100-page, data-filled, technical report into a few pages and bullet points – to assert that the media materials are in some way intentionally misleading.

Romm, for example, cites the phrase from the executive summary that the “era of false balance in media coverage is over.” This phrase is a direct reference to the findings evaluating the 5 trend-setting mainstream news organizations, where coverage – with the exception of the WSJ opinion pages – was found to overwhelmingly portray the consensus views on climate science.  This finding is consistent with Boykoff’s findings that as of 2006, false balance relative to the fundamentals of climate science had virtually disappeared from coverage at the 5 national trend-setting organizations he examined.  [See Q #8 above.]

Romm also cites the following sentence from the executive summary as misleading: “In comparison to other factors, the impact of conservative media and commentators on wider public opinion remains limited.”  This statement is directly in line with the full discussion and focus on factors including media coverage featured in Chapter 4 of the report.  See also discussion at #15 above.

As Romm also quotes, Australian blogger Tim Lambert points to an unspecified “key finding” that in Lambert’s view intentionally leaves out analysis of the Wall Street Journal. I can only assume he is referring to the sidebar of the Climate Shift Project web site, a bulleted list that links readers directly to the section of the report where the WSJ findings and other News Corp outlets are discussed at length.

These tactics by Romm and Lambert appear aimed at intentionally distorting the report. They also speak to a deep and underlying contempt for and distrust of journalists. The two bloggers appear to believe that journalists would run a story without actually reading the key relevant sections of the report or speaking to the researcher about the complexities of the data, the background of the findings, the nature of the measurement of topics such as false balance, and the limits of what can be said in terms of conclusions.

The irony is that it is bloggers like Romm, who have never spoken to me about the report, who have engaged in a series of errors and distortions.

18. Who funded the report?

The report was funded by a $100,000 grant from the Ecological Innovation program at the Nathan Cummings Foundation. The goal of the Ecological Innovation program is to “address the challenges of climate change and to promote vibrant and sustainable ecological systems that support healthy communities and a just economy.” Past recipients of funding from the Ecological Innovation program include universities such as the Massachusetts Institute of Technology, the University of Wisconsin-Madison, and the University of Maryland; media organizations such as PBS Frontline/World and the Investigative Reporting Workshop; think tanks such as the Center for American Progress, the Breakthrough Institute and the Third Way Institute; and environmental groups such as the Clean Air Task Force and Friends of the Earth.

Deliberately reflecting a broad range of views about the climate change challenge and where to go from here, the report does not make any specific recommendation or endorsement as to policy direction or strategy.  Instead, in the report I provide data and analysis intended to inform decision-making by a broad community of organizations working on the climate change and energy problem, including most especially major environmental groups and science organizations.

I have worked with this community for the past half decade and continue to work with this community as I dedicate my scholarship, research, and teaching to identifying barriers to societal action on climate change and possible strategies for overcoming these barriers.