Below is the written dialogue between Robert Tracinski of The Federalist and Justin Gillis of The New York Times. No author of surrealistic fiction could have done better.
“When I challenged him about the ‘hottest year on record,’ a New York Times reporter explained that his readers are too dumb to understand numbers,” writes Robert Tracinski.
I recently wrote about the wretched reporting on the claim that 2016 was the “hottest year on record,” using as my main example a New York Times article by Justin Gillis that gave his readers none of the relevant numbers they could use to evaluate that claim. None of them. If you search for the actual numbers, you will eventually find that the effect they are claiming, the actual amount by which this year was hotter than previous years, is smaller than the margin of error in the data.
Shortly afterward, I got a revealing response from Gillis. I’ll fill in all the details for you, because the whole thing is an important case study in why you can’t trust mainstream reporting on global warming. But let’s just cut to the chase. When I asked him why he didn’t include the basic numbers we need to understand his story, he gave me this reply:
I don’t believe this for a minute, and not just because I’ve lived through 30 years of New York Times readers telling me how terribly intelligent and sophisticated they are. The newspaper actually does have an educated audience, and more to the point, if its readers lack knowledge on a subject, the reporters are there to analyze the issues and explain them. That’s supposed to be their job.
But this exchange with Gillis started with him telling us that he doesn’t think it’s his job. As far as he’s concerned, the data is somebody else’s department. He points out that there was also an “infographic” associated with the article—prepared by and credited to somebody else—and that anyone who cares to peruse it can “positively drown yourself in numbers.”
In this infographic, we get a plot of monthly temperatures, with each dot representing a different month, going all the way back to 1880. Only six months out of the entire 137-year history are individually labeled, only two of them since 1990—February and March of 2016, which represent the tail end of a strong El Nino, a naturally occurring, temporary warm cycle. From that graph, could you actually reconstruct any meaningful data? Could you reconstruct averages for one year versus another, even approximately?
Don’t get out your ruler; it’s a rhetorical question.
The other graphic is even more useless for our purposes. It represents monthly temperatures as spirals emanating from the center of a circle and overlapping one another, making it even harder to separate out one year from another or discern the exact amount of difference.
And where are the error bars? It is common for scientists to represent the range of error in their measurements by presenting a measurement not just as a single point, but as a bar covering an entire range. Not just “1.04 degrees,” but “somewhere between 0.94 and 1.14 degrees.” Every scientific measurement has a limit to its precision, based on the instruments and methods that are used. For a long time, temperature measurements were collected, not by some super-precise digital apparatus, but by having human beings walk up to a thermometer and visually read off the temperature from it and write it down. The size of the thermometer, the limits of human eyesight, and differences between individuals—one person might be more scrupulously precise than another—all mean that you have to make allowance for an inherent inaccuracy in the measurements.
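The point about error bars can be made concrete in a few lines. This is an illustrative sketch only: it uses the “1.04, somewhere between 0.94 and 1.14” example from the paragraph above, and the second year’s value (1.03) is an assumed number chosen for illustration.

```python
# A sketch of what error bars encode: a measurement is a range, not a
# point. Figures are illustrative, based on the 1.04 +/- 0.10 example.

def interval(value, margin):
    """Return the (low, high) range implied by value +/- margin."""
    return (value - margin, value + margin)

def indistinguishable(a, b):
    """Two measurements whose error ranges overlap cannot be told apart."""
    return a[0] <= b[1] and b[0] <= a[1]

this_year = interval(1.04, 0.10)  # roughly (0.94, 1.14)
last_year = interval(1.03, 0.10)  # roughly (0.93, 1.13) -- assumed value

print(indistinguishable(this_year, last_year))  # prints True
```

When the two ranges overlap almost entirely, as here, calling one year “hotter” than the other is a claim the measurements cannot support.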
Yet in that first New York Times graph, the monthly temperatures are represented by tiny little circles that represent a range of perhaps a few hundredths of a degree—much, much less than the actual margin of error in the data. This conveys a sense of false precision.
A graph is not the same thing as data. It is a picture of data. It’s easy to draw that picture in a way that is impossible for the reader to translate back to actual numbers, or in a way that is misleading. For example, by adjusting the scale on the graph, it’s easy to make small differences look big. You can make hundredths of a degree, which are statistically meaningless in this case, look like they really mean something.
Pushed a little on this, Gillis conceded that “there is no one number” for last year’s average global temperature, because it “depends on which of the five datasets you care to inspect,” and he went on to point to other complications. So because there are a lot of numbers that he could have presented, he decided to give us none?
This is, pretty obviously, a dodge. His original article did not tell us that the numbers are complicated and that they vary depending on who is collecting the measurements. His original article simply hyperventilated about how amazingly hot it is. All the complications are just his fallback position when challenged.
I agree that the data is complicated. If you really want to dig into it, you have to look at things like this.
But you, John Q. Public, should not have to wade through all of that. As I put it in my exchange with Gillis: “There’s a lot of data, and it’s complex? If only there were people whose job is to explain data to the public.” Those people are called “science journalists.” Or would be if there were any.
So let me take a moment to do Gillis’s job for him and present and explain a little of the data to you.
In my previous article, I already pointed to the one set of data that was actually reported more or less properly, with straightforward numbers and a margin of error. The numbers from the British Met Office (Meteorological Office) were reported on the same day as Gillis’s article and showed a difference in average temperature between 2015 and 2016 of 0.01C and margin of error of 0.10C, ten times larger. So the accurate headline is not “2016 Breaks Record for Hottest Year Ever,” but “Last Year’s Temperatures Indistinguishable from Previous Year.” It is crushingly boring, but truthful.
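The arithmetic behind that “boring” headline fits in a few lines. This sketch uses the Met Office figures as quoted above and applies the simplest possible test, comparing the difference directly against the stated margin of error:

```python
# Met Office figures as reported: 2016 warmer than 2015 by 0.01 C,
# with a stated margin of error of 0.10 C.
difference = 0.01  # degrees C, 2016 minus 2015
margin = 0.10      # degrees C, stated uncertainty

# A claimed record is only meaningful if the difference exceeds
# the uncertainty in the measurements themselves.
is_record_meaningful = abs(difference) > margin
print(is_record_meaningful)  # prints False: the gap is a tenth of the margin
```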
Gillis’s report was supposedly about two different sets of numbers produced by NASA and NOAA. If you hack through this lovely table, you find that the difference between the two years in NASA’s GISS Surface Temperature Analysis is 0.12C. It’s slightly more (0.18C) if you use the “meteorological year” that follows the seasons and goes from December to November. But that’s not what most articles were reporting, so let’s stick with the regular calendar year. If you dig through this FAQ—isn’t this fun?—you find that NASA claims a margin of error for recent measurements of plus or minus 0.05C and for older measurements plus or minus 0.10C. That’s a bit dubious, as I’ll explain in a bit, but NASA admits, in nicely passive bureaucratese, that “accurate error estimates are hard to obtain.” So there’s some margin of error in their margin of error.
The data from NOAA, the National Oceanic and Atmospheric Administration, is less dramatic. Last year surpassed 2015 by only 0.04C. I couldn’t find a clear labeling of the margin of error for this number, but a description from 2014 gives it as plus or minus 0.09C. It’s certainly hard to imagine that any of these numbers are remotely accurate enough to make 0.04C a significant difference.
Oh, and since we’re drawing from all these different sets of numbers, we might as well throw in measurements of temperatures higher in the atmosphere taken by weather satellites. For the satellite data, a set known as UAH (after the University of Alabama in Huntsville) shows no particular warming trend for a very long time.
Roy Spencer reports that the difference in satellite measurements between 2016 and 1998—the year of the last big El Nino warm cycle—is only 0.02C, within a 0.10C margin of error. Another satellite data set, RSS, confirms this result.
The comparison to 1998 is particularly important, because if the headline is that this year is not significantly hotter than temperatures 19 years ago, that takes a lot of wind out of the “climate change” hysteria. It means we’re not seeing the runaway takeoff in global temperatures that the global warming theory predicted. As Judith Curry has been pointing out, recent temperatures are actually at or below the bottom range for all of the global warming predictions. That is the relevant context for this story, the failure of the data to match the theory, not some infinitesimal difference between this year and last.
Moreover, there is good reason to think that the margin of error in this temperature data is much larger than claimed. NASA and NOAA frequently “adjust” the temperature data to make up for changes in the way it is gathered. Somehow, when this happens, the data always adjusts to fit the theory, but the theory never adjusts to fit the data. The size of these adjustments is often larger than one tenth of one degree. But as Richard Lindzen puts it, “If you can adjust temperatures to two tenths of a degree, it means it wasn’t certain to two tenths of a degree.” So the margins of error are probably a good deal larger than advertised.
Gillis is right. There are a lot of different sets of data, and the issue is complex. So why didn’t he explain any of that complexity to readers of the New York Times? Because complexity leaves room for doubt, and on this issue, doubt cannot be permitted.
Speaking of which, you’ll notice that I just quoted Roy Spencer, Richard Lindzen, and Judith Curry. Who are these people, just some crazy bloggers? Enemies of science? Dr. Spencer is a former NASA climatologist and now a principal research scientist at UAH. Dr. Lindzen is emeritus Alfred P. Sloan professor of meteorology at the Department of Earth, Atmospheric, and Planetary Sciences at MIT, and Judith Curry was, until her retirement just a few weeks ago, chair of the School of Earth and Atmospheric Sciences at the Georgia Institute of Technology.
A science journalist interested in an accurate, balanced assessment of the temperature data might talk to and quote people like this. The New York Times and other big mainstream media organizations long ago adopted an explicit policy of blacklisting these dissidents.
So if these media reports don’t provide data and explain the numbers, if they don’t give us a range of interpretations from different scientists, what do they do instead? Or to put the question differently, what exactly is Justin Gillis’s job?
It’s stuff like GISS head Gavin Schmidt telling us that “this year was ridiculously off the chart,” or NOAA’s Deke Arndt telling us “We’re punching at the ceiling every year now, that is the real indicator that we’re undergoing big changes.” Even the text for the infographic sidebar is full of this sort of thing, such as Penn State’s Michael Mann assuring us “We expect records to continue to be broken as global warming proceeds.” And, “One could argue that about 75 percent of the warmth was due to human impact.”
In short, a New York Times reporter’s job is to repeat the talking points of government agencies and transcribe quotations from partisans for one side of the scientific and political debate. Gillis refers to this as “a 1970s journalism model,” as if that’s supposed to reassure us, but there’s another name for it. It’s “press-release journalism”—journalism that consists, not of questioning or investigation or skepticism, but of restating partisan press releases. It’s the lowest, laziest form of journalism.
Notice the result for the reader. All we get are broad statements telling us what overall conclusions we are supposed to draw about global warming, with no attempt to present or explain the actual data so we can judge the issue for ourselves. Now we can really understand the full meaning of Gillis’s assertion that his readers are too dumb to understand numbers.
This translates to: we think you’re all a bunch of idiots, and it’s our function to tell you what to think.
That, unfortunately, is all you’re likely to get these days on the subject of global warming from the mainstream media.