I wrote an article back in 2010 called “Hell, Fire, and (Global) Warming” in which I showed the similarities between the type of climate science evangelism practiced by Al Gore and the beliefs of certain sects of Christianity (though the comparison could be made with just about any major world religion). The idea I was trying to get across was that for many, climate science has taken on the psychological and emotional role that religion once held (and still holds for many). I think this thesis has been proven time and again in the years since I wrote that article. (It’s worth noting that philosopher Michael Ruse recently explored the idea that the broad label “Darwinism” has functioned, for many, in the role of religion in modern culture. Here’s a link to a review of his book.) I’ve wondered since then about an important concept in argumentation called falsification and how it functions in climate science. I read a couple of arguments recently that brought this idea back to the surface.[1]
Article Summary
In what follows, I will briefly summarize the general themes in this article. Below, I provide more detail for readers interested in going deeper.
The claim that ‘the planet is catastrophically warming and that warming is caused primarily by humans’ has gotten to a place culturally where holding even a moderately different position is met with something nearing intolerance. Is this warranted? Is it the result of solid evidential and argumentative grounding, or more of social force and cognitive bias? To answer these questions, we can test the rationality of the claim by determining whether there are criteria by which it can be shown to be false. In other words, we should determine whether the theory is subject to falsification. Falsification is the idea that for any rational claim, there are clear ways that the evidence and arguments supporting the claim could be shown to be false; if there are not, the rationality of the claim should be called into question.
Antony Flew developed a criterion of falsification and applied it to religious belief, and Karl Popper developed criteria that he says should be used in scientific theorizing. Flew and Popper are similar in that their falsification models apply to a theory’s conclusion—the broad claim being made (like “God exists” or “Objects fall to the earth in a vacuum with an acceleration of 9.8 m/s²”). One critic argues that for complex theories like global warming, this model of falsification is too “coarse-grained.” Rather, we should consider the Duhem-Quine thesis, which says that a complex hypothesis may not stand or fall on direct empirical evidence alone but is the product of a ‘bundle’ of hypotheses that each have their own confirming empirical evidence.
The Duhem-Quine thesis is part of a larger logical framework called “underdetermination,” which focuses on the complexity involved in determining if or why a theory fails. Sometimes a theory is so complex that, when it appears not to be supported by the evidence adduced in its favor, it’s very difficult to determine which bit of evidence, or which supporting sub-theory, doesn’t work and so is the culprit. In other scenarios, the evidence used to support one theory can be used to support another—sometimes opposing—theory. In both cases, the theory is considered underdetermined. Because underdetermination is a very real challenge for complex theories, being non-dogmatic and open to alternate views (as well as developing falsification criteria for the essential sub-theories and evidence) becomes even more important if we want to avoid irrationality.
But some may argue that catastrophic global warming is so widely accepted by so many scientists that it has achieved the status of being out of reach of critics for rational people—sort of like “flat-earth theories.” Robert McKim provides us with a set of criteria that helps us evaluate this claim. He says that if even a single person who is wise, who thinks carefully and judiciously, who is intelligent, clever, honest, reflective, and serious, who avoids distortion, exaggeration, and confabulation, who admits ignorance when appropriate, and who has relied on what seemed to him or her to be the relevant considerations in acquiring the belief nevertheless holds an opposing view of the theory, then that opposing view is worth considering. This does seem to be true of catastrophic global warming, so shutting down the conversation on the grounds that opposing views aren’t worth considering does not seem warranted.
Given the Duhem-Quine thesis, coming up with a single falsification criterion for catastrophic global warming may not be possible or even desirable (though some have tried). But it does mean that much more work needs to be done to better understand all the sub-theses that constitute the grounding for the macro thesis, to determine the falsification criteria for those sub-theses, and to show how they contribute to supporting the macro thesis. Regardless, for any complex theory like global warming, we must at least adopt a position of defeasibility (the idea that we could be wrong about what we believe) and be open to other points of view. This is just good epistemology.
It’s Snowing so the Planet Can’t be Warming!
In what follows, I go deeper into each of the points made above. While I attempt to explain the philosophical jargon, and use examples to make ideas clearer, some background in philosophy will help.
Recently I came across two articles that called out the ideological nature of the “discussion” on climate change. I’ll ask that the reader focus less on the sources and more on the arguments they make. These articles are not scholarly by any means, and the sites and authors featured on them may easily be dismissed by people who disagree with them. But my focus is on a specific logical point the articles call out, not on the entire ideology or the place in the cultural taxonomy that the sites and authors hold. Before I get to the articles, I want to clarify which claims I want to focus on. I take it as a given that the climate is undergoing change. But I also take this as a trivial claim, since the climate is constantly undergoing change and has for as long as the planet has been around. I also take it as a given that humankind is a contributing factor because we must be—we’re a part of the global ecosystem, so I take it as uncontroversial that humans are impacting the planet to some degree. So the claim of anthropogenic climate change should be fairly uncontroversial. The claim this article is focused on is that humans are the cause of irreversible, catastrophic global warming (I’ll shorten this to CAGW). Much of the public rhetoric is about this more severe claim, and it is here I want to give some attention.
With that background, let me summarize the articles that got me thinking about this topic.
The first is by apparently noted “climate denier” Marc Morano. In his article “Al Gore: ‘Bitter cold’ is exactly what we should expect from the climate crisis”, he attempts to point out that regardless of what the weather is doing, Al Gore has a tendency to say, “that’s exactly what the climate science says should happen.” In the winter of 2017-2018, much of the eastern United States was socked in by cold weather and tons of snow. Other parts of the world experienced uncommon snow and cold, including a severe storm in Ireland and other parts of Europe in what was dubbed “The Beast from the East.” The obvious response by those who deny the critical nature of global climate change is to use the current weather as evidence that the planet is not suffering the ill effects of warming as the climate evangelists say it should. Morano cites a 2000 UK Independent article in which author Charles Onians claims that “snow is starting to disappear from our lives,” arguing that global climate change means Britons will experience fewer “white Christmases” and fewer “white Januaries and Februaries.” Morano then notes (sarcastically) how Al Gore cites climate scientist Michael E. Mann’s assertion that the cold, extreme weather is exactly what should be expected given the change in climate around the world. The implication is that regardless of what the weather is doing, climate change evangelists will claim that this is exactly what is expected.
The second article, “The Link Between Climate Change And Those Crazy Snow Day Photos On Your Instagram Feed” by Isaac Saul, also discusses the incredible snowfall that occurred in late January of 2017 and attempts to explain the phenomenon in light of global warming. Like the Morano article, Saul quotes Michael Mann (though apparently favorably) to explain how climate change is contributing to the heavy snowfall. Admirably, Saul also quotes another scientist, Ryan Maue of weather.us, who offers an alternate view and cites competing data that doesn’t mesh with Mann’s assertions. Saul then writes, “Presented with Maue’s commentary, Mann pushed back. ‘Maue is a climate change contrarian and I find it difficult to take him seriously,’ Mann said in an email to A Plus. ‘A typical refrain from such types is, “oh but it is so complicated”, i.e. the standard trope that because we don’t know everything, we know nothing. The theory and data here are actually pretty clear.’” We’ll see the importance of this shortly, but it’s worth noting that Mann appears to be so confident of the truth of the accepted climate science that he can dismiss Maue with a couple of informal fallacies and does so without appearing to blush.
As I read these articles (and others like them), I was reminded of criticisms I read in college of religious believers, criticisms which argued that there are no conditions under which a religious belief could be shown to be false, and so religious belief is not rational. Like many ideologies, a particular perspective on climate change has become cultural dogma, and holding even a moderately different position is met with something nearing intolerance in polite circles. I think this is unfortunate and leads to destructive behaviors that damage our cultural conversation. These bad behaviors show up in a lot of areas. I’ve mentioned religion, but you see the same kind of dogmatism in conversations about gender, sports, certainly in politics, and even in technology. In each case we see the same bad behaviors as a result. Say what you will about climate deniers (or atheists, or gender traditionalists, or whatever), the minority voice is rarely tolerated. This has led to a kind of rhetorical protectionism. In other words, the dogma itself has created a hedge around even the possibility of the accepted theory being falsified. The criterion that those critics demanded of religious belief is called “falsification.”
What is Falsification?
Falsification is a property that applies to a theory such that, if the theory is rational (formed on or supported by reasons and evidence that demonstrate its truth), there should be clear criteria by which the theory can be shown to be false by showing how the supporting rationale is defeated or undermined. That is, analysts of a theory should have, at least in principle, a clear idea of the truth conditions on which the reasons that support the theory are based. If those conditions are not met, or are somehow shown not to support the theory, then either the theory is not based on evidence and argument or the evidential and argumentative grounding for the theory doesn’t support it in the right way. For my purposes, this doesn’t mean that the belief can be dismissed, as there are a lot of relevant beliefs that are not based on evidence and argument. What is important is that many beliefs are presented as rational even though they lack clear criteria for falsifiability, and that should call their rationality into question.
For example, if a scientist claims that objects always fall to the earth in a vacuum with an acceleration of 9.8 meters per second squared, and she bases this claim on experiments that demonstrate this fact, she may come up with the following falsification criterion: if some number n of subsequent experiments performed by other scientists are not able consistently to demonstrate the same rate of descent using the exact same conditions used in her experiments, then objects do not always fall to the earth in a vacuum at 9.8 meters per second squared.
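To make the shape of such a criterion concrete, here is a minimal sketch in Python. It is my own illustration, not drawn from any of the sources discussed here; the tolerance value and the required fraction of agreeing replications are assumptions I made up for the example.

```python
# A minimal, illustrative sketch of an explicit falsification check for the
# free-fall claim. Assumption: each replication reports a measured
# acceleration in m/s^2; the tolerance and threshold are made up.

EXPECTED_ACCELERATION = 9.8   # m/s^2, the value the theory predicts
TOLERANCE = 0.05              # hypothetical allowance for measurement error

def replication_agrees(measured: float) -> bool:
    """A single replication agrees if it falls within the tolerance band."""
    return abs(measured - EXPECTED_ACCELERATION) <= TOLERANCE

def theory_falsified(replications: list[float], min_agreeing_fraction: float = 0.95) -> bool:
    """The stipulated criterion: if replications fail to agree consistently,
    the claim is treated as falsified."""
    if not replications:
        return False  # no evidence either way
    agreeing = sum(replication_agrees(m) for m in replications)
    return agreeing / len(replications) < min_agreeing_fraction

# A consistent run of replications does not trigger the criterion...
print(theory_falsified([9.81, 9.79, 9.80, 9.78, 9.82]))  # False
# ...while a run of wildly divergent measurements does.
print(theory_falsified([9.81, 9.79, 11.2, 6.5, 8.0]))    # True
```

The point is not the particular numbers but that the theorist states, in advance, exactly what pattern of evidence would count against the claim.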
It’s worth noting that on this view of falsification, for a truth claim to be subject to falsification, the evidential grounding for the belief need not be reproducible. It just has to be grounded on evidence or arguments that can be shown to be false. There are myriad theories in science where a lack of reproducibility might falsify the theory (or at least call either the theory or the method used to ground it into question), but this need not be the case (see “Falsification: How does it relate to reproducibility” by Brian D. Earp for a perspective on this topic). This point is especially important for the subject of climate change.
A widely read work on falsification in philosophy is the (very) short paper “Theology and Falsification” by Antony Flew, first published in 1950 but reprinted many times since. While the focus of this paper obviously is religious belief, the principles of falsification can be applied to any subject whatever, and it is this broad application which interests me here. Flew defines the essence of his argument with this statement:
For if an utterance is indeed an assertion, it will necessarily be equivalent to a denial of the negation of that assertion. And anything which would count against the assertion, or which would induce the speaker to withdraw it and to admit that it had been mistaken, must be part of (or the whole of) the meaning of the negation of that assertion: And to know the meaning of the negation of an assertion, is as near as makes no matter, to know the meaning of that assertion. And if there is nothing which a putative assertion denies then there is nothing which it asserts either: and so it is not really an assertion.
Flew’s contention in this article is that any assertion of truth has, embedded in it, a denial of the negation of that truth. If Miranda asserts A: “Water consists of two hydrogen atoms and one oxygen atom,” she is, according to Flew, also asserting B: “It is not the case that it is false that water consists of two hydrogen atoms and one oxygen atom.” If she says that she is asserting A but not asserting B, Flew says no sense can be given to the claim that she is asserting A. For to make a claim of truth is also to deny the falsehood of that claim. In replying to two responses to his paper, he attempts to clarify his position:
The challenge, it will be remembered, ran like this. Some theological utterances seem to, and are intended to, provide explanations or express assertions. Now an assertion, to be an assertion at all, must claim that things stand thus and thus; and not otherwise. Similarly an explanation, to be an explanation at all, must explain why this particular thing occurs; and not something else. Those last clauses are crucial. And yet sophisticated religious people-or so it seemed to me-are apt to overlook this, and tend to refuse to allow, not merely that anything actually does occur, but that anything conceivably could occur, which would count against their theological assertions and explanations. But in so far as they do this their supposed explanations are actually bogus, and their seeming assertions are really vacuous (bold emphasis mine).
The phrases bolded above are, I think, the essence of a lack of falsifiability. In fact, the noted philosopher of science Karl Popper, in a well-known and widely cited paper, attempted to distinguish “between science and pseudo-science” and used falsifiability as the essence of that distinction. In his 1963 paper “Science as Falsification” he offers seven “conclusions” by which we may evaluate a scientific theory as truthful or acceptable. All seven are worth noting here, but we’ll concentrate on two of them:
1. It is easy to obtain confirmations, or verifications, for nearly every theory — if we look for confirmations.
2. Confirmations should count only if they are the result of risky predictions; that is to say, if, unenlightened by the theory in question, we should have expected an event which was incompatible with the theory — an event which would have refuted the theory.
3. Every “good” scientific theory is a prohibition: it forbids certain things to happen. The more a theory forbids, the better it is.
4. A theory which is not refutable by any conceivable event is non-scientific. Irrefutability is not a virtue of a theory (as people often think) but a vice.
5. Every genuine test of a theory is an attempt to falsify it, or to refute it. Testability is falsifiability; but there are degrees of testability: some theories are more testable, more exposed to refutation, than others; they take, as it were, greater risks.
6. Confirming evidence should not count except when it is the result of a genuine test of the theory; and this means that it can be presented as a serious but unsuccessful attempt to falsify the theory. (I now speak in such cases of “corroborating evidence.”)
7. Some genuinely testable theories, when found to be false, are still upheld by their admirers — for example by introducing ad hoc some auxiliary assumption, or by reinterpreting the theory ad hoc in such a way that it escapes refutation. Such a procedure is always possible, but it rescues the theory from refutation only at the price of destroying, or at least lowering, its scientific status. (I later described such a rescuing operation as a “conventionalist twist” or a “conventionalist stratagem.”)
All these principles are worth remembering, but principles 4 and 5 establish a criterion of falsifiability in science and apply Flew’s principles to this discipline. While Popper doesn’t say it explicitly, the implication of principle 4 is that any scientific theory should have a clear criterion or criteria by which it can be falsified. Principle 4 cannot be satisfied without this, for it wouldn’t be possible to refute a theory if one is not clear on which event (or events) would have to happen for it to be falsified. The same is true for principle 5: tests require conditions under which the test passes and fails. A lack of such conditions, says Popper, changes a theory from scientific to non-scientific.
He’s clear that falsifiability need not be more than in-principle falsifiability, with clear conditions stipulating when a theory fails the test of truth even if those conditions cannot be met for some reason. He writes, “Einstein’s theory of gravitation clearly satisfied the criterion of falsifiability. Even if our measuring instruments at the time did not allow us to pronounce on the results of the tests with complete assurance, there was clearly a possibility of refuting the theory.” And this is really all that is required. Having the criterion keeps the theorist honest—it means the scientist has skin in the game: he or she knows the theory is falsifiable and has provided the conditions under which its truth value can be shown to be false.
Soothsaying or Science?
Popper warns that an over-reaching desire that one’s theory be true can cause it to devolve into non-science. Theorists can become so misled by psychological desire and radical investment in a theory that they become “quite unimpressed by any unfavorable evidence.” While this may not be true of climate science in the halls of research universities or laboratories, it seems at least at risk of becoming so in the public discussion of the climate. He famously relates what might be an analogous situation in politics:
The Marxist theory of history, in spite of the serious efforts of some of its founders and followers, ultimately adopted this soothsaying practice. In some of its earlier formulations (for example in Marx’s analysis of the character of the “coming social revolution”) their predictions were testable, and in fact falsified. Yet instead of accepting the refutations the followers of Marx re-interpreted both the theory and the evidence in order to make them agree. In this way they rescued the theory from refutation; but they did so at the price of adopting a device which made it irrefutable. They thus gave a “conventionalist twist” to the theory; and by this stratagem they destroyed its much-advertised claim to scientific status.
The idea is clear: when one moves from a scientific mode to a ‘soothsaying’ mode, that person will attempt to re-interpret evidence that contradicts their theory as evidence that confirms it. It’s a move from persuasion by argument and evidence to persuasion by psychological and rhetorical jerry-rigging. Flew argues that this shouldn’t be allowed in religious truth claims, and it certainly shouldn’t be allowed in the sciences (as Popper establishes).
For many scientists, these principles should seem obvious, and there are a great many scientists who regularly operate under most if not all of them. In fact, scientists as a rule proudly proclaim the distinctiveness of their discipline largely based on their tenacious adherence to these principles. (For a good example of this, listen to Richard Dawkins’ engaging anecdote of an experience he had in college where these principles were exemplified (section ends at 14:17). Dawkins is visibly moved as he relates the story.) What about climate science? Are climate scientists still solidly working in the ‘scientific mode’ or have many moved into ‘soothsaying mode’, seeking to justify the idea that the planet is warming beyond repair at any cost? Well, this is philosophy, so we have a responsibility to say: the answer is not so simple.
The End of the World as We Know It
I’m making a distinction between climate science as a science and the public presentation of climate science, which may have very little to do with the science itself. This article focuses on the latter, and though many rhetoricians present their claims as being grounded on climate science (and many perhaps even believe their claims are in fact so grounded), I’m exploring whether the rhetoric reflects any underlying science or whether it has evolved into something disconnected from it. In just about any science, the move from the data that support specific scientific claims to the public or political representation of those data involves a fair degree of translation at best, and often a significant degree of interpretation and applied meaning.
Here’s an example of this. In the late 90s, stories surfaced about a little-known concern in some corners of computer science: a significant number of computers around the world suffered from some version of a class of problems related to the way computers store date information (known as the Y2K bug). By the end of 1999, the hype had far exceeded the science, with many preparing for disaster: power grids shutting down, banking systems collapsing, and far worse—all ostensibly based on good science. Many billions of dollars were spent on Y2K ‘fixes’ alone, not counting all the survival gear and preparatory equipment purchased by individuals and businesses. Books were even published predicting the end of civilization due to the bug, and the rhetoric, fueled by the popular press, permeated every aspect of Western society. The new year came in with only minor problems, and the whole issue was regarded as a massive hype campaign even though the underlying science had identified some real issues that needed to be addressed. Examples like these may be part of why the issue of falsification has less to do with the science itself and more to do with the popular rhetoric that uses aspects of climate science to drive an agenda, create personal or corporate wealth, or serve a host of other motivations that have little to do with being true to the actual claims the science is making.
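For readers unfamiliar with the underlying defect, here is a minimal sketch (my own, not from any Y2K-era source) of the class of bug at issue: software that stores only the last two digits of the year and assumes the century. The function names and the license-expiry scenario are hypothetical.

```python
# Illustrative only: the classic two-digit-year assumption behind the Y2K bug.
from datetime import date

def parse_two_digit_year(yy: int) -> int:
    """The buggy assumption: every two-digit year belongs to the 1900s."""
    return 1900 + yy

def years_until_license_expires(expiry_yy: int, today: date) -> int:
    """Naive date arithmetic built on the two-digit assumption."""
    return parse_two_digit_year(expiry_yy) - today.year

# On 1999-12-31, a license expiring in "00" (intended as 2000) looks
# 99 years overdue rather than one year away.
print(years_until_license_expires(0, date(1999, 12, 31)))  # -99, not +1
```

The real bug was genuine; the question, then as now, was how the underlying technical fact got translated into public predictions of catastrophe.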
Let’s now return to the topic at hand. How do Popper and Flew and the idea of falsification apply to the science of climate, particularly CAGW?
Contra Popper
In a 2014 article in the popular philosophy magazine Philosophy Now, Dr. Richard Lawson, a self-described “environmental activist,” addresses this question of falsifiability. In his “Climate Science and Falsifiability” he takes up Popper’s criteria head-on and argues that they give us “a key” that can help us unlock the truth about climate change. Unfortunately, Lawson chooses not to directly address the falsifiability of the positive claim that CAGW is happening. Rather, he characterizes skeptics of CAGW as asking for definitive proof of the phenomenon, which, he says (rightly, I think), does not exist in science; he then attempts to turn the tables, set up a falsifiability criterion for the skeptical argument, and show how the evidence meets that criterion.
A better and more interesting dialogue on the subject is taking place at Skeptical Science. The site, run by John Cook, a research assistant professor at the Center for Climate Change Communication at George Mason University, has a self-described mission “to explain what peer reviewed science has to say about global warming.” It does not claim to take an objective stance on CAGW (the site’s tagline is “Getting skeptical about global warming skepticism”), but it does position itself as entirely open to going where the science leads. In a discussion thread titled “Global warming theory isn’t falsifiable“, the respondents tackle this skeptical response, and while there is some banter, the majority of the participants seem to be engaging in a reasonable discussion.
While the site’s mission appears to be aimed at making the minutiae of climate science accessible to a general, non-scientific audience, the discussion does get quite deep into aspects of the science that only professionals can reasonably parse, and much of it is beyond the expertise of even the most ardent non-professional—including me. Still, there are some posts that shed light on the question of falsifiability and are important to consider in this paper. One post, by author Tom Curtis, is critical of the coarse-grained version of falsifiability I’ve presented here, and his criticism is instructive (based on his name alone, I’ll use the masculine pronoun to refer to this author, though I’ll acknowledge that I know nothing about this person and there are no links I could unearth to find out more). In this post, the author claims that Popper’s falsifiability criterion as popularly conceived is “of limited use in science, despite its popularity.” The problem, he notes, is that in a complex theory like CAGW, there are too many background assumptions and inter-dependent hypotheses to consider to make a broad claim about the falsifiability of the theory as a whole.[2] Instead, we must consider the Duhem-Quine Thesis in cases like these.
The thesis rests on the idea that a complex hypothesis may not stand or fall on direct empirical evidence alone but is the product of a ‘bundle’ of hypotheses that each have their own confirming empirical evidence. The thesis belongs to a family of concepts grouped under the heading of scientific underdetermination. Kyle Stanford, writing on the subject for the Stanford Encyclopedia of Philosophy, makes a distinction between holist underdetermination and contrastive underdetermination. We encounter holist underdetermination when we’re unable to test a single hypothesis within a bundle in isolation, and so cannot attribute the failure of the macro-hypothesis to the failure of any particular sub-hypothesis in the bundle. We can use the Y2K hypothesis as an example (this is a made-up scenario). Suppose the macro-hypothesis is: at midnight, January 1, 2000, many critical computer systems will fail due to their inability to deal with the two-digit year, causing widespread failure of critical services. A critic may claim that to validate or invalidate that claim, one must understand the evidence supporting a sub-thesis that is part of the bundle of theses supporting the macro-thesis: date-time values in most critical computer systems are computed using the atomic clock at the National Institute of Standards and Technology in Boulder, Colorado. This sub-thesis may be impossible to determine with any accuracy due to the large number of computer systems that would need to be tested. When the macro-thesis fails (as it did), a supporter of the Y2K thesis may blame the inability to test the sub-thesis for the failure. Since the sub-thesis can’t be determined with any accuracy, it’s not possible to say with any confidence why the macro-thesis failed. So the macro-thesis is underdetermined.
Contrastive underdetermination is “quite different,” says Stanford: it is the claim that for any complex theory, there may be other theories that are confirmed using the same evidence—and those alternates don’t have to agree with the conclusion of the original theory. Using our Y2K example again, I remember hearing on the radio in late 1999 a theorist likening the problem to a pool full of marbles, where each marble represented a computer that needed to be fixed, and stressing the impossibility of addressing each and every marble before the year 2000 rolled around. That failure would result in catastrophe. Ostensibly the evidence he had for the claim was that the millions of marbles—computers—used operating systems with the same bug, so they’d all be subject to the same problem. My experience as a computer engineer immediately suggested another thesis: not all marbles are created equal. My analogy was that the pool is filled with marbles of all sizes: some are very big and others extremely small. The big ones are the most critical systems and the small ones are non-critical. The problem computer programmers faced was to fix the biggest ones immediately and address the others over time. This is what ended up happening: the most critical systems with the bug were patched well before the new year rolled around, and other systems were patched after the new year without affecting critical services. The point is that both theories used the same evidence but were diametrically opposed in terms of their predictions, as the sketch below illustrates.
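To make the contrast concrete, here is a toy sketch (again my own, not from the article) of how the same inventory of buggy systems can support two opposing predictions depending on the model brought to it; all system names, counts, and thresholds are made up.

```python
# A toy illustration of contrastive underdetermination: the same "evidence"
# (an inventory of buggy systems) feeds two models with opposite predictions.

systems = [
    {"name": "power grid control", "buggy": True, "critical": True},
    {"name": "bank settlement",    "buggy": True, "critical": True},
    {"name": "office calendar",    "buggy": True, "critical": False},
    {"name": "video store POS",    "buggy": True, "critical": False},
]

# Model 1 (the radio theorist): every buggy system counts equally, and there
# is no time to fix them all, so catastrophe follows.
def model_1_predicts_catastrophe(systems, fixable_before_deadline=2):
    return sum(s["buggy"] for s in systems) > fixable_before_deadline

# Model 2 (the triage view): only unfixed *critical* systems matter, and the
# critical ones get patched first.
def model_2_predicts_catastrophe(systems, fixable_before_deadline=2):
    critical_buggy = [s for s in systems if s["buggy"] and s["critical"]]
    return len(critical_buggy) > fixable_before_deadline

print(model_1_predicts_catastrophe(systems))  # True: 4 buggy systems, only 2 fixable
print(model_2_predicts_catastrophe(systems))  # False: the 2 critical ones fit the budget
```

Neither model is refuted by the shared inventory alone; what separates them is a background assumption about which systems matter.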
Stanford is clear that we really don’t even need a real alternate hypothesis to label a theory as contrastively underdetermined. We just have to agree that in complex theories, it’s possible that alternate theories can be justified on the same evidence. He writes,
Notice, for instance, that even if we somehow knew that no other hypothesis on a given subject was well-confirmed by a given body of data, that would not tell us where to place the blame or which of our beliefs to give up if the remaining hypothesis in conjunction with others subsequently resulted in a failed empirical prediction… even if we supposed that we somehow knew exactly which of our hypotheses to blame in response to a failed empirical prediction, this would not help us to decide whether or not there are other hypotheses available that are equally well-confirmed by the data we actually have.
This is the point of underdetermination. But even if we do accept that any complex theory is underdetermined in this way, does this entail that we must be skeptical about the macro theory? I take this up in a roundabout way below, but the short answer is no. We do seem to be within our epistemic rights to hold to a theory that appears to be justified by all the available evidence at a given time: “until we are able to actually construct an empirically equivalent alternative to a given theory, the bare possibility that such equivalents exist is insufficient to justify suspending belief in the best theories we do have.” But it does have implications for falsifiability, particularly when we’re dealing with complex theories like those supporting climate science. The implication is this: we should at least be willing to consider alternative theories (or the possibility of the existence of such theories) that might fit the available evidence. The history of science is full of cases of disconfirmation in which a theory is replaced by an alternate, sometimes radically different, theory; as Stanford puts it, “the history of scientific inquiry itself offers a straightforward rationale for thinking that there typically are alternatives to our best theories equally well confirmed by the evidence, even when we are unable to conceive of them at the time.”
Alternate Points of View Are Not Worth Considering
This semi-skeptical implication doesn’t sit well with some philosophers and may provide us with another reason why falsification may be absent from the conversation. Let’s explore an important rhetorical position that bears serious consideration. A very reasonable response to the question of whether we should consider alternate theories is that some alternates are so preposterous that ‘falsification criteria’ no longer apply. The belief that the earth is flat, for example, has been so conclusively shown to be irrational that reasonable people are no longer obligated to give it equal time as a viable theory. This is true. Not every alternate theory deserves a seat at the table. Many theists and atheists believe the same about theism, and many ardent evolutionists (Richard Dawkins comes to mind as someone who has stated this publicly) believe the same about evolution. The claim seems fair and appropriate about many topics. But I also don’t believe one can, by editorial fiat, simply claim that some issue has achieved this level of “scientific dogma” and so alternative views no longer need to be considered. Is it settled that there is (or is not) a god? Is it absolutely irrefutable that there is (or is not) life on other planets? Is the idea that consciousness is (or is not) fully reducible to brain activity really the only reasonable view we can take? Is it finally settled that anthropogenic global climate change is (or is not) irreparably damaging the planet?
In order to provide some guidance on this problem, I’m briefly going to turn to ideas developed by Robert McKim. As with Flew, McKim developed his ideas in the context of religious belief, but I think they can be applied more generally to any subject about which there are differing views. In part II of his book Religious Diversity and Religious Ambiguity, McKim attempts to address how people with differing views on religion ought to discuss the topic given that the world is religiously diverse and ambiguous. In the context of that discussion, he develops a theory of the ‘ethics of inquiry’ in which he establishes two principles. These principles (which he calls the E and T principles) provide the framework for how one ought to think about the ideas of a person who holds a differing viewpoint. Here are the principles:
The E-principle: Disagreement about an issue or area of inquiry provides reasons to think that each side has an obligation to examine beliefs about that issue.
The T-principle: Disagreement (of the sort under discussion) about an issue or area of inquiry provides reason for whatever beliefs we hold about that issue or area of inquiry to be tentative.
These two principles apply within certain constraints. One constraint is that the beliefs for which these principles hold have to be beliefs that are held by people with integrity and who possess the following virtues:
wise people who think carefully and judiciously, who are intelligent, clever, honest, reflective, and serious, who avoid distortion, exaggeration, and confabulation, who admit ignorance when appropriate, and who have relied on what have seemed to them to be the relevant considerations in the course of acquiring their beliefs (p. 129).
Together, the two principles constitute what McKim calls The Critical Stance. He says that the critical stance isn’t a plausible option for all people. Rather, the stance applies only to those he says are privileged and would be able to see the thesis as plausible. A privileged person is one who has the following properties: being adult, well educated, well informed, and having opportunities for reflection; the category also includes those who are not in this situation but cannot reasonably be held responsible for not being so (p. 149).
The idea here is that reasonable people, when considering their own viewpoint and the viewpoint of others, should determine whether alternate points of view are held by ‘privileged persons.’ If the answer is yes, one should adopt the critical stance. If not, then the viewpoint may not warrant further inquiry. Of course, determining whether there are privileged persons that hold alternate points of view can be difficult but his argument is that we avoid dogmatism and ideological entrenchment by making the effort to make that determination.
I think McKim’s criteria are reasonable and, while there can be debate about what he presents here, for the purposes of this article I accept his criteria as a good measure of whether a position on a topic should be considered by reasonable people. So where does climate science land on McKim’s criteria? Is there enough controversy on the subject to warrant a conversation? On the surface, there do seem to be enough alternative viewpoints to the accepted narrative to warrant one. Even the literature on this makes significantly opposing claims. Wikipedia has assembled a list of scientists who are in the camp of (according to the title of the article) “opposing the mainstream scientific assessment of global warming.” For those who mistrust Wikipedia (or, for very good reasons, need other sources), there are others: the National Association of Scholars article “Estimated 40 Percent of Scientists Doubt Manmade Global Warming”, a 2012 article in Organization Studies, and this one by the conservative US magazine National Review. The most extensive list of “skeptical” views on the accepted narrative that I found has been compiled by Pierre L. Gosselin at NoTricksZone.com. Gosselin has compiled hundreds of academic papers from 2014 to the present that discuss alternate narratives on CAGW (see the right sidebar for “Skeptic papers” at NoTricksZone for more information).[3]
Alternatively, the “97% of climate scientists agree” claim has been around for years, and papers appear with a fair amount of frequency attempting to justify it. A paper published on IOPscience seems to summarize a lot of the research being done to support this number. It’s not my purpose to explore the differing claims or to settle whether McKim’s criteria have conclusively been met for this issue. I’ll leave that to the reader to sort out for himself or herself. I do believe that McKim’s criteria, if not exhaustive, are good enough to establish when an open dialogue needs to take place, and, for the sake of argument, I will claim that there exists at least one climate scientist who fits McKim’s description of a privileged person and who does not agree with the consensus. This is enough to justify, in my view, not equating alternate views on climate science with pseudo-scientific claims like flat-earth theory.
Immune to Falsification
So, are there falsification criteria for CAGW? If we take the Duhem-Quine thesis seriously and apply it to the problem of CAGW, then it should be clear that this question may be a non-starter. We wouldn’t be able to find a “magic bullet” criterion or set of criteria that would falsify CAGW. Rather, we’d need to develop falsification models for some of the myriad sub-theses that support the general claim that catastrophic global warming is occurring on this planet. The body of research on climate change is massive, and sorting through that research to find what we’re looking for is beyond the scope of this paper (and given that climate science is not my area of expertise, I’ll ask my readers who are better read in this area to comment and point me to papers or research where such criteria are discussed).
However, I did some modest poking around on the topic to see if anyone has attempted to provide such a criterion, and the search did turn up some interesting results. Very few of them articulated a falsification criterion that would be convincing to anyone but the already-convinced. But one exemplar is worth mentioning here. A user of the popular question and answer site Quora posted this question: “A scientific theory or hypothesis must have falsification criteria to be considered valid. What is the falsification criteria for dangerous global warming?” I like the way the question is worded because it fits squarely within the scope of this paper. The questioner is not asking about climate change itself but about global warming of the “dangerous” or catastrophic kind. The respondents range from a self-described “not a physicist” to a “climate blogger” to a “philosopher” to someone who has worked in climate science for years (one respondent even posts a helpful link that helps you charter a boat in Miami!). As of this writing, only one of the respondents links to what can be called a credible scientific site that may actually answer the question.
The link points to a PDF published by NOAA (the National Oceanic and Atmospheric Administration, an agency of the US Department of Commerce). The paper, while dated (it references data from 2008), does appear to establish a kind of falsification criterion: simulations should show a lack of warming for 15 consecutive years in order to countermand expectations based on current assumptions. The paper itself is too interdisciplinary for the non-professional, so I’m unable to sort out why the scientists who wrote it settled on this criterion. But that’s not really relevant. The criterion does allow for cooling trends of up to 14 years in a row (at least in the simulations) while still enabling scientists to argue that AGW is occurring; whether CAGW is occurring is another matter, of course.
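As a rough illustration of how a criterion like this could be operationalized, here is a short Python sketch. It is my own construction, not code from the NOAA paper; the 15-year window and the “non-positive linear trend” definition of a lack of warming are assumptions I’ve made for the example.

```python
# Illustrative check of a "15 consecutive years without warming" criterion
# against a series of annual temperature anomalies. Window length and the
# definition of "no warming" (non-positive OLS slope) are assumptions.

def trend_per_year(values: list[float]) -> float:
    """Ordinary least-squares slope of values against their index (years)."""
    n = len(values)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(values) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, values))
    var = sum((x - mean_x) ** 2 for x in xs)
    return cov / var

def criterion_met(annual_anomalies: list[float], window: int = 15) -> bool:
    """True if any run of `window` consecutive years shows no warming trend."""
    for start in range(len(annual_anomalies) - window + 1):
        if trend_per_year(annual_anomalies[start:start + window]) <= 0.0:
            return True
    return False

# Hypothetical data: a steadily warming series never triggers the criterion...
warming = [0.01 * year for year in range(30)]
print(criterion_met(warming))   # False

# ...while a series with a long flat stretch does.
flat = [0.01 * year for year in range(10)] + [0.1] * 16
print(criterion_met(flat))      # True
```

Whether this is anything like how the paper’s authors would apply their criterion is exactly the kind of question a non-specialist like me cannot settle; the sketch only shows that the criterion, once stated, is the sort of thing that can be checked.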
Is this criterion adequate? Is this a scenario “97% of climate scientists” would agree with? What would the Duhem-Quineans say about this? These are questions I can’t even begin to answer. They’re very interesting questions for sure, but perhaps even more interesting are these: If the criterion were met, if the planet showed a cooling trend for the next 15 years, would proponents of CAGW pack up their tents and go home? Would scientists then study “global cooling” and warn of a new impending ice age? (Perhaps they’d encourage us to ditch our electric cars, buy filament light bulbs, and eat lots more beef!) Would CAGW proponents argue that this cooling trend is “exactly what the climate science says should happen”? No doubt we’d enter a new age of impending doom voiced by CAGC (catastrophic anthropogenic global cooling) proponents and have something new to fear. And, in like fashion, there would still be ardent “climate deniers” attempting to show how the planet actually is warming!
This is not cynicism. It’s a recognition that ideas rarely seem to remain in a state of underdetermination, especially when those ideas can be used to exercise power, garner funding, or capture the public interest. In the public square, ideas at the wings of the ideological spectrum seem to capture the most interest and end up in the news cycles. There are a lot of reasons for this. I’ll call out a few.
Memes and the Propagation of Ideas
Richard Dawkins has developed a science of “idea propagation” called memetics. In it, he (and others, notably Susan Blackmore, who has done some of the more solid work in this area) attempts to articulate a theory of how information is disseminated by non-genetic means. A meme, says the Oxford English Dictionary, is “An element of culture that may be considered to be passed on by non-genetic means.” Daniel Dennett calls it “a packet of information,” while Dawkins himself defines memes as “patterns of information that can only survive in brains or the artificially manufactured products of brains—books, computers, and so on.” Dawkins argues that a given meme’s survival and propagation potential are subject to Darwinian pressures, as genes are, but that many memes do not survive on “absolute merit.” That is, a meme may not survive because it provides rational value but may propagate because it provides practical, emotional, or psychological value. In these cases, says Dawkins, the more irrational a meme is, the better the chance it may have of surviving.
The politics and rhetoric surrounding climate science (regardless of the position one takes) need not be rational for exactly these reasons. When the rhetoric functions in a memeplex—a collection of mutually supporting memes—it can provide one with a cause or purpose that gives meaning to life, something we all need. This was the point of the commentary about Al Gore’s position on climate change that I mentioned at the beginning of this article. We can find memeplexes like these in just about any area of life. But once entrenched in our psychology, they become part of the fabric of who we are and grow resistant to their antibodies: falsification criteria.
Similarly, ideas can be subject to what economists call a “network effect.” According to Wikipedia, “when a network effect is present, the value of a product or service increases according to the number of others using it.” Applied to cases like the one we’re considering, an idea or memeplex can gain power simply through the sheer number of people who believe it, as well as through all the investments and commitments people have made based on their belief in it. As with an individual meme, network effects need not center on products or ideas that have absolute or even rational merit. Older readers will recall the infamous technological war between Sony’s Betamax videotape format and JVC/RCA’s VHS format. Betamax was considered by many to be a superior technology. But factors like recording length and cheaper players enabled VHS to propagate more quickly, and VHS “won” by sheer numbers.
A similar effect is playing out with the popular computer programming language JavaScript. Author Anil Dash has considered the network effects behind the popularity of JavaScript, arguing that it may become the de facto programming language simply because of the sheer number of programmers and services that use it—even though other languages are arguably superior. In these cases, falsification arguments never come into play because the basis for adopting a technology or idea may have little to do with truth and falsity and more to do with practicality.
Finally, humans are prone to cognitive biases. These are systematic errors in thinking that can make a belief appear rational to the one holding it even though the way the belief was formed is flawed. There are many cognitive biases that contribute to the propagation of a memeplex or create a network effect, and these can make beliefs immune to falsifiability.
Epistemological Lessons
If you made it this far and are still reading, I hope you’ll indulge me a bit longer and allow me some brief epistemological observations. Falsification and related rational tools are an essential part, I think, of what it means to be a critical thinker. Any complex theory like CAGW, and any complex argument that is supposedly based on reason, should admit of falsification or something like it. While there is a purely logical aspect to this idea (an aspect, for example, that might be called on in an academic debate or analytic exercise), as truth seekers, I believe we have an epistemic duty to acknowledge that we do have cognitive biases, that we interpret facts to fit those biases, and that we are unable to know and understand everything that might be relevant when attempting to determine the truth of the matter. Looking for ways our cherished beliefs can be falsified forces us to acknowledge these things and can help us find those flaws in our thinking that lead to dogmatism and intractability in our beliefs.
This admission that we might be wrong about what we believe, what philosophers call “defeasibility,” is more essential now in this age of fake news, when we are bombarded with more information than we could ever hope to process. Defeasibility doesn’t need to end in radical skepticism or even a lack of conviction about what we hold to be true. But it does slow things down and force us to look at where we might be going wrong in how we formed, and why we hold, the beliefs we do. The truly critical thinker looks for the “privileged person” on the other side of the aisle to hear what he or she believes and what reasons they give for believing it. This is the ideal of “skeptical science” but also the ideal of critical philosophy.
Those disinclined to put any stock in these lessons might see them as Pollyannaish or wishful thinking. “The world doesn’t work that way!” might be the retort. Politics, marketing, and activism are the only ways things get done in this world, and rhetoric, not reason, is the modus operandi of the modern world. This is certainly true, but is it good? Ecology is a serious matter without a doubt. But political rhetoric uninformed by sound reasoning and fueled by fear- and guilt-mongering can actually undermine genuine progress in this and other important areas that concern life on this planet. Maybe one way to inform this question is to start by buying a copy of Leonard Nimoy’s video documentary The Y2K Family Survival Guide to learn how to avoid calamity when disastrous events are upon us. And we should all get it on VHS.
Addendum: My Take on the Politics of Climate Change
My position is that the climate is changing, but that it always has and always will, and that mankind is a contributing factor because we must be—we’re a part of the global ecosystem, so how could we not be impacting the planet? This point of view is not grounded in or influenced by common claims like “Over 90% of climate scientists agree that anthropogenic global warming is happening” (I don’t know what that means) or “It’s much warmer/colder in my state this season than it used to be” (which doesn’t constitute the type of evidence relevant to concerns about climate change). I also believe that being good stewards of the planet is a moral good regardless of whether the climate is changing. Good stewardship is a moral given for me and a principle we should apply to our management of a lot of things, the planet included. I also don’t believe fear and guilt are the right emotions to invoke to drive change toward better stewardship. Surfacing these emotions can have a powerful, short-term effect but doesn’t build the long-term behaviors that lead to better living. The degree to which humans are impacting the climate, whether that impact is reparable, and what should be done about it are questions outside my area of expertise. But if one takes a stewardship approach to this topic, having deep knowledge of the science of climate change becomes secondary. Using wisdom, common sense, and virtue is just good philosophy—and, in the end, good ecology.
[1] Before I comment on these articles, a brief qualification is in order. This is not an article about climate science. I’m not a scientist, and while I’ve done a fair share of reading on climate science (including scientific papers), I don’t pretend to know enough about the details to be even close to proficient enough to make a judgment. This article is about observing some of the logic being used to discuss climate science in popular literature, regardless of the position one takes. I do have a point of view, as do most people who care enough to think about the matter. I state my position at the end of the article for those who are interested.
[2] For a more in-depth treatment of the ideas Lawson proffers (and a critique of using the Popperian model to assist with this topic area), see the paper, “Why Popper can’t resolve the debate over global warming: Problems with the uses of philosophy of science in the media and public framing of the science of global warming” by David Mercer of the University of Wollongong. Mercer also outlines general methodological as well as social/legal problems with the way Popper’s principles are being used in climate science and communicated publicly. He notes, “[Controversial science policy debates in the United States] have cultural links to the AGWS [anthropogenic global warming science] debate and display some interesting similarities in the way Popper’s philosophy has proved to be flexible enough in practice to be used in politically expedient ways to support multiple competing positions.”
[3] For a more general critique of the philosophical methodology being used in climate science, I refer the reader again to the paper by David Mercer.
For Further Reading
What is Skepticism? An important first step in acknowledging the need for falsifiability criteria is a healthy skepticism about the views one holds on a given subject. In this article, Dr. Joseph H. Shieber examines what skepticism is and how it can be used in a healthy way to drive the inquirer toward more reasonable views. He provides a broad overview of the subject but also some practical advice on how to incorporate a skeptical stance into a more well-rounded perspective on ideas and knowledge.