Learning to “Think the Unthinkable”
Although the term “post-truth” was popularized by critics reacting to Brexit and the U.S. presidential campaign, the condition it names is deeply rooted in the history of Western social and political theory. Steve Fuller traces this lineage back to Plato, explores issues in theology and philosophy, and pays special attention to the Machiavellian tradition in classical sociology. The defining feature of post-truth is a strict distinction between appearance and reality that is never fully secured, which allows appearance to pass itself off as reality. The question is how to gain the greatest advantage: through rapid changes in appearance (the foxes’ strategy) or by stabilizing appearance (the lions’ strategy).
Tetlock’s earlier research on how to “think the unthinkable” is a useful starting point for understanding his approach to the limits of expertise. More important for our purposes, this research pushes experts in a decidedly post-truth direction by forcing them to treat counterfactual scenarios with the same causal and moral seriousness as actual cases. Specifically, when experts are asked to make precise judgments about alternative pasts or futures, they often must resort to what Tetlock himself calls “taboo scenarios,” which demand a trade-off between “sacred” and “secular” values. The sacred value violated in Tetlock’s scenarios is defined by the boundary surrounding the expert’s professional domain. The secular experimental situation forces the expert to cross this boundary, producing judgments that may prove humiliating even when they turn out to be correct. Afterward, procedures that Tetlock calls “moral cleansing” are often needed to restore the expert’s authority. This pattern appears in both foxes and, especially, hedgehogs, who tend to accuse the experimenter of impertinence or even outright deception for making them temporarily suspend the usual standards of evidential judgment that define, for example, the impossible and the inevitable.
Moral Cleansing and Taboo Scenarios
The original source for discussion of moral cleansing is Durkheim’s account of how societies divide space, and social life generally, into sacred and profane domains. It is no surprise, then, that in Tetlock’s first thought experiments devout Christians were asked to assign a monetary value to the killing of certain people, or to consider the psychodynamic consequences had Jesus been raised in a single-parent family. It is worth clarifying how Tetlock’s thought experiments provoke “taboo scenarios” in participants: they are asked to turn a difference in kind into a difference in degree, that is, to reduce a qualitative difference to a quantitative one. When historians or public intellectuals are lured into exploring possibilities outside their usual boundaries, the specificity of their subject begins to blur, and others placed in the same situation may come to seem just as expert. For example, as soon as theologians seriously consider putting a value on human lives, economists implicitly enter the conversation. If Jesus’s upbringing is taken seriously as an object of inquiry, psychiatrically minded interpreters inevitably join in, Albert Schweitzer among them.
A similar result awaits the ordinary believer who attributes too many human qualities—even in exaggerated form—to the divine. That’s why theological orthodoxy in Abrahamic religions has usually practiced analogical rather than literal theology. Without such semantic separation, the biblical claim that humans were created “in the image and likeness of God” can easily lead to theology ceding ground to sociology, as God then becomes a utopian projection, a superlative version of humanity, whose abilities we can gradually approach through collective effort. Thus, the secular nature of doctrines about human improvement and social progress is not so much due to their formal rejection of traditional religion as to their direct competition with it—for example, in secular salvation narratives developed by Comte’s positivism and Marx’s socialism.
Rationality, Responsibility, and the Secular Mind
Overall, we could say that rationality becomes a truly secular mental process once we learn to make value trade-offs. Decisions once considered, in Tetlock’s terms, “taboo” (sacred vs. secular) or “tragic” (sacred vs. sacred) become simply “efficient.” In short, no goal is unconditional; everything has its price. Historically, this view has been associated with a heightened sense of personal responsibility for the consequences of our actions.
In other words, reason no longer possesses us or imposes a predetermined outcome on our thinking; rather, we possess reason. An interesting precedent for this is the social history of the sublimation of violence: violence, once driven by animal instincts, became the measured use of force for a higher purpose.
Explanations, as well as justifications and condemnations, of outbreaks of violence in human history come easiest when such outbreaks are treated as isolated “events” with clear divisions between perpetrators and victims, as in the case of the Holocaust. However, much of this clarity is a product of retrospective illusion, shaped by narratives that privilege one side of the conflict. Violence is sublimated when such historical distortion is corrected by the very people who commit the violence. For example, in Nazi Germany genocide unfolded gradually, as the cumulative, convergent effect of a program carried out mainly by indirect means.
In modern complex societies, awareness of a common meaning behind violence has been obscured by two factors: the bureaucratization of government (where each official, much like an expert, holds professional authority only within a strictly limited area) and the adiaphoric capacity of language, in Zygmunt Bauman’s sense, to designate people by their defining qualities rather than by proper names. Together, these features of rational discourse came to be seen as a recipe for systematic dehumanization once history judged that the Nazis would not be the victors. Had they won, their vague, or even banal (in the style of Eichmann), understanding of their atrocities would have been normalized, perhaps in the same way that extreme poverty in the developing world remains normalized today.
I mention this not to rehabilitate the Nazis, but to set the context in which exercises in taboo scenarios, like Tetlock’s, can unsettle commonly accepted moral intuitions and foster more developed moral reflection. Consider the spectrum of politically permissible responses to extreme global poverty, which could achieve, more subtly and diffusely, what the Nazis presumably aimed for: a well-designed form of neglect that subordinates the suffering of identifiable classes of victims to a greater promised good. In such cases one could manipulate variables, asking participants to consider, for example, at what point sweatshop labor becomes a labor camp, or starvation becomes torture. The point of such exercises is not to instill moral skepticism, but to break the illusion that the “immoral” is a clearly marked domain easily avoided with simple caution. Rather, in line with the existentialist principle of “dirty hands,” the potential for immorality exists in any judgment that is “arbitrary” in the strict sense of requiring a considered decision for which the individual is personally responsible. The result of such awareness may be an improvement in our very capacity for decisiveness, tolerance, and forgiveness.
Taboo Scenarios and Scientific Progress
Even beyond the inevitable controversies surrounding moral judgments, cultivating taboo scenarios can have explosive effects. The most obvious examples come from the history of science. Imagine Tetlock as a time traveler asking a 16th-century anatomist to consider how the liver might function under conditions that clearly threaten the normal operation of the human body. For the anatomist, this would be an invitation to a taboo scenario, as it would require weighing the secular value of pure intellectual curiosity against the religious and normative conditions of anatomy itself, which limited dissection and required viewing the liver as an integral organ, not just a piece of organic matter.
Of course, over the past 500 years, the study of the human body has become so secularized that today an anatomist would find such a question entirely routine. The “liver” is now defined functionally, not substantively—not as a thing with a certain appearance, consistency, structure, or origin, but simply as whatever reliably works in the body as a chemical processing device. Meanwhile, the body itself is seen as a self-sustaining system of potentially replaceable parts, increasingly custom-made and, perhaps, routinely so if stem cell research makes a breakthrough.
I raise this point because the history of anatomy helps us see what happened to the oldest forms of expertise after the scientific revolution of the 17th century. Their subjects lost their sacred boundaries as differences in kind became differences in degree, so that two states once considered entirely different (or even strictly opposed)—such as motion and rest, living and dead, earthly and heavenly, human and animal—are now seen as two poles of a continuum, studied with the same tools and even subject to experimental manipulation.
The critical side of the scientific revolution relied on thought experiments in much the same way as Tetlock’s principle of “thinking the unthinkable” does, since the goal was to provoke contradictory answers at the conceptual edges of existing expert knowledge, answers that could then serve as the starting point for fundamental rethinking, as in Galileo’s reimagining of the nature of motion.
From Sacred Boundaries to Unified Science
The ultimate result of this transformation—the shift from the permanent to the changeable—was the move from Aristotle’s view of reality as a patchwork of separate domains to Newton’s uniform view, where all objects are products of the same system of laws, understood through the same cognitive processes, albeit used differently in different contexts.
This opens the door to an experimental discipline one might call axioetiology, a neologism from two Greek roots meaning the study of the link between values and causes in human thought. The premise is that even the legitimacy of our basic ideas about epistemic authority, including those about science itself, requires an understanding of how they gained such authority, whose boundaries can be tested, if not undermined, by proposing counterfactual historical scenarios. As Kuhn argued, practicing scientists need an “Orwellian” (or “Whig”) understanding of their own history, in which all complexities and alternative past trajectories are smoothed over to yield a narrative explaining why the current research frontier is where it is and not elsewhere. Such implicit legitimizing narratives balance between two positions: the overdetermined “hedgehog” view of intellectual history and the underdetermined “fox” view. This normative balance is easily disrupted, as Tetlock shows in chapters 5–7 of his book, by turning to interpretations of the past that, while not part of the legitimizing archive, are still respected by professional historians.
It would not be hard to offer modern secular scientists a Huxley-style counterfactual scenario, asking them to imagine how Western science might have achieved the comprehensive results of modern physics if Darwin, rather than Newton, had been the first to be embraced by science. They would likely find this possible, but it would be interesting to see how they would fill in the details of an alternative history that does not deviate much from the main trends of the real one. In particular, where would the motivation come from to conceptualize all of reality within a finite system of mathematical laws, if we start from a grounded, egalitarian view of the natural world focused on species? Of course, a history with Darwin as the first scientist would still allow for the development of complex technologies, including mathematical methods for human survival across vast spaces and times. On this path the physical sciences might have reached heights comparable to those achieved by Chinese civilization; the Chinese, however, did not consider it reasonable, or even interesting, to subsume all their knowledge under a single intellectual rubric.
Of course, our hypothetical scientists might dismiss all this scenario-building as mere fantasy, since science is defined by its track record, whatever theological motives operated in its past. The problem is that our understanding of science’s track record is itself heavily shaped by bias in evaluating the past: we readily recall and even overpraise science’s empirical and practical successes while ignoring or underestimating its costs, failures, and outright disasters. Perhaps the next frontier in “thinking the unthinkable” will be to bring together experts and laypeople from various fields to compile a balance sheet for science. I suspect the final ledger will be so uneven that science may need to renew its ties with theology to maintain its legitimacy in the future.