In 1960, William Newcomb, a physicist at
the Lawrence Livermore Laboratory, concocted what might be called a greed paradox.
He asked people to imagine a game in which players are presented with two boxes.
One of them is transparent and always has $1000 visibly inside it, while the
other is opaque and contains either a cool million bucks or nothing at all.
Players have the choice of taking both boxes or only the opaque one, and either
way they get to keep all the money in the box or boxes that they choose. A simple,
yet always remunerative game.
Newcomb’s rules prescribe that every
participant has the following absolutely reliable information:
1. Whoever puts the money in the boxes is a prognosticator. If she predicts the next player will take only the million-dollar box, she will always put the money in there. If she expects the player to take both boxes, she will never put even a dime in the opaque box.
2. This prognosticator has always been correct. Historically, every prior player who took both boxes went home with only the paltry grand, while those who picked just the opaque box left rich. (Keep in mind that a million bucks was a very tidy sum in 1960.)
3. There is no jiggery-pokery of any kind. The predictor never cheats, for example by somehow sneaking money in or out of the million-dollar box after hearing the player’s choice. The money is either in there or it isn’t prior to the announcement of a participant’s decision, and no change is made afterwards. There’s neither any fancy technology nor any old-fashioned legerdemain in play: just incredible predictive accuracy.
The literature suggests that there are generally
two diametrically opposed attitudes maintained by those who consider Newcomb’s
brainchild. The logical, apparently scientific sort can always be expected to
take both boxes. I can almost hear one such scholar pleading, “How can it ever
make sense not to take both? The money is either in the second box or it isn’t;
my choice can’t affect the outcome. Obviously, we should take all the
money being offered rather than settle for only a portion of it!”
But the pragmatist will be more interested
in past results than in such scientific truisms. She wants to join the
millionaire club and may not care so much (at least not while playing) about
how the historical results could even be possible. She’s been assured both of the uniformity
of the past outcomes and of the integrity of those running the game. Concerns regarding
how such apparently baffling results have occurred will seem to her purely
academic. It may be mystifying that the choice to take all the money on
the table is never as financially rewarding as just opting for some of it,
but this may pale before her employer’s warnings about possible upcoming layoffs
and her recognition that mortgage payments will continue to be required by her
bank until hell freezes over.
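To make the pragmatist’s arithmetic explicit (a back-of-the-envelope sketch of my own; the accuracy parameter p is illustrative and no part of Newcomb’s original setup): if the predictor is right with probability p, the expected payoffs are

$E[\text{one box}] = p \cdot \$1{,}000{,}000$
$E[\text{two boxes}] = (1 - p) \cdot \$1{,}000{,}000 + \$1{,}000$

so one-boxing comes out ahead whenever p exceeds roughly 0.5005, and with the flawless record stipulated above the comparison is effectively a million dollars against a thousand. The two-boxer, of course, rejects this whole way of calculating: whatever is already sitting in the opaque box, taking both always yields exactly a thousand dollars more.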
Now, one might think that this entire debate
is a mere intellectual curiosity and doesn’t much matter in the real world. Presumably, after
writing up his intriguing little paradox, Professor Newcomb went back to his
radiation studies and then maybe listened to a radio show about why the California
Democratic Delegation chose not to support JFK at the 1960 National Convention;
or maybe he just caught a movie. After all, even if his thought experiment was cool,
the real world doesn’t actually contain games where causality goes on holiday like
that. Maybe Pascal’s Wager, according to which it’s silly not to believe in God
once you consider the incredible benefits promised to the devout, had long been
touted as a good reason for theism, but like million-dollar promises, that pie has also always been in the sky. Going along with something just because doing so gives you a kind of wonderful glow is naïve, nothing but a sucker’s game.
But what about when naivety seems to make
good sense? For it seems to do so, at least in the area of medicine. Not long
ago, a neighbor of mine, whose entire family had just gotten over bouts of COVID, told me that her doctor had prescribed ivermectin to each of them.
“Well,” I quipped, “it certainly seems to
do its job in keeping my dog free of heartworms.”
“Oh,” she shot back with a visible
eye-roll, “the human dosage is very different. Anyhow, that drug was a
game-changer for us.”
In the area of human health—in fact, human
well-being generally—the benefits of “going along with a trusting heart” have
been well known at least since Henry K. Beecher’s 1955 publication of “The
Powerful Placebo” in The Journal of the American Medical Association.
Beecher’s article described the first clinical studies of something caregivers
had known for centuries: sometimes the only causal connection that is necessary
for improvement of physical symptoms seems to be faith in the healthcare
provider. Here, too, one can take the same tack as those who paradoxically
eschew the second box based on nothing but the guarantee of an ostensibly sage
prognosticator. Given the power of “wisdom” (real or imagined), it’s no surprise
that items with exotic names like ivermectin and hydroxychloroquine took off as
plausible COVID remedies while the more mundane recommendations of bleach and
internal lights were, if not always scorned, at least almost universally
ignored. Sufferers want something that at least sounds like science—pseudo
or otherwise. And a number of studies[1]
indicate that we can again find two different sorts of participants in the game
who will obtain differing levels of success: the skeptics and the arguably more
credulous pragmatists.
Now it’s important to distinguish this
effect from what happens in a traditional con, where there’s no overall or
long-term benefit for the “rube” because all the real goodies go to the
sophisticate running the game. Being “fooled” is a different matter in the area
of one’s health. Just as it seems eminently sensible to choose to become an
instantaneous fat cat rather than retain a foolish pride in one’s high level of
“scientific consistency,” it also seems obviously better for a sick person to grab
an opportunity for a quick and easy return to health rather than worry overmuch
about what “Big Pharma” can or can’t prove to the satisfaction of a bunch of government
bureaucrats (who, after all, may just be pointy-headed know-it-alls or useless patronage
hires). The point is that this formerly maligned “gullibility” can reasonably
be claimed to be valuable across more domains than is regularly recognized—and is thus a perfectly sensible approach. And while it’s possible
that a bad actor might profit from a bit of hyperbole, those winnings might be
swamped by benefits either to the alleged “greenhorn” or to the community at
large. So, it’s a mistake to treat my neighbor’s choice as analogous to a Ponzi
scheme being perpetrated on an unsuspecting lamb.
Consider a few other areas where this sort
of effect can be seen. First, the movies. Perhaps many of us can recall the
reaction we had upon first learning that the movie Fargo (1996) wasn’t
really based on actual murders in North Dakota, as is stoutly claimed in its
opening moments.[2] Director
Joel Coen’s thinking was clearly that the film would be more effective if it
proclaimed that it was based on events that really occurred. He likely thought,
“Hey, it’s a movie, and movies are allowed to be fictional, so it doesn’t
matter whether it’s true when this particular one claims it isn’t fiction.”
I myself felt a sort of betrayal when I found out about that move, but it was
wedded to a kind of vertigo, because I couldn’t deny that Coen was right. The
movie really was more effective because I bought the lie that it was a
docudrama. So the disinformation didn’t just benefit the movie-makers. And, many
will ask, what’s the harm? If anyone is terribly interested, they can just look
it up and they’ll find out exactly how much actual truth there was in Coen’s
grisly story. If it turns out that there was even less than found in the
Hollywood telling of Butch Cassidy’s life, so what? Maybe the truth-stretching film
makes more money, but that’s because moviegoers have benefited from increased
astonishment. Who is hurt?
Next, consider a modest institution which,
since its founding in 1988, has stood as a paragon of harmless wonder
generation via its clever use of “the little white lie.” I’m referring to an
odd storefront attraction just northwest of the Culver City section of Los
Angeles called “The Museum of Jurassic Technology.” If the place remains a bit
too eccentric to categorize even after a visit or two, an engaging book[3]
by Lawrence Weschler about the museum and its brilliant originator and continuing curator, David Wilson, may help. Weschler seems to start out sharing my bewilderment
at Wilson’s systematic doling out of cloudy but convincing half-truths, but he
ends up lionizing the exhibitor for his appreciation and effective use of wonder.
Among Wilson’s most baffling creations are a couple of quite believable but
entirely fabricated pseudo-scientists. One, called Geoffrey Sonnabend, is
supposed to have written a three-volume tome on the mechanism of memory while
he was a faculty member at Northwestern University in the 1940s. Sonnabend’s
“plane-and-cone theory of obliscence” doesn’t seem that crazy once one spends
a bit of time deciphering the framed diagrammatic models on the museum wall,
and even if it is a bit cuckoo, lots of weird hypotheses have been
proposed over the years by quack “scientists”—especially those focusing on “the
mind.” Both Sonnabend’s speculations and his personal history are admittedly vague
and quite bizarre, but how much midcentury American neurophysiology is a casual
museum visitor supposed to have at her fingertips? And what is so unreasonable about imagining the effects scholarly obsessions might have on a sensitive, middle-aged psychologist? After all, Sonnabend (in common with the absolutely non-fictional German scientist Gustav Fechner, by the way) was said to have suffered a severe nervous breakdown prior to coming up with his off-kilter theory. And isn’t our faculty of memory mysterious by its very nature? I mean, what, exactly, is
“the past,” anyhow? If it doesn’t exist anymore, how can we manage to have
access to it? If facts are weird, why can’t they have equally weird
explanations and explainers?
Wilson’s museum is piled high with such
stuff—much of it poorly lit and purposely askew. One exhibit may cause us to
reflect that, while we are quite sure that the breath of a duck has never cured
a single human ailment of any kind, we aren’t quite so certain that no
now-crumbling witchcraft book (maybe a Bulgarian one?) ever suggested the
effectiveness of such a cure. Is Wilson’s duck breath display approvably
“historical” if what it claims can be found in a suitably old or exotic
book? Is it enough that some person or group did once believe in the
duck breath cure? If the bogus Professor Sonnabend is a sufficiently reasonable
facsimile of actual pseudo-scientists to make for a legitimate museum exhibit, maybe
it’s also enough that some person or group might have believed (in the
now “obliscent” past) that the breath of a duck was curative for ague or quinsy.
So why fight the delicious experience enabled by our ingenuousness, no matter
how naïve it makes us seem? Who is the buzz-kill who would fault anyone merely for
exhibiting a childlike innocence?
Weschler quotes Einstein’s remark that
“The most beautiful experience we can have is the mysterious….Whoever does not
know it can no longer wonder, no longer marvel, is as good as dead, and his
eyes are dimmed.”[4]
Weschler adds that Wilson’s attitude seems to be that the “delicious confusion”
produced by a visit to the Museum of Jurassic Technology “may constitute the
most blessedly wonderful thing about being human” (p. 51).
Well, I can’t deny that the place is a
kick. But is there really no societal cost at all associated with shrouding the
very idea of truth in an everlasting ambiguity? Everyone knows that the dissemination of what is now widely called “disinformation” is today cheap, easy, and nearly ubiquitous, and this fact is seen by some as an existential danger to civil society.
It certainly can’t be disputed that a lot
of new work on disinformation is currently being published. In fact, I’ve
reviewed a few books on that matter myself.[5]
One recent volume I had been intending to write something about caused me to think
more about a possible connection between the accommodation of an (apparently
innocent) wonder and the known dangers of “fake news.” It was the late David
Graeber’s final published work, Pirate Enlightenment, or the Real Libertalia.
Graeber was an anthropologist, but is probably at least as well known for being
a firebrand anarchist, and as I began to read, I realized that the book was at
least as much activism as it was anthropology. My interest in this work
centered on the author’s claims about the effects on Enlightenment thinking produced
by certain democratic experiments allegedly taking place in several early 18th
Century pirate kingdoms in Madagascar. But it wasn’t long before I began to
suspect that a significant portion of Graeber’s history was based on wishful
thinking. His goal was to credit such “great utopian experiments” as
Libertalia, a storied pirate republic, for certain egalitarian and democratic
ideas that would later be found in the works of Montesquieu, Voltaire, and
other Enlightenment figures.
To be fair, Graeber explicitly admits that
there’s never been much evidence for Libertalia’s existence. But he nevertheless
believed that white, male Europeans have taken far too much credit for the
intellectual breakthroughs of the 18th Century, and he noticed that the
description of Libertalia in the 1724 General History of the Pyrates (by
one “Captain Charles Johnson”) depicts a much more diverse origin for majority
rule, equal rights for women, the jury system, decent treatment of laborers,
and so on. Furthermore, Graeber argued that even if Libertalia was entirely
made up, there were certainly other pirate collectives around at that time,
some of them in Madagascar (and thus possibly infused with multi-racial and
women-dominated institutions) that could reasonably be inferred to have had
significant influence on Enlightenment thinking.
Naturally, most of Graeber’s readers will share my nearly complete ignorance of what was happening around the Indian Ocean and its islands during the early 1700s. Given
this ignorance, what attitude should readers take toward Graeber’s assertions
about the intellectual origins of modern democracy? Suppose we do what was suggested above for Fargo skeptics: perform a little independent
research. For example, we could pick up The General History of the Pyrates ourselves
and skim a few chapters to see what strikes us as plausible. If that’s
our tack, the first thing we’re likely to notice while hunting around for a copy
of the book is that about half of the numerous available versions will be
attributed to Daniel Defoe, the author of Robinson Crusoe, while the
rest won’t mention Defoe at all, but instead give the author as Captain
Johnson. So, perhaps, we will move on to Wikipedia and see what the (possibly
self-appointed) experts there say about Libertalia. There we will find a
thorough description and history of the community’s supposed leader, a certain
Captain Misson, but will encounter no mention of Defoe’s possible pseudonymous authorship
of the story. If we continue our investigation by looking at the Wiki pages for
both Defoe and The General History of the Pyrates, we will see that there is no evidence for Libertalia besides what can be found in that book, which, since the 1930s, has almost unanimously been held to have been written by Defoe.
Does that settle matters? Not at all! We
will also discover that the most recent, detailed scholarship absolutely denies
that Defoe could have known enough about the geography of Madagascar to have
been the writer.
It’s all a bit dizzying, but, like the
other examples given above, Graeber’s thesis was partly dependent on the
accepting spirit of his readers. He had a point that he believed was important,
and it was one that he knew could be more effectively made if certain questionable
facts were quietly assumed to be in evidence. This is not to suggest that he should
be thought of as cruelly taking advantage of his readers. He was simply moved
by what he took to be an important thesis that could make the world better if somebody
would successfully advance it.
Suppose he was right. Let’s say we agree
that women, blacks, and Muslims have never gotten the credit they deserve for
intellectual advances in the West. We might then try to defend his maneuver by
noting that, in addition to righting that wrong, it is no worse than that of the
ivermectin prescriber, Newcomb’s perfect predictor, or the inventor of a
counterfeit murderer or midwestern neurophysiologist for purposes of
entertainment. What is gained, we might ask, by being overly scrupulous? After
all, sometimes the facts are simply impossible to obtain no matter how persnickety
we are. We know, for example, that it’s very unlikely that any new information
about Malagasy pirate societies will be unearthed in the foreseeable future. Can’t
we just go with our gut and allow society to reap the benefits?
I don’t think so. It rather seems to me
that the feeling of seasickness produced by combining an obstinate inability to discern the actual facts of some matter with the representations of an advocate (even one whose societal goals we find congenial) is a warning
that we should take seriously. It provides us with a good reason not to just
go on. Well, what is my argument for this harsh position? How did what seemed
to have been attitudes of innocent altruism suddenly turn into cases of
culpable negligence? The answer is that times have changed dramatically since
Beecher and Newcomb published the works for which they are now remembered—indeed,
even since Coen may have reasoned that troubled viewers could just “look it up.”
The fact is, we’re in a much more precarious world now than we were even a
decade ago. Today, AI might be responsible for the design and composition of
every person who can be seen cheering in a political advertisement (or beer
commercial[6]),
and ChatGPT may, in a single minute, have written every paper that some hapless
English Literature professor is now dutifully grading. Rather than involving a quaint
“Jurassic” description of the intent of an obscure (if even real!) Danish monarch
to suppress a group of bashful artisans who are claimed to be capable of carving
incredibly intricate pieces of controversial art onto cherry pits, the
conspiracies now may concern the drinking of children’s blood by members of a large
political party in the basement of a Washington, D.C. pizza shop. While both indictments
might harm the falsely accused, an incredibly wide distribution of nonsense is
now so inexpensive, so easy, that the perils are unimaginably greater. No one
is—or was ever—likely to bring a car full of weaponry to threaten the (real or
imagined) former Danish King because of his allegedly severe treatment of a commune
of (possible) artists.
It’s just different now. Many of us wake
up each day to find our email inboxes filled with fantastic proclamations about
the war Biden has secretly declared on China or a stunning proof of Trump’s murder
of three of his former mistresses. We may even see something suggesting that Q believes
we’ll all be enslaved by Venusian Democrats in precisely one month’s time. In
today’s political and technological environment, there simply seems to be a newly
born duty of increased diligence.
Does this mean we should now refuse the
million-dollar box and laugh off any nutritional supplement not backed by a
double-blind study? Not necessarily. The moral may be only that we need to be increasingly
skeptical of assurances, and not just those regarding past winners of games like Newcomb’s or those made by people supposedly cured by a newly hyped
supplement. We need also to be generally wary of those making the most
“wonderful” political promises and charges of “incredible” evildoing. Some of
this work is easy and arguably has no real-world consequences. Consider “Do we
really know that the chooser of the single box always got more money in the
past? What’s the evidence for that wildly counterintuitive claim?” There’s no
heavy lifting there, and no one is likely to get angry at us for asking. But other questions
are much trickier: “Who is funding the marketing for the drug or political prognosticator?
Is there any reasonably impartial science that supports your allegation? Are
there really no dangers associated with taking your word for this without
evidence?” Of course, sometimes the matter at hand will be deeply vague and
uncertain. There just may not be any sufficiently impartial experts who can be relied
on to establish the authorship and degree of truthfulness of a particular history
book or who really grasp what Republicans are currently thinking about
abortion or how Democrats really feel about immigration. In such
cases, it may be better to withhold our judgment completely rather than just go
on—which would be to go off half-cocked.
In modern capitalistic democracies, every
voter and every consumer is thought to be in a position of (at least a teensy
bit of) power. That means that reliable information is essential if we aren’t
going to make an even worse mess of the world than we have already. And as we
have this morsel of power both to vote and purchase, there’s little doubt that
if an interested person or group sees a way to alter our preferences through
the use of wondrous untruths, they will try to do so. In a word, we need to
understand that wonder is sublimely seductive, and the modern world makes it
extremely easy to partake of items that are almost too amazing to be
believed. So, in these CGI times, we should try to keep in mind that hard
evidence and truth have deeper, longer-lasting value even than such glorious
marvels as are most wondrous to behold.
[1] See, e.g., Zhou, Wei, et al., “The Influence of Expectancy Level and Personal Characteristics on Placebo Effects: Psychological Underpinnings,” Frontiers in Psychiatry 10:20 (2019).
[2] I understand that The Texas Chain Saw Massacre (1974) also falsely claimed to be based on real crimes.
[3] Lawrence Weschler, Mr. Wilson’s Cabinet of Wonder (1995).
[4] Albert Einstein, Ideas and Opinions (1954).
[5] See, for example, the reviews of books by Sophia Rosenfeld, Rick Hasen, and Lani Watson here: https://www.3-16am.co.uk/articles/.c/a-hornbook-of-democracy-book-reviews