
February 24, 2016

On the all-too-common phenomenon of assuming one's ideological foes to be ignorant, immoral, or insane

Aaron Ross Powell has written a mostly excellent essay on the extreme tribalism (although he doesn't call it that) that underlies the emotions aroused by virtually every contentious issue of the day.  As I opined in a previous post, the positions we take so earnestly are the rational end of a long chain of propositions reaching all the way back to the foundational assumptions of our worldview, which are usually accepted without question.

Because our conclusions about the issue of the moment are so mind-bogglingly obvious to us, we can't see how anybody could come to any other conclusion.  If they do, it's a sure sign that they're ignorant, that they know better but are too morally corrupt to embrace the right view, or that they're genuinely mentally ill.  I don't think I'm wrong in assuming that most of us have felt this way about our ideological foes at one time or another, or have been the target of such speculation.

Here's how Powell illustrates the conflict:

Say we believe that Policy A, which we support, will lead to good Result X. We encounter someone who instead advocates for Policy B. Because of our certainty about the evidence and how to interpret it, too many of us too often see that person’s support for Policy B coming not from a good faith and reasoned belief that Policy B is a better way to get to Result X. Because if what we believe is both correct and obvious, then the advocate of Policy B must know that it will undermine the achievement of X. And if X is a good result, then this person doesn’t just disagree with us, but actively wants something bad to happen.

Unfortunately, this all-too-common way of thinking about political debate leads to serious problems, because it means that our empirical beliefs are essentially closed to critique unless that critique comes from someone who already shares our policy preferences. If our interlocutor doesn’t share our policy preferences, then before the conversation can get off the ground, we’ve already decided he is either stupid (he’s too dumb to see his error) or immoral (he maliciously prefers evil outcomes). But, of course, if our empirical priors or interpretive framework are wrong, then someone with better priors will likely come to a different policy conclusion.

Thus individual policy preferences exist as a signal of their holder’s intelligence or moral worth—and a challenge to one’s policy preferences gets interpreted as an attack on the holder’s smarts or basic goodness. Because we believe that certain policy preferences signal moral worth, we adopt our policy preferences based on how we would like to be perceived. And we hold to those policies regardless of their actual, real-world outcomes, or pay so little attention to their outcomes that we never feel the need to revise our political preferences.

If you see nothing wrong with this phenomenon, and your chief goal in any ideological conflict is to vanquish your foes, there's not much left to talk about.  Just carry on doing what you've always done.  The current chaos that is the 2016 presidential campaign is the world you live in and apparently enjoy.

If, on the other hand, you wonder whether it's even possible to coexist with people who hold points of view that bug you so much, and you hope that it is, stick around.  Understanding this ingrained human tendency to organize into tribes is, I believe, the only alternative to putting the maximum possible distance between all human beings.  I also believe that understanding our tribal thinking is the first step toward learning how to bring others around to our point of view.

This is going to be a running theme on this blog as I continue to ponder the riddles of human interaction.  I hope you'll find it worth the read.

February 22, 2016

The FBI v Apple

I'm not a big fan of Apple in general, but props to them for their principled refusal to open Pandora's box by creating a custom version of the OS that bypasses the security features on the San Bernardino terrorist's iPhone.

There's no such thing as a one-time crack.  Even if the government itself doesn't exploit the cracked version of the OS (and they'd hardly pass up the opportunity), it's quite possible that it will escape into the wild somehow.

February 10, 2016

Why most online arguments about politics and religion end in a stalemate

Scenario: Two people are engaged in mortal ideological combat on Facebook.  Each person argues their point passionately, genuinely baffled that what is manifestly obvious to them does not register at all with the other.  Since The Truth™ is so obvious, each combatant starts wondering why their opponent refuses to concede the point, eventually attributing such incorrigibility to bad character or, even worse, to mental illness. 

What is actually going on here?  Why do most skirmishes like these end with nobody persuaded and with bad feelings all around?

I believe this happens because the actual conflict usually lies not in the topic itself, but in the long chain of presuppositions that lead rationally and inexorably to each person's view on it.  Within the closed system of a person's worldview, the point they're arguing makes complete sense, flowing logically from the worldview's foundational principles (i.e., the collection of assumptions forming the filter through which reality is interpreted), many or most of which are accepted as givens.

You won't get them to budge on the topic being debated unless either (a) you can convince them that the point they're arguing is inconsistent with the foundational principles of their worldview, or (b) you can somehow erode their confidence in the foundational principles themselves.

Strategy (a) is possible, but only those who embrace the same worldview are likely to have any success with it.  If someone of an opposing worldview tries it, the natural reaction is to question the aggressor's motives (thus allowing the target to ignore the argument being made).

Strategy (b) may have some chance of eventual success, but the foundational principles aren't even part of the current debate, so forget about that for now.

Imagine a worldview as a tree.  The trunk and roots are the foundational principles; the opinion being argued at the moment is more akin to a cluster of leaves at the end of one of the tree's branches.  Everything between the cluster and the trunk is the chain of presuppositions connecting the opinion to the foundational principles.  Even if someone succeeds in damaging or lopping off that cluster, it'll grow back in due time -- nothing really gained or lost.

Facebook is where I'm most likely to encounter people of significantly differing worldviews.  I've developed an instinct for recognizing when, for my potential opponent, an opinion represents a leafy cluster or something closer to the trunk.  If the former, I'm more likely to refrain from joining the fray, because the most likely outcome is bad feelings and nothing else.  If the latter, I might cautiously engage, in the hope that something worthwhile might transpire.

With the leafy-cluster topics, I will occasionally come across a viewpoint that baffles me so much that I step back and ask myself, How could someone possibly hold such an opinion?  Then, over time, through observation and sometimes through tactfully worded questioning, I will attempt to discern that opinion's chain of presuppositions.  Analyzing those presuppositions usually forces me to re-examine (and, so far, reaffirm) the presuppositions of my own worldview.

Though it happens rarely, the process does occasionally lead me to soften or change my opinions (or at least to stop thinking of my opponent as insane).  But -- and this is the point -- I come to that conclusion on my own, not as the result of being outmaneuvered in an online argument.