"Whenever I go into a restaurant, I order both a chicken and an egg to see which comes first"

Wednesday, October 31, 2012

Risks And Why We Take Them

We all take risks, and most of us do so irrationally. We make assumptions and then take action based on factors which have nothing to do with true likelihood. Cass Sunstein, writing in the New Republic (10.31.12), suggests three major reasons why we take risks when we really should not:

The first involves unrealistic optimism. Some of us show an optimism bias; we believe that we are peculiarly immune from serious risks.

Human beings are also subject to availability bias. When people are evaluating risks, they tend to ask whether other, similar events come to mind, thus using what social scientists call “the availability heuristic.” If you immediately think of cases in which officials exaggerated risks, and asked people to take precautions that proved to be unnecessary, you might decide that there’s nothing to fear.

The final problem is motivated reasoning. For many of us, evaluation of risks is not just a matter of calculation. Our motivations, in the sense of our emotions, our desires, and our wishes, count too, and they can much affect our judgments. If you are strongly motivated to believe that capital punishment deters crime, you are likely to credit the studies that support that belief.

The worst possible scenario is when there is a Perfect Storm of all three irrational risk assessments. The likelihood of Superstorm Sandy hitting my house in DC is practically nil, I say, with unrealistic optimism; and even if the weatherman says that the chances are good, they have been wrong so many times before that I have half the basement filled with canned food, say I with availability bias; and the liberal media are so biased towards global warming that they deliberately inflate the risk, I conclude with motivated reasoning.

I fall more or less into this Perfect Storm category of total denial of risk.  Falling trees have never hit my car, even when I hoped that some deus ex machina would relieve me of the monthly payments; and global warming is a fiction, because who can forget the Snowmageddon of 2010, when more than three feet of snow fell on DC? I had to dig my car out not once but three times, because the new mayor, wanting to show he was not like Mayor-for-Life Barry (who, referring to his administration’s poor response to heavy snow, said, “Get over it.  It will melt”), scoured every spare truck from the area and sent it to my neighborhood.  These homeboys and West Virginia crackers didn’t have lawns in their ‘hoods, so humping up the curb and gouging out snow-covered ones they couldn’t even imagine was just fine with them.  No problem.

That was only a few years ago, so if there is such a thing as global warming, did it take a break that year?  Nature’s equivalent of skiing in Aspen with two feet of fresh powder?

The most important element in all this is that I have taken so many risks in my life and come out all right that shit simply cannot happen to me.  I have had condoms break and never once received the panicky “We’ve got to talk” call.  I have been seen in places I never should have been seen, and come away clean as a whistle.  I have driven totally whacked out oblivious behind Bombay Black and crystal meth and made it fine from the Village to Jersey (why was I ever going to Jersey in the first place?).  I lived in India for five years in the Sixties, and that was one constant, continuous risk of disease.  I travelled to the hellholes and shitholes of Africa but missed the bullets, the kidnappings, the horrific traffic accidents, or even getting eaten by lions in Great Zimbabwe.  So what’s the big deal about risk?

A few things Sunstein ignores.  First, the ‘I-don’t-give-a-shit’ risk-benefit analysis.  I realize that bum-fucking a girl I met in a bar without a condom is risky business; but who cares?  At my age, any girl who will turn bottoms up for me is better than the best Christmas present ever.  Antony, the Triumvir of Rome, respected general, leader of the Roman people, gave it all up for an Egyptian tart.  He knew exactly the risks he was taking.  Either Cleopatra would turn tail on him and run from battle (which she did); or the fraying alliance with Octavian would finally break (which it did).  So what?  He was at the end of his career and his life, so WTF.

Second, we live in an age where the sheer complexity of life makes reasonable and rational risk assessment impossible.  Your oncologist tells you that if you choose treatment A, you will have an X percent chance of the cancer returning and a Y percent chance of side effects; and if you choose treatment B…etc.  Bewildering.  The choice is either to close your eyes and pick A or B, knowing that you are flipping a coin; to force the doctor to choose for you, putting him in an uncomfortable legal position; or to run out of the room knowing that neither choice is a good one.  How many times have I seen medical opinions overturned?  And how many guys are riding the elevators up to their K Street law offices without a prostate and unable to get it up because doctors said “No prostate means no prostate cancer” and they bought the argument, afraid of risk?

I knew an older man who had begun to try to reduce risks to his health in late middle-age.  He first reduced his consumption of fats, sugar, salt, and alcohol by half, then two-thirds, then completely.  Eating dinner at his house was a trial.  The chicken had been boiled so that all fats had been rendered.  This greyish-white lump of meat sat on the plate accompanied by spice-less, half-cooked vegetables, and a baked potato with no fixin’s.  He then became obsessed with indoor air pollution, and he deployed super-sucker air purifiers which mercilessly roared night and day.  He did painstaking and meticulous research to determine which roads between Falls Church and Alexandria, Virginia had the fewest traffic fatalities.  Finally, he never left the house, and the fire department had to break the door down to retrieve him and commit him to the State loony bin.

Once you start becoming concerned with risk, you see that dangers lurk everywhere.  Your fleeting shadow of a life is even more precarious than Hamlet ever imagined; and worse than he did, you worry about every combination and permutation of possibility, opportunity, and chance.  In the end, exhausted and depleted, you either say ‘fuck it’, pray to God to guide you to the right decision, or simply let the sands of fate, chance, and fortune cover you up like a scraggly bush in the Sahara.

Our response to risk is really what defines us.  To understand another, it is best to put on the risk-focused glasses and see how he reacts to unknown odds.  Some will resort to fatalism, whether philosophically or religiously determined. Actions are either according to God’s plan or to some random arrangement of cosmic billiard balls.  In either case, you go with the flow or play the cards dealt you, and see what comes up.

Others will resort to beating the odds.  Very American, this approach. We can do anything, solve any problem, beat the most overwhelming odds.  Nothing wimpy, flaccid, neutral, or European about us.  The weight of 2000 or more years of history means nothing when you have resolve, will, determination, and confidence.

I am a firm adherent of the ‘fuck it’ school. I have lived too long and seen too many theories go down the drain to actually trust in received wisdom, or believe in my ability to sort out conflicting theories to be able to accurately assess risk.  Life is far too short to worry all the time, or to spend endless energy in campaigning for Romney or Obama or fighting for the Preservation of the Bay.  What will happen if one or the other wins?  Not much, and in 100 years neither one will be remembered except for their gaffes and missteps. 

Is there a greater risk to the body politic if Romney wins?  I doubt it.  Should I move to the Mojave Desert where, I am told, there are no natural disasters, little air pollution, no terrorist threat, and no spoiled fish?  Not exactly.  I am happy here in Washington, DC, where I am glad that Superstorm Sandy missed us, that my lights never went out, and no tree fell on my car.

Tuesday, October 30, 2012

Why Has Racial Prejudice Increased During The Obama Years?

In a widely reported recent AP poll (conducted in collaboration with NORC at the University of Chicago, the University of Michigan, and Stanford University), the percentage of white Americans who harbor racial prejudices against blacks has actually increased during the last four years. Jonathan Capehart, in a blog post for the Washington Post (10.30.12), writes:

In 2008, anti-black attitudes were held by 48 percent of Americans surveyed. Today, that number is 51 percent. When implicit racial attitudes are measured, that statistic jumps to 56 percent. The viewpoint is even worse for Hispanics: A poll done last year showed that anti-Latino attitudes were held by 52 percent of non-Hispanic whites. On the implicit racial attitudes test, the negative views of Hispanics goes to 57 percent.

The ‘implicit racial attitudes test’ is an adaptation of a familiar psychological test which matches descriptive words with images.  Most people, for example, associate flowers with good words and insects with bad; and while avoiding the usual racially stereotypical words like ‘lazy’, researchers can elicit subtle positive or negative associations through a range of words. The test is relatively new, and anyone can take it by going to the website (https://implicit.harvard.edu/implicit/demo/).
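
Tests of this kind are usually scored from reaction times: people sort words and images faster when the pairing matches the association already in their heads, and slower when the pairing is reversed. Below is a minimal sketch of that idea in Python, with made-up reaction times and a simple difference-of-means score; the real test uses a more elaborate block design and a standardized D-score, so the data and the function here are purely illustrative.

```python
# Illustrative only: hypothetical reaction times and a simplified scoring rule.
# The actual implicit-association test uses a block design, error penalties,
# and a standardized D-score.

from statistics import mean, stdev

# Hypothetical reaction times in milliseconds for one respondent.
# "Congruent" trials pair categories the way the respondent implicitly expects;
# "incongruent" trials reverse the pairing.
congruent_ms = [612, 598, 645, 580, 630, 601, 622]
incongruent_ms = [710, 695, 740, 688, 725, 702, 731]

def association_score(congruent, incongruent):
    """Difference in mean latency, scaled by the pooled spread of all trials.

    A larger positive score means the respondent was noticeably slower on the
    reversed pairing, i.e. a stronger implicit association.
    """
    pooled_sd = stdev(congruent + incongruent)
    return (mean(incongruent) - mean(congruent)) / pooled_sd

print(f"Association score: {association_score(congruent_ms, incongruent_ms):.2f}")
```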

From a review of the literature generated in the short time after publication of these data, there are no confirmed reasons why this rise has occurred, although some suggest that it is because of President Obama – either the racial hatred buried in many people emerged after what they regarded as the unheard-of and unconscionable election of a black man as President, or Obama did not do enough to use his ‘post-racial’ victory to consolidate the significant gains made in racial tolerance.

Harris [Director of the Institute for Research in African-American Studies at Columbia University] penned an op-ed for the New York Times on Sunday that argued, “The Obama presidency has already marked the decline, rather than the pinnacle, of a political vision centered on challenging racial inequality.” And he charged that black elites and intellectuals have traded the substance of what might be achieved for the symbolism of having a black man in the Oval Office.

I don’t buy the second argument, for Obama’s silence on the matter should not increase racial prejudice.  It must be due to the former – he is a lightning rod for racial hatred.  I have spent a lot of time outside Washington, DC in the past few years, mostly in small towns far from the Nation’s Capital.  I lived and worked in areas where Obama was not only a president with whom one disagreed, but a man to be detested.  The conspiracy theories to which I had paid scant attention, assuming that they were espoused by a very few, were alive and thriving.  Never in the days of Watergate, when Nixon was hated and vilified by the Liberal Establishment, did I hear such virulent, poisonous attacks.  Nixon was hated because he lied, he covered up, and because he was ambitious beyond his means.  Obama is hated because he is assumed to be a Muslim, a Radical Black Activist, a Socialist, and a foreign-born traitor to the American cause.

I have pondered this for all the years of Obama’s presidency.  Why did Bill Clinton not provoke the same visceral hatred as Obama?  He was certainly as liberal as Obama, if not more so.  Perhaps it was because he came from Arkansas, spoke with a Southern accent, and was raised poor, just like many whites in the South.  Or perhaps it was because he really did seem to feel the pain of most people who had not attained the American dream.  Or perhaps because he was such a good communicator and connected with his audience. Or perhaps because he was white.  I hate to come to that last conclusion as many of my Eastern liberal friends have.  I would like to believe otherwise and fight what seems to me a facile conclusion.  These new data have got me thinking again.

There is more to it than Obama, however.  Racial prejudice has increased across the board, says the AP poll, not just in areas of the country, like the South, where one might expect a disproportionate racial bias.  Since prejudice usually increases in difficult times, when it is easy to blame The Other for one’s misery rather than the complexity of the economic system, perhaps people react strongly to what they see as the continuing preference for minorities.  Minority favoritism, a charge often leveled when people’s sense of fairness has been damaged, might be seen as rising because of Obama, himself a minority.

With the high unemployment rate of the past four years, it is understandable if out-of-work people take out their frustration on those who they feel have gotten a break through government programs funded with their taxpayer dollars, when they themselves haven’t. Harris suggests that the extreme political polarity in the United States in the past four years might be a contributing factor.  Obama and the Democrats are being portrayed as socialist income redistributors and as poverty, gender, and race merchants, thus exacerbating already sensitive feelings among the white disenfranchised.

Racial discrimination is more than anything a function of income and class.  Few whites, except perhaps in those parts of the country where such discrimination is longstanding, historical, and deeply-rooted, would object to a black doctor, lawyer, or investment banker moving in next to them.  The animus against Obama notwithstanding, if Colin Powell or Morgan Freeman lived down the street, most people would be honored that they chose their neighborhood. Since blacks still have far less social mobility than whites, and proportionately fewer blacks are gaining higher socio-economic ranks, there are still relatively few Colin Powells to move in next door.

A recent report found that 53 percent of blacks born in the bottom income quintile remain there as adults, while only 33 percent of whites do. Research has also found that the children of black middle-class families are more likely to fall out of the middle class.

Despite the increased presence of blacks and women in the work force over the years, women and non-whites hold jobs with less rank, authority, opportunity for advancement and pay than men and whites, a "glass ceiling" being said to prevent them from occupying more than a very small percentage of top managerial positions.

One explanation for this is seen in the networks of genders and ethnic groups. The more managers there are in an employee's immediate work environment, the higher the employee's chances of interacting and spending time with high status/income employees, and the more likely these employees are to be drawn on for promotion. As of the 1990s, the vast majority of all clerical and office workers were women, but they made up less than half of all managers. Less than 15% of all managers were minorities, while roughly a quarter of all clerical and office employees were. The networks of women and minorities are simply not as strong as those of males and whites, putting them at a disadvantage in status/income mobility. (Wikipedia)

In conclusion, these data were indeed a surprise to many.  After all, we thought, after electing a black president and moving into a post-racial era, discrimination would decrease, not increase.  The lessons are clear.  First, the law of unintended consequences is alive and well: it could be that Obama himself has contributed to this phenomenon in ways we could never have predicted.  Second, race is still very much alive as an issue in America, and it will not disappear until true economic equality is achieved.

That could take a long, long time.

Is A Fetus A Person? Abortion And The Pain Defense

What if you knew that the fetus growing in your body could feel pain - that it responded to aggressive stimuli and tried to move out of harm’s way? Would you reconsider the abortion you had planned?


In an article in the New York Times (10.30.12) William Egginton explores the issue and speculates on whether neurological findings such as the one suggested above could be the final necessary argument to overturn Roe v. Wade.  If science were to conclude that the fetus in fact does feel pain, then it is a sentient being – a human being – and it would then have equal rights under the Fourteenth Amendment.

Not so fast, argues Egginton.  The definition of a human being is not determined simply by neuro-transmitters, synapses, bio-chemical transfers, and involuntary reflexes.  Philosophers have debated the question of Being for centuries, and until now the discussion has focused on consciousness.  “I think, therefore I am”, said Descartes in the 17th Century, suggesting that the best, if not the only, proof of existence was awareness of one’s own consciousness.
While Descartes considered whether a neonate or even young children might have consciousness of this kind, in the end he rejected this hypothesis, insisting on the “reflective” nature of consciousness… “I call the first and simple thoughts of children, that come to them as, for example, they feel the pain caused when some gas enclosed in their intestines distends them, or the pleasure caused by the sweetness of the food that nourishes them…. I call these direct and not reflexive thoughts; but when the young man feels something new and at the same time perceives that he has not felt the same thing before, I call this second perception a reflection, and I relate it only to the understanding, insofar as it is so attached to sensation that the two cannot be distinguished.”

In the ensuing years, philosophers continued to focus on the nature of Being.  Although most assumed that a human soul – a more religious way of describing Being – was given by God, it was up to Man to describe its character and dimension.  This conception became increasingly important in the Age of Enlightenment, an era in which God’s will and purpose were subjected to rational scrutiny.  There was never any question of either God’s existence or His intent; but it was Man’s duty to try to understand both.

In her book, Bodies of Thought: Science, Religion, and the Soul in the Early Enlightenment, Ann Thomson provides a detailed review of the debates about the nature of mind and soul from the 17th to the 19th centuries.
In spite of the fact that Willis [a 17th Century Cambridge physiologist]  subscribed to the doctrine of an immaterial soul to account for the higher intelligence of human beings, Thomson argues that ‘Willis’s work ... attracted the attention of those wanting to elaborate a [purely] material account of the mind divorced from an immaterial soul’. In her view, Willis developed conceptions of active matter which could account for the higher functions of the mind, and dispense with the need for an immaterial soul (Reviews in History, May 2010)
These debates became even more fundamental and focused on the nature of matter and whether or not the soul was material or immaterial:
Like Locke later, Baxter [a 17th century theologian] argued that one would be limiting the divine power by denying that God could super-add thought and perception to matter… A consensus was gradually built up during this period among most philosophers and scientists that matter, generally conceived as consisting of inert atoms, is in itself inactive. What becomes philosophically interesting and up for debate is the question of what further entities one must add to the universe to account not only for thought… (Reviews in History, May 2010)

The Age of Enlightenment was also the age during which the concept of individual and human rights emerged, and the American Declaration of Independence and Constitution were based on the philosophy of the age.  The Founding Fathers, although schooled in rationalism and bred on an unshakeable belief that there is a Creator and that He is the source of our ‘inalienable’ rights, had to translate these principles into legal precepts. The new American was a human being, created by God, endowed with a soul which defined his humanity; and society – and government – should facilitate his knowledge of both God and Man.
The brain sciences, like all branches of the natural sciences, are immeasurably useful tools for individuals and communities to deploy (one hopes) for the improvement of our understanding of the natural world and the betterment of society. The basic rights guaranteed by the Constitution, however, have nothing at all to do with the truths of science. They are, to use Karl Popper’s terminology, non-falsifiable, like religious beliefs or statements about the immortality of the soul; to use Thomas Jefferson’s word, they are inalienable. Like the equality of its citizens, in other words, the basic liberties of Americans should not be dependent on the changing opinions of science

In the late 18th Century, as rational science was emerging as a new tool to help understand the universe, Kant dismissed its ability to answer essentially philosophical questions:
More than 200 years ago the German philosopher Immanuel Kant argued, in his “Critique of Pure Reason” that, while science can tell us much about the world we live in, it can tell us nothing about the existence of God, the immortality of the soul, or the origin of human freedom; moreover, he demonstrated with exquisite precision that should it try to come to conclusions about these questions, it would necessarily fall into error.
Deliberations today are no different and scholars and scientists agree on the indefinable nature of soul, self, or Being:
Current neuroscience distinguishes a spectrum of degrees of  “consciousness” among organisms, ranging from basic perception of external stimuli to fully developed self-consciousness. Even the idea of self is subject to further differentiation. The neuroscientist Antonio Damasio, for instance, distinguishes degrees of consciousness in terms of the kind of “self” wielding it: while nonhuman animals may exhibit the levels he calls proto-self and core-self, both necessary for conscious experience, he considers the autobiographical self, which provides the foundations of personal identity, to be an attribute largely limited to humans.
Which brings us back to the issue of abortion, fetuses, and pain reflex. Egginton argues that three centuries of philosophical deliberation have concluded that a human being becomes a person when he becomes actively sentient: aware of his surroundings and, from the first moments of life, trying to make sense of it all.
Consciousness, in other words, presents a much higher, and much harder-to-establish, standard than mere potentiality of life. Therefore, implicit recourse to the concept fails as a basis for replacing viability as the line dividing the permissibility of abortion from its prohibition. For a fetus to be conscious in a sense that would establish it as a fully actualized human life, according both to current neuro-scientific standards and to the philosophical tradition from which the concept stems, it would have to be capable of self-perception as well as simple perception of stimuli.
Science is and always has been one way of illuminating the search for truth, but not the only way; and to assume so is to ascribe to it a dictatorship of pseudo-rationality. Scientific ‘proof’ is an ephemeral and unattainable goal.  Even Einstein’s Theory of Relativity is being re-examined today.
When science becomes the sole or even primary arbiter of such basic notions as personhood, it ceases to be mankind’s most useful servant and threatens, instead, to become its dictator. Science does not and should not have the power to absolve individuals and communities of the responsibility to choose.

Monday, October 29, 2012

Evil II–Augustine and Kierkegaard

In my first post on the subject of evil (http://www.uncleguidosfacts.com/2012/10/does-evil-exist.html), prompted by an excellent series by Clare Carlisle of the Guardian, I discussed Augustine’s explanation that evil is not a thing in itself, but the denial of goodness.  Carlisle (in the second piece of the series) observed that Augustine needed to reconcile the absolute goodness of God with the existence of evil in the world; he could never believe either that God had a co-equal, the Devil, or that He would create a force, evil, that was contrary to Him.

Augustine came to regard this cosmic dualism as heretical, since it undermined God's sovereignty. Of course, he wanted to hold on to the absolute goodness of God. But if God is the source of all things, where did evil come from? Augustine's radical answer to this question is that evil does not actually come from anywhere. Rejecting the idea that evil is a positive force, he argues that it is merely a "name for nothing other than the absence of good".

This explanation has always seemed too tame for most philosophers and for the rest of us who see people contemplating, plotting, and doing evil – a conscious, deliberate, amoral act, not just the absence of an abstract good.  The horrors of Stalin, Mao, Hitler, and Pol Pot are not just ‘not good’ actions, but something far worse.

Augustine, however, had a second theory, one which corresponded more to the real world:

Augustine thinks that our goodness is derived from God and completely dependent on him. "When the will abandons what is above itself and turns to what is lower, it becomes evil – not because that is evil to which it turns, but because the turning itself is wicked," he writes in The City of God.

Because we are free, Augustine argues, we must be able to choose to turn away from God, to disobey him and to try to live independently – maybe as if he didn't exist at all.

The ‘free will defense’ says that evil is a consequence of freedom; freedom is a good thing and therefore we have to accept evil as its unfortunate side-effect.

So far, so good; but given the extent of evil in the world (not just the Hitlers and Stalins but the Bernie Madoffs, the Big Tobacco conspiracists, and those like Iago in Shakespeare’s Othello who appear to have no political, social, or economic motivation for their torture and destruction of others), how can we account for the fact that so many use their God-given free will to turn away from Him, from goodness, and to choose evil?

Augustine was more interested in theological philosophy – explaining theoretically the world of God and the world of Man and how the two intersected – than in answering the question of why people choose evil.  It was enough for him to explain the nature of evil and the philosophical construct – free will – which permitted it and to consider its spiritual consequences.  It took a 19th century philosopher, Soren Kierkegaard, to attempt to answer this question.  Writing when he did, he was exposed to the various social, psychological, economic, and cultural factors which might propel men towards evil; and could therefore add to Augustine’s thought without damaging his theory.

Kierkegaard


Kierkegaard accepted Augustine’s conception of good and evil, especially that evil was caused by men turning away from God as a free choice, but said that this was because of a combination of pride and fear:

Kierkegaard thought that our freedom is itself a big nothing. He describes it as a yawning chasm at the heart of human existence, which has to be filled with decisions and actions. But of course this emptiness can never be filled. When we look down into the abyss of our freedom, says Kierkegaard, we feel sick and dizzy. We take a step back. All that nothingness makes us anxious. We are, in fact, afraid of our own freedom.

Kierkegaard agreed with Augustine that human beings are fundamentally proud, always wanting to overreach themselves, transgress any limits imposed on them, and deny their dependence on God. But he also emphasized that we are as fearful as we are proud – that we shrink back from the unlimited dimension of our being, which is freedom. This makes us very contrary creatures: we think we want to be free of all constraint, but at the same time this freedom terrifies us. Human sinfulness, says Kierkegaard, is a result of this unhappy combination of pride and fear.

This in itself still does not explain why some of us choose evil actions over good ones – both would help fill ‘the yawning chasm’. Kierkegaard offers the following:

Our failure to be good, he argues, is due to the way we deal with being both less and more free than we wish to be. Like stroppy, insecure teenagers, we crave independence, resent authority, and are scared to take responsibility for ourselves.

In other words, like teenagers who have not reached the maturity to analyze the ethical and moral consequences of their actions, we act impulsively.  The fear of a chaotic world of choice, unknown risk and consequence, and endless deliberation is such that we act before we think.

Jean-Paul Sartre echoed Kierkegaard in his philosophy of Being and Nothingness:

Sartre contends that human existence is a conundrum whereby each of us exists, for as long as we live, within an overall condition of nothingness (no thing-ness)—that ultimately allows for free consciousness. But simultaneously, within our being (in the physical world), we are constrained to make continuous, conscious choices.

It is this dichotomy that causes anguish, because choice (subjectivity) represents a limit on freedom within an otherwise unbridled range of thoughts. Subsequently, humans seek to flee our anguish through action-oriented constructs such as escapes, visualizations, or visions (such as dreams) designed to lead us toward some meaningful end, such as necessity, destiny, determinism (God), etc. (Wikipedia)

Where Kierkegaard sees this anguish translated into evil actions, Sartre sees them as neither good nor evil but as steps toward constructing a being, a personality, a character which gives some meaning to life, however artificial.

Although the vision of Sartre resonates more with me than that of either Kierkegaard or Augustine, I prefer that of Machiavelli and Nietzsche to any of these (see my blog post, above).  Machiavelli (and his advocate, Shakespeare) believed that there was no such thing as good or evil, just variations of human actions determined by human nature; our social, cultural, and economic ecology; and the precedent of history.  Nietzsche agreed, but added the element of will.  The greatest expression of any human being, confronted as he is by the valuelessness (meaninglessness) of life, is to follow his own primitive, basic, fundamental desires and ambitions to their logical conclusion.  Tamburlaine was a greater man than the grocery clerk.

Sunday, October 28, 2012

James Madison - A Need To Revisit His Founding Principles And Question The Very Nature Of Government

We live in an age of Big Government, and the issue today is only how to reduce its size from the behemoth it is, not to seriously rethink the recent trajectory which has moved the Republic from the vision of 18th Century American political philosophers (one of simple protection of individual freedom and property) to the bewildering array of government intervention in all spheres of American life.  James A. Dorn in The Scope of Government in a Free Society (Cato Journal, Volume 32, Number 3, Fall 2012) has shown just how far we have come:
The Framers of the U.S. Constitution took it as “self-evident that all men are created equal” and have “unalienable rights”—including the rights to “life, liberty, and the pursuit of happiness.” Those rights, as expressed in the Declaration of Independence, were incorporated in the broad rubric of property—understood by Madison as “everything to which a man may attach a value and have a right; and which leaves to everyone else the like advantage”. Madison held that the legitimate and primary function of a just government is “to protect property of every sort; as well that which lies in the various rights of individuals, as that which the term particularly expresses”.
Thus, in the Madisonian vision of government and society, there is no separation between good government, personal liberty, private property, and justice.
Madison’s constitutional republic was to be one of limited powers under a rule of law, rather than an intrusive state aimed at redistributing income and wealth via the democratic process.

Thomas Jefferson echoed Madison’s convictions:
The “sum of good government” is to “restrain men from injuring one another,” to “leave them . . . free to regulate their own pursuits of industry and improvement,” and to “not take from the mouth of labor the bread it has earned.”
The key element in this calculus was the concept of justice.  Since the protection of property – the locus of various rights of the individual – is enshrined in the Constitution, then a government system of justice to guarantee and guard these rights, should be the principal role of the State.
The basis of the U.S. experiment in designing a system of government to “secure the blessings of liberty” was the principle of consent. Within a regime protecting individual rights to life, liberty, and property, people would be free to pursue their own happiness without interfering with the equal rights of others.

There was no intent in the Framers’ minds to expand the role of government to anything more, especially not to be the guarantor of the General Welfare:
The purpose of the law of the Constitution, as stated in the Preamble was to “establish Justice, insure domestic tranquility, provide for the common defense, promote the general Welfare, and secure the Blessings of Liberty.” To do so, the Framers strictly limited the powers of the federal government, enumerating them in Article 1, Section 8. In particular, there is no evidence that the Framers envisioned the General Welfare Clause as a blanket provision for expanding the size and scope of government.
Madison himself, three decades later, reflected on the wisdom of those decisions which were taken to limit the role of government to preserving and protecting individual rights:
“With respect to the words “general welfare,” I have always regarded them as qualified by the detail of powers connected with them. To take them in a literal and unlimited sense would be a metamorphosis of the Constitution into a character which there is a host of proofs was not contemplated by its creators.”
More than anything, government was created to protect the individual from injustice; and the 19th Century political philosopher Frederic Bastiat said it best:
When law and force confine a man within the bounds of justice, they… do not infringe on his personality, or his liberty or his property. They merely safeguard the personality, the liberty, and the property of others. They stand on the defensive; they defend the equal rights of all. They fulfill a mission whose harmlessness is evident, whose utility is palpable, and whose legitimacy is uncontested.
Bastiat foresaw with great prescience the tumult and illogical chaos which would occur if and when government began intervening in the affairs of individuals, society and the market.  Writing in 1850 he said:
If you make of the law an instrument of plunder for the benefit of particular individuals or classes, . . . there will be tumult at the door of the legislative chamber; there will be an implacable struggle within it, intellectual confusion, [and] the end of all morality. . . . Government will be held responsible for everyone’s existence and will bend under the weight of such a responsibility.
Madison explicitly stated his opposition to government expanding its role beyond this fundamental obligation.  These comments are reflected in conservative views today:
A just security to property is not afforded by that government under which unequal taxes oppress one species of property and reward another species; where arbitrary taxes invade the domestic sanctuaries of the rich, and excessive taxes grind the faces of the poor; where the keenness and competitions of want are deemed an insufficient spur to labor, and taxes are again applied by an unfeeling policy, as another spur.
Note that Madison, although insistent that government not be instrumental in redistributing wealth, leaving all economic configurations to private enterprise, did not disavow taxation, only the use of tax policy to implement political objectives. This, of course, is the conservative objection to current tax policy and the insistent demand for reform.  In our tax laws the hand of government is everywhere, from the housing deduction, which government uses as an indirect dirigiste policy to promote the housing industry, to variable capital gains taxes, to tax breaks, credits, and tax holidays, all of which are designed and implemented according to the prevailing political agenda.

Madison, believing as he did in the primacy of personal property as the locus and repository of individual rights, also believed in the freedom to create wealth from its expansion. Above all, government should not stand in the way of such growth – nor should it intervene in the process:
Madison consistently argued that “in a just and free Government . . . the rights both of property and of persons ought to be effectively guarded.” He also recognized that doing so would “encourage industry by securing the enjoyment of its fruits.” In his speech in the First Congress on April 9, 1789, Madison emphasized that “commercial shackles are generally unjust, oppressive, and impolitic.” Moreover, he held that “all are benefited by exchange, and the less this exchange is cramped by Government, the greater are the proportions of benefit to each”.
Many progressives will reject or choose to ignore the Constitutional principles first enunciated by the Founding Fathers, saying that much has changed in the ensuing 225 years, that Madison and Jefferson could never have anticipated the size and complexity of the Republic, and that of course they would have given government more scope had they been able to foresee it.  This, of course, is self-serving denial.  If anything, a consideration of how to reconfigure the American political and economic landscape is even more important, for as Bastiat and others predicted in the 19th century, a rampant State is disruptive, erosive, and counter-productive.

Saturday, October 27, 2012

Confusion

We have all been confused at one time or another; and in fact confusion is as much a part of life as order and predictability.  I am confused when I am reading one of Shakespeare’s Histories for the first time, trying again and again to sort out who did what to whom and whose claim to the throne is legitimate.  Richard II was the son of Edward, the Black Prince, the eldest of Edward III’s five sons, and those sons and their descendants were all squabbling, demanding their rights, and setting the stage for the brutal Wars of the Roses.  My patience paid off, for the more carefully I read (and consulted the genealogical chart that showed lineage), the more the confusion dissipated, and the more the historical complexity receded into the background, as Shakespeare intended.  I could appreciate Richard II for who he was, the Poet King who spoke some of Shakespeare’s most poignant and insightful lines.

I am confused in a new supermarket, wandering like a blind man up and down the unfamiliar aisles trying to apply what I have learned from my Whole Foods in DC to the Kroger in Columbus, MS.

For years I have understood that limiting confusion was important.  The more I could limit the unimportant or non-essential choices in my life, the freer and more uncluttered my mind would be for more serious matters.  I found my philosophical match in India, where according to Hinduism, all basic, practical, and earthly activities should be carried out according to age-old Vedic prescriptions.  When to get up, bathe, make love, marry, eat, work, or pray was all written down and codified.  Nothing was left to chance.  The goal of life was, after all, not earthly success, but spiritual enlightenment, and the mind must be free to contemplate God.

I was watching The Fly, the modern remake with Jeff Goldblum of the Vincent Price classic of the Fifties.  The later version is funny, with a black, twisted humor, and you really like Goldblum as wiry hair pops out on his back, his teeth begin to fall out, and he has urges to swing from the rafters.

There is one scene in the movie which I remember clearly.  Goldblum’s girlfriend asks him why he never changes his clothes.  He is perplexed.  “I change my clothes every day”, he replies, and opens the armoire to display his rack of suits, shirts, ties, and shoes, all exactly the same.  “It helps me focus”, he says, “If I don’t have to think about what to wear”.  My sentiments exactly.

I am lucky to be married to a woman who indulges me and helps me keep my life clear of practical obstacles, like doing the taxes, balancing the checkbook, and hiring gardeners to keep the wild backyard in check.  Unaccustomed as I am to any kind of practical decision which I know will produce confusion (like buying anything, given the bewildering array of choices there are for everything today), I am paralyzed when buying a new DVD player, clothes, or God forbid, a used car.  My wife says that I take ‘the easy way out’ since I wait until I have holes in my pants before buying a new pair, or invest thousands in my old clunker just to avoid having to buy a new one; but this is not fair.  My ‘keep the rubber on the road’ philosophy helps me to keep the confusion out of my life.  The problem is that the older I get, the more my obsession with simplicity gets in my way.  I avoid the unpredictable, the risky, or the unforeseen.

Giles Fraser, writing in the Guardian (10.26.12), provides another take on confusion.  Confusion, and the anxiety it produces, forces us to accept simplistic arguments, thus limiting our universe to the patent, the predictable, and the obvious:

But confusion generates anxiety, and the desire for anxiety reduction is such that one is easily tempted to subscribe to any sort of explanation of things, so long as it regulates confusion and thus diminishes the anxiety.

This is what often deters us from stretching our worldview and imagining things turned upside down. Part of the reason I prefer religion over scientific atheism is that I find atheism to be less tolerant of confusion and disorder. It's easy to subscribe to some general prêt-à-porter philosophy that puts everything neatly into its ontological place; but the price you pay is a smaller, diminished world. Or maybe that's the gain: after all, a diminished world is less threatening.

For Fraser, reading Aquinas and Augustine, the tracts of Martin Luther, the Hebrew version of the Old Testament, and the Greek origins of Christian philosophy; experiencing the spare, absolute faith of Islam; or sorting through the multiplicity of Hindu gods to see the perfect order and symmetry of the religion, all in order to come to some conclusion about the existence and nature of God, is infinitely more rewarding than simply saying, as atheists do, ‘I don’t believe.’  There is an innate reward in wandering through complexity until one finds one’s way.

Of course, I'm not really trying to defend confusion as a permanent condition. Rather, I'm trying to defend its more illustrious cousins – puzzlement, wonder and adventurous curiosity. This posher side of the family are often snooty about confusion, wanting to distance themselves from its messy and circuitous ways. Yet Wittgenstein is surely right when he says that "I do not know my way about" is the basic form of a philosophical question.

I am a creature of three worlds – the Hindu one in which confusion reigns in an illusory world, and the only right path is to neutralize it through discipline and order; the Jeff Goldblum one, where non-essential choices must be reduced or even eliminated to allow the mind to use its limitless potential; and the Fraser one in which intellectual challenge and achievement is perhaps the most valid expression of human existence.

I am about to go to the supermarket, where I will head to the same aisle to buy the same tuna fish and condensed milk.  I will take the same route to get there that I have always taken, park in the same spot if it is available, and proceed through the store as I have done for 10 years.  All the while I will be sorting out a particular conundrum: why, really, did Othello murder Desdemona?