"Whenever I go into a restaurant, I order both a chicken and an egg to see which comes first"

Tuesday, July 31, 2012

Freaks, Dwarfs, And Boors

Simon Dickie has written an interesting book entitled Cruelty and Laughter: Forgotten Comic Literature and the Unsentimental Eighteenth Century, reviewed in the London Review of Books by Thomas Keymer http://www.lrb.co.uk/v34/n15/thomas-keymer/freaks-dwarfs-and-boors.  The 18th Century is most often thought of as the Age of Enlightenment, which provided our own Founding Fathers with the philosophical foundation for the new republic; but it was also an era in which laughing at deformity, misery, rape, and all manner of shortness, fatness, and ugliness was commonplace.  Two questions are posed by the book, but neither is sufficiently answered.

First, was the 18th Century, despite its historic intellectual contributions, the emergence of revolutionary democracy, and its remarkable rejection of medieval superstition and cant, nevertheless stupid, unevolved, and still culturally backward? Second, if the century was none of these things, then why did it unabashedly revel in what we would today consider unacceptable, unenlightened behavior?  Is there something about such jokes and hilarity that served a purpose which has disappeared today? Or do we all still laugh inwardly at ‘freaks, dwarfs, and boors’?

Compassion was invented in the 18th century, or so the story goes. Sensibility and sympathy were the wellspring of benevolent action and the glue of society (Adam Smith). There were no qualities more admirable ‘than beneficence and humanity … or whatever proceeds from a tender sympathy with others’ (David Hume). Fashionable poems deplored slavery and child labour, and wrung tears from the public on behalf of the distressed. Sterne assured his readers that his purpose in A Sentimental Journey (1768) ‘was to teach us to love the world and our fellow creatures better than we do – so it runs most upon those gentler passions and affections, which aid so much to it.’

Not everyone was sympathetic to all forms of woe – especially deformity. In Cruelty and Laughter, Simon Dickie mounts a compelling case against what he calls ‘the politeness-sensibility paradigm’, by resurrecting a jeering counter-discourse that revelled in human suffering and physical affliction.

With their unrepentant nastiness and gloating delight in other people’s pain, the ubiquitous jestbooks gleefully up-end the official values of the age. The humanitarian sensibilities we associate with the Enlightenment are nowhere to be seen. In compilations with titles like England’s Witty and Ingenious Jester, The Buck’s Pocket Companion and Fun for the Parlour, blind women are walked into walls, crutches are stolen from one-legged beggars, dwarfs are picked up and tossed from windows and starving paupers are fed shit pies.

This phenomenon was not, Dickie argues, simply a persistent holdover from the traditions of a cruder, rural, unsophisticated medieval age.  Not only was ‘jesting’ alive and well in the Tudor period, it remained vibrant, popular, and flaunted in the 18th Century.

Dickie also insists that 18th-century jestbooks weren’t just blasts from a barbarous past. They were produced in greater numbers than ever, replenished by new material that statistically outweighed the old. With their pointedly contemporary settings and reference points, their topical jokes about London theatre, parliamentary business and the latest fashions, many went out of their way to flaunt their modernity.

Nor were these jestbooks popular only with the lower classes – with Falstaff and his cronies at the Boar’s Head Tavern – but with the aristocracy as well, the very class that produced out of its ranks the achievements of the Enlightenment:

The [jestbooks for upper-class readers] were conspicuously upmarket productions, well printed on good paper, decorated with engraved frontispieces and rococo ornaments, and priced so as to exclude all but genteel readers with disposable income. The content matched the price point: uppity tailors bilked by fashionable clients, dim footmen humiliated by boorish sparks, the shiftless poor getting their comeuppance from high-born pranksters. Evidence survives in sale catalogues, library stamps and personal inscriptions of strong demand among the elite for works of this kind. They were consumed not only by dilettantes or libertines, like Horace Walpole, John Wilkes and James Boswell, but also by landowners, clerics and society hostesses – Hester Thrale, Samuel Johnson’s confidante, owned several jestbooks and comic miscellanies.

The most Dickie allows himself is a shudder of donnish distaste: ‘One wonders how anyone could have laughed.’ Yet laugh they did. The thriving subgenre of ‘ramble novels’ with titles like Adventures of a Rake and Memoirs of the Noted Buckhorse has none of the subversive richness of Darnton’s libertine bestsellers, and most are no more than episodic vehicles in which a boorish prankster-hero causes havoc and inflicts humiliation wherever he goes. Far from avoiding these novels, elite readers went at them with relish.

The jestbooks, with their sexual humor and rape jokes, were popular with men and women alike.

Women not only consumed but energetically produced jokes about victims enjoying rape or being humiliated in court. Jestbook assumptions are central to works like Lady Mary Wortley Montagu’s ‘Virtue in Danger’, a sarcastic ballad on a real-life society case of 1721, and to the startling premise of Eliza Haywood’s novel of 1727, The Lucky Rape. Decades later more decorous women writers were still using the basic tropes of misogynist humour. Comic scenarios about scheming maidservants and bogus chastity were routine in the novels of Charlotte Lennox, who once acted on her feeling that hussies were there to be beaten, and had to defend herself at the Middlesex Sessions. Even Jane Austen said of a neighbour’s late-term miscarriage: ‘I suppose she happened unawares to look at her husband.’

Finally, even disabled writers enjoyed writing humorously about their deformities or disabilities:

Some of the most hostile mockery of disability came from writers who struggled with it themselves. Fresh from a stage lampoon of Swift’s one-legged bookseller George Faulkner, the actor-playwright Samuel Foote fell from his horse and lost a leg, provoking sly jokes from Johnson about ‘depeditation’ and ironic consolation poems with missing (metrical) feet. Foote replied with a new comedy, The Lame Lover, and took the title role, Sir Luke Limp, himself. Lady Mary Wortley Montagu, disfigured by smallpox, traded insults in print with Pope, whose body – or, as she put it, ‘wretched little Carcass’ – had been stunted and twisted in infancy by Pott’s Disease. Christopher Smart, whose Jubilate Agno memorably deplores the vilification he received as a supposed lunatic – ‘For silly fellow! silly fellow! is against me’ – was an indefatigable collector and disseminator of deformity jokes. The famously hideous actor-manager Theophilus Cibber turned his ugliness into a lifelong performance, hamming it up as Pistol, Abel Drugger and the role devised for him by Smart, Mynheer Von Poop-Poop Broomstickado.

So, what was going on? In an early book on the psychology of laughter, Boris Sidis suggests that laughing at the deformed is an affirmation of superiority:

Why is mimicking a person or an animal ludicrous? Because the imitation is of something which is regarded as inferior. We do not laugh at the perfect imitation of a beautiful song, nor do we ridicule the perfect imitation of a human figure whether sculptured or painted, but we laugh at defects, at the representation of awkwardness, of clumsiness, and silliness. In mimicry it is not simply the imitation of any kind of gestures, or of action, or of mannerisms, or of speech, that is regarded as ludicrous, but it is only certain definite manifestations, only certain motor activities or postures that excite laughter. The imitation in mimicry excites our laughter because the gestures, postures, speech, and phrases imitated are considered as silly, senseless, stupid (Boris Sidis The Psychology of Laughter 1913)

Physical deformity was thought to be an indication of moral depravity:

When we mimic persons and their modes of behavior it is to bring out in the language of gestures the moral and mental inferiority, the inner senselessness of the person (Sidis)

Other humor researchers suggest that there is more to humorous disparagement than feelings of superiority:

A second class of humor theories, whose roots lie in classical Greek and Roman rhetorical theory, includes those theories of humor based on malice, hostility, derision, aggression, disparagement, and/or superiority. Included in this group are ethnic, racial, and "dumb" jokes. Scholars, theorists, and researchers who espouse theories of humor based on hostility or malice frequently cite the similarities in bodily positions between aggressive behavior, such as fighting, and laughter to substantiate their claims (Amy Carrell, University of Central Oklahoma, 1998)

Igor Krichtofovitch (Humor Theory, 2005) agrees, and adds further insights:

And don’t most of us experience intense euphoria when a well-placed joke puts our opponent in a funny, unfavorable, frequently demeaning position? Moreover, to do this it’s not at all necessary to demonstrate your real mental superiority. The power of the joke is that it does not necessarily have to be well-argued. Its purpose is to psychologically elevate the joker over his rival, and to place the latter in a foolish position. An important and irrefutable observation to which we will refer many times is the fact that the joker and his target perceive the joke, especially a particularly offensive one, entirely differently. The victim, as a rule, is not up to laughing. And this once more speaks to humor being a type of a weapon in the battle for social status.

According to the theory of psychoanalysis, in certain situations, humor and its derivative laughter play to the aggressive behavior of groups. S. Freud noted that for the tendentious humor, three persons are needed: first, someone who uses laughter (wit); second, a target for aggression; and third, someone who receives the goal of laughter (wit) - the extraction of pleasure (‘I’ and ‘It’).

Freud also supposed humor to be one of the manifestations of instincts – sexual and aggressive. According to Freud, humor is as much a means of the attraction of the female as the magnificent tail of the peacock or the bright comb of the rooster.

There are at least 1000 more citations on humor theory, but a sampling of them shows a general consensus on the obvious – we laugh at deformity because we are glad we do not look that way, find deformity a caricature of normal life and therefore funny, and have a natural tendency to marginalize ‘the other’.

A simpler theory is that some things are simply funny:

“It seems surprising that people laugh at the misfortune of others. For instance, a man is walking down a winter street, slips, wildly flails his arms, and finally falls. The reaction of the spectators is varied, but after the victim stands up and sheepishly brushes the snow off his clothes, the majority of the on-lookers smiles or laughs – the incident turned out to not be serious. The fall itself turned into a comical event, breaking the monotony of the rhythm of everyday life.”

With this example, Dmitriev (a Russian humor theorist) supposes that “the spectator relaxes (nothing grievous or dangerous has happened!) and begins to laugh.”  (Krichtofovitch)

If any of these theories is accurate, then we are no different from the citizens of the 18th Century.  We moderns all laugh at the same deformities, differences, and distortions of life as our ancestors.  We just do it internally instead of externally.  Most of us tell the ‘racial, ethnic, and dumb jokes’ referred to above, but save them for friends.  Given the times, we are less likely to tell the longer joke (“A woman and a dwarf walked into a bar….”) and more likely to toss off one-liners; but these are still jokes ‘at the expense’ of someone else.  Most of us will have to admit that it feels good, in the current atmosphere of Political Correctness, to tell these jokes, make these cracks, and laugh at them.

While one conclusion is obvious – people have laughed at deformity, sexuality, and perversion for millennia, and for the same psychological and sociological reasons – the other is not.  We have not progressed beyond the 18th Century as many ‘Progressives’ would have us believe.  We have not achieved a cultural superiority thanks to a modern enlightenment and a new understanding of social dynamics.  We are the same human beings with the same human nature and psychological and social needs as the Romans, Greeks, and probably the cavemen long before them.  We have only decided to repress and submerge our natural inclinations for the sake of an idealistic view of society.

Does that make the inclinations go away?  No.  Nor does increased tolerance for “disparagement humor” mean that individualized attacks of ridicule should be condoned.  They should not; but listening to comedians who make us laugh at the very distortions we laugh at in private is only admitting the truth about ourselves. Laughing at others is no one-way street, for it means tolerating the laughter pointed at us.  No one is immune from pointed jokes; and in a way this openness and self-generated tolerance may be a better way to promote real acceptance of everyone.

Monday, July 30, 2012

Radicals And The Left

Sean Wilentz, in his review of Michael Kazin’s book American Dreamers: How The Left Changed A Nation, discusses the role of radical politics and the impact it has had on American society: http://www.nybooks.com/articles/archives/2012/aug/16/left-vs-liberals/?page=2

For Kazin, the left consists of anyone who has sought to achieve, in his words, “a radically egalitarian transformation of society.” The definition embraces an enormous array of spokesmen and causes, and Kazin’s account runs from the abolitionists and workingmen radicals of the Jacksonian era through a succession of socialists, women’s suffragists, Greenwich Village bohemians, and civil rights protesters, down to today’s left-wing professoriat.

The radical Left has succeeded far less than is commonly thought in achieving these goals, largely “because it has often been out of touch with prevailing values, including those of the people they wish to liberate.” He concludes that American radicals have done more to change what he calls the nation’s ‘moral culture’ than to change its politics.  The rap on these radicals is that they were so confident of the righteousness of their ideas that they underestimated the influence of those who were more cautious or rejected them outright.  As importantly, because these ‘American dreamers’ had little political acumen, or even the desire, patience, and persistence to push their ideas into policy, they were more often than not coopted by the liberal establishment.  This liberal elite, in their view, watered down their vision and marginalized those who originated and proposed it.

Kazin argues that the liberal components of the governing elite have supported major reforms strictly in order to advance purposes of their own. Abraham Lincoln and the Republicans, he writes, embraced emancipation only halfway through the Civil War, when it became clear that doing so “could speed victory for the North” and save the Union, their true goal. Franklin D. Roosevelt endorsed labor’s rights only when he needed to court labor’s votes.

Even when they are successful, Kazin writes, the radicals—“decidedly junior partners in a coalition driven by establishment reformers”—end up shoved aside as the liberals enact their more limited programs and take all of the credit. Prophets without honor, the leftists return to the margins where they and later radicals dream new and bigger dreams until another social movement jars the establishment.

Governing elites by their very nature have the power, money, and authority to coopt what they want and take credit for it; and to reject, marginalize, or discredit what they do not; and only accept ‘radical’ ideas when their interests and the temper of the times coincide with them. 

Perhaps more importantly, what neither author nor reviewer acknowledges is that there is rarely anything truly radical.  More often than not, what radicals of either Left or Right propose as unique has historical precedent.  Neither emancipation nor abolition was a new idea in 1864:

The Spanish government enacted the first European law abolishing colonial slavery in 1542.  In the 17th century English Quakers and evangelical religious groups condemned slavery as un-Christian; in the 18th century, abolition was part of the message of the First Great Awakening in the Thirteen Colonies; and in the same period, rationalist thinkers of the Enlightenment criticized it for violating the rights of man. Somersett's case in 1772, which emancipated a slave in England, helped launch the British movement to abolish slavery.

Revolutionary France abolished slavery in 1794; Haiti achieved independence from France in 1804 and brought an end to slavery in its territory, establishing the second republic in the western hemisphere. Britain banned the importation of African slaves in its colonies in 1807, and the United States followed in 1808. Britain abolished slavery throughout the British Empire with the Slavery Abolition Act 1833. (Wikipedia)

It took so long for this ‘radical’ idea to be put into practice because of the complex cultural, social, political, and economic factors governing a divided American society.  While Lincoln believed in the rights of man and lamented the deprivation of those rights through slavery, his political position had to evolve and mature through a series of justifying arguments – Constitutional, moral, religious, and political.  More than anything else, his commitment to the Union deferred the Emancipation Proclamation.

In some cases, radical ideas were neither slow to mature nor coopted by the political center, but simply idealistic, unrealistic, and disastrous expressions of zealotry. The Congressional Republicans who put Radical Reconstruction in place, desirous of punishing the South once and for all, disabling it, and righting the wrongs of a century, should have known that the suddenly disinherited plantation owners, seeing their former slaves cavort in State Legislatures, would not take it lying down:

After the Civil War, for example, various radicals tried to move beyond emancipation to ensure full economic as well as political equality for ex-slaves. Most of the ex-slaves, however, hoped that Reconstruction would provide, Kazin writes, “a chance to exercise the same rights white citizens had long taken for granted”—hopes that were “hardly revolutionary,” that aimed “to fulfill the promise of liberal capitalism.” As it turned out, ensuring even basic civil and political rights for Southern blacks required extensive federal force that secured a restive interracial democracy in the South until a violent counterrevolution by Southern whites overthrew Reconstruction in the 1870s.

Even before the war, the radical Abolitionists did Lincoln more harm than good; and he, trying to balance the anti-slavery sentiments of the North, the rebellious South, and his desire for Union above all, had more fights with these Northerners than with the most cantankerous Southerners.

Radical influence in the New Deal was far less important than usually thought.  The long Depression had taken its toll.  America in the early 30s was still rural, sparsely populated, and untouched by government.  The Depression wiped out private savings, and the culprits were clearly Wall Street, speculators, reliance on margin, and living beyond one’s means.  Left with nothing and disillusioned with the private sector, the American public turned to Big Government – the only American institution with money or the power to print it.  Roosevelt’s ideas were not radical – they were logical, historical derivatives.  Yet Kazin insists on casting the era within the misleading framework of radicalism vs. liberalism:

Kazin understands that liberal reformism has existed independently of radical agitation—he cursorily calls the New Deal reforms “liberal achievements,” and mentions a stillborn liberal “new age of reform” in the 1960s—but his book chiefly makes liberalism’s ideas seem like weaker versions of the radicals’ ideals, advanced as responses to the radicals’ protests.

Perhaps as importantly, neither author nor reviewer places Leftist radicalism within the larger context of history.  While one might have lauded Roosevelt and his reformers in 1933, many of his programs have been either discredited or viewed as only temporary solutions to immediate problems.  The radical agenda – ‘creating a radically egalitarian transformation of society’ – had salience in the Depression because everyone was equal, if equally poor; and what better time than then to raise all boats?  While many of the programs were necessary then and are in force today (bank deposit insurance, bank regulation, Social Security, Fair Labor Standards, etc.), the encouragement of unionism, the reliance on public sector social programs, and the consolidation of federal power are looked at much more circumspectly now.

There was nothing radical in the idea of civil rights either, for this, too, had been a subject of debate during the Enlightenment, which in turn took its inspiration from Classical Greece and Rome; and more recently in the debate surrounding the writing of the American Constitution.  The de jure emancipation of the slaves (the 13th Amendment) was enacted in 1865, but there were few in either North or South between then and the de facto emancipation of 1964–65 (the Civil Rights and Voting Rights Acts) who believed that black people were politically equal to whites.  Women’s suffrage was formally proposed as early as 1867 and debated long before that.  In the United States women were considered unequal until the passage of the 19th Amendment in 1920; but again the idea had to mature in the general population until the 1960s.

In both cases, the ideas behind the ‘radical’ movements were not new at all.  As importantly, contemporary social, political, and demographic factors were far more important to the activist movements than any individuals.  Abbie Hoffman, Mario Savio, and Mark Rudd were facilitators, but the real force behind the civil rights movement of the Sixties was the Baby Boom.  There were more twenty-somethings alive at the time than ever before or since.  These Americans grew up in the repressive Fifties, by the end of which authoritarian social rule began to weaken as the economy rapidly grew, social and geographic mobility increased, and exposure to Europe and other countries became possible. Education became less a means to an end, characteristic of Depression-era parents, and more an end unto itself.  That is, students had the luxury of taking political philosophy seriously and thinking about moral and ethical principles as they applied to America.

A major omission of both Kazin and Wilentz is any reference to the radical right, which has had its own share of influence.  Ronald Reagan’s challenge to big government in the early 80s certainly changed the American landscape just as profoundly as Roosevelt and the New Deal. His muscular defiance of the Soviet Union, risking nuclear confrontation, was a radical departure from the policies of co-existence and put the final nail in the coffin of Leftist love of Soviet ‘egalitarianism’.  To be sure, and consistent with my arguments about the influence of the Left, neither idea was radical; their times had come.  More and more Americans were seeing the failure of Great Society programs and their tax dollars going into the pockets of the unsupervised managers of them.  The war in Vietnam soured national faith in government, and Jimmy Carter espoused the worst negative, defeatist attitudes of Washington.  The Soviet Union by the time of Reagan’s challenge was collapsing, imploding, and near its end.  Reagan’s stance in the context of that dissolution was not radical, but inevitable and good politics.

I agree with Kazin’s conclusion that Leftist radicalism was more bark than bite, and that there were many determining factors other than the supposed visionary perception of radical reformers.  There is no doubt that individuals and ideas play a role in societal change.  In the popular democracy of America, we cannot rely on the general public to have any new or great ideas; and thus we rely on those with them to speak out.  Although all the factors enabling change may be in place, it often requires someone with charisma to ignite the fire.  Just don’t take too much credit is all.

Sunday, July 29, 2012

Poverty And Why We Can’t End It

Poverty in America continues, and while it has been higher – 15 percent in 1983 – it now stands at 11.3 percent, the second lowest rate on record and only a fraction over the very lowest since records have been kept (11.1 percent in 1973). So it isn’t that poverty has increased as a percentage of the population; the number of people in poverty has increased because of population growth. 

While the total number is important, it is the percentage rate that really tells the story.  First, the fact that the proportion of people in poverty today is near its lowest in 40 years means that both Democrats and Republicans share the responsibility.  Both Clinton and George W. Bush presided over America during years when the poverty rate was higher than it is now.

Second, even though the country has radically changed since 1973 – greater number of immigrants, more participation of women in the labor force, fewer manufacturing but more high-tech and especially service jobs, etc. – the poverty rate has not.

Third, and perhaps most importantly, given the media attention attached to it, the poverty rate has remained relatively low despite the increasing income and wealth disparity in the country.  Despite the brouhaha, the concentration of wealth is not contributing any more to poverty than it has – if it has – in the past.

Peter Edelman tries to analyze these and other factors in today’s (7.29) New York Times http://www.nytimes.com/2012/07/29/opinion/sunday/why-cant-we-end-poverty-in-america.html?ref=opinion.  Unfortunately he spends more time stressing the importance of those government programs – Medicaid, Social Security, and Food Stamps – which keep people from falling into more extreme poverty than he does suggesting how to generate wealth and income among the poorest Americans. He offers a number of reasons why poverty persists; but rather than address the structural issues which underlie them, he makes implicit assumptions about government failure.  In his view, it is the responsibility of government to raise people out of poverty just as it has prevented their further fall:

Why have we not achieved more? Four reasons: An astonishing number of people work at low-wage jobs. Plus, many more households are headed now by a single parent, making it difficult for them to earn a living income from the jobs that are typically available. The near disappearance of cash assistance for low-income mothers and children — i.e., welfare — in much of the country plays a contributing role, too. And persistent issues of race and gender mean higher poverty among minorities and families headed by single mothers.

The question is: why do people work at low-wage jobs, and what can be done to raise family income?  First of all, as of 2010 the number of illegal aliens in the country was approximately 11 million.  While this is only about 3 percent of the total US population, illegal immigrants make up 5 percent of the workforce (NPR, 3.7.06 report).  These illegal immigrants have been willing to work for the minimum wage or below, thus forcing down overall wages in certain industries.  Recent legal immigrants, many with little or no English and little education, are unlikely to demand higher wages and benefits. Until these illegal and legal immigrants return home (this is already happening for Mexican immigrants), their wages will remain low.

Second, while there are higher-level jobs available, especially in the high-tech industries, which are rapidly increasing as a proportion of GDP, employers find few qualified American applicants.  Major corporations like Apple, Microsoft, and Intel have gone on record lamenting the skilled labor shortage in America.  The public education system is broken, few children graduate at or above grade level, and few have been given the risk-taking and entrepreneurial skills demanded by competitive business.  A large proportion of the black population still lives either in poor, dysfunctional neighborhoods or in more abject poverty in the South.  Unless these systemic problems are more directly confronted by community leaders, rejecting the corrosive tradition of entitlement and focusing on achieving majority American norms, residents of these neighborhoods will remain unemployed or employed in low-wage jobs.  Charles Murray has recently written a book (Coming Apart) which chronicles the plight of the white rural underclass and suggests that it suffers from the same dysfunction as the urban black underclass, especially the breakdown in family structure and the consequent loss of majority-norm values.

Edelman cites “persistent issues of race and gender” as contributing factors to poverty, and once again suggests that it is government’s job to address them.  Gender is a non-issue in 2012.  While the glass ceiling no doubt exists in certain professions, the recent appointment of a young woman as CEO of Yahoo is but one example of talent trumping sex.  The proportion of women in law school, medical school, and other educational institutions is in many instances higher than that of men.

Race is an issue, of course, and while landmark Supreme Court decisions have guaranteed de jure equality and desegregation, reality is far from those ideals.  Crossing the Anacostia River in DC is like entering a Third World African country.  All DC wards across the river are over 90 percent black, predominantly poor, with social indicators far below the norm.

Fifty years of social programs have made very little dent in minority poverty, employment, incarceration, and health rates.  There is no doubt that if America could solve the problem of the inner cities, it would have done so by now.  Edelman gives no answers because he cannot; and he, like the rest of us, has spent the last five decades stumbling over this conundrum.

Only one factor related to poverty mentioned by Edelman – more restrictive social programs – is clearly and directly in the hands of government; and yet there are strong reasons for limiting such programs.  Welfare programs, beginning with Bill Clinton, have become less the permanent features of poor communities they once were, and more the temporary investments envisaged when they were created.  A de facto permanent dole discourages social mobility, job searches, and income prospects.  Welfare today is becoming more efficient, but it is still far from the kind of intervention, originally envisioned, that can enable people to rise out of poverty.  Social Security is part of a safety net with wider spaces in it; but it is the more productive model – better education, greater adherence to majority norms, better jobs, higher wages, more savings – that should be considered before expanding government welfare.

Edelman concludes with a litany of ‘progressive’ solutions:

We know what we need to do — make the rich pay their fair share of running the country, raise the minimum wage, provide health care and a decent safety net, and the like. But realistically, the immediate challenge is keeping what we have. Representative Paul Ryan and his ideological peers would slash everything from Social Security to Medicare and on through the list, and would hand out more tax breaks to the people at the top. Robin Hood would turn over in his grave.

While reforming health care is necessary and should be a priority, the other items on his list are not.  ‘Fair share’ is not easy to define when the economic contributions of the wealthy are considered.  Raising the minimum wage, according to many business owners, will depress job creation.  The definition of ‘a decent safety net’ is also unclear, given my observations above; and the ‘slashing’ of Medicare and Social Security – i.e., reassessing both payments and benefits for rich and poor – is absolutely necessary in a society which has shunned taxes.

Poverty is a structural issue, not one that can be solved by government programs. 

Saturday, July 28, 2012

Playing God–Synthetic Biology

Adam Rutherford wrote in the Guardian (7.27) http://www.guardian.co.uk/commentisfree/2012/jul/27/synthetic-biology-playing-god-vital-future about scientific breakthroughs in genetic modification (GM) and the particularly strident criticisms that have come from many quarters concerning this innovative and far-reaching technology.  For years critics and lay people alike have resisted any kind of modification of plant or animal DNA, expressing fears that once the ‘real’ thing disappeared and was replaced by a human-tinkered product, the planet would be doomed.

Dracula corn, Mephistopheles wheat, and poor Dolly the Sheep would reproduce and spread quickly and completely; and when some glitch in their recombination inevitably showed up, we would have no way to repopulate the land with good, old-fashioned, American plants and animals.  This, of course, with no regard to the benefits realized from GM, such as the rapid production of insulin, insect-resistant wheat, fast-growing rice, and a whole host of other improved plants, animals, and medicines.

Environmentalists, religious figures and sections of the media regularly use the phrase as a handy stick with which to beat those in the field. Scientists, they claim, are foolishly meddling in matters that should be left to the gods or nature.

That accusation has been made in attacks against many of the major scientific advances of the modern era, including Watson and Crick's description of the structure of DNA in 1953; the birth of the first IVF baby, Louise Brown, in 1978; the creation of Dolly the sheep in 1997 and the sequencing of the human genome in 2001. In all these scenarios, it's not clear exactly what "playing God" actually means.

It is not hard to imagine the fear and outrage caused by synthetic biology.  Not only are scientists tampering with ‘the real thing’, they are creating a totally artificial ‘thing’, letting loose into the environment even more distorted, deformed, and ghoulish varieties of things we consume.

If there were those who said we were playing God when it came to recombining DNA, substituting a few fragments of one organism for those in the nucleus of E. coli, there are legions more who are convinced that the human race has finally, once and for all, crossed the heretical line dividing God’s prerogatives from ours.  At the same time, they ignore the benefits:

Researchers in California, for example, have created synthetic circuits for yeast cells that produce a chemical called artemisinin, a key anti-malarial drug. This will be cheaper than getting it from the plant Artemisia annua, the current production method.

Nasa is investigating ways to create bacteria that counter the effects of radiation sickness in astronauts. Meanwhile, a US–Swiss group has engineered a genetic circuit designed to detect and destroy cancer cells without inflicting the unintended damage caused by chemotherapy and radiotherapy.

Synthetic biology is but the latest in a series of human attempts to ‘play God’:

Yet there is almost no aspect of human behaviour that isn't some form of manipulation of the environment for our own purposes. Farming, which we've been doing for more than 10,000 years, is quite the opposite of natural. Breeding, known scientifically as "artificial selection", is the process of mixing genes by design to engineer cheap and plentiful food.

Detractors use the phrase "playing God" to provoke emotive opposition without defining what it is about synthetic biology that is qualitatively different to the previous advances that they enjoy and benefit from every day. Should we go back to the time before humans started playing God through their development of sanitation, vaccines and measures to counter widespread child mortality?

This is a very simple and elegant argument in favor of scientific achievement and progress.  Human beings, because of our evolved state of intelligence, curiosity, enterprise, and self-preservation, have always looked for ways to change the environment in which we have lived and always will.  As importantly, there has never been a scientific discovery which offered significant benefits to humanity that has been rejected and buried for moral, ethical, or religious reasons.  Scientific discovery has a life of its own, particularly in a country like America where we value progress and human betterment overall.  Heresy and apostasy are things of the past, and burning people at the stake for antisocial behavior and novel ideas is finished; but the inheritors of 17th Century New England are still alive and well and still want to burn – figuratively, of course – those who want to ‘play God’.  Stem cell research is currently in their cross-hairs; and once they figure out what geneticists are doing, they will add their fundamentalist voices to the liberal ‘progressive’ environmentalists on the Coasts.

Our social and philosophical conservatism is everywhere.  Virtual reality is still seen only within the context of adolescent computer games; but when these same conservative fundamentalists get wind of the eventual brain-computer link which will allow us to live exclusively in a computer-mediated virtual world where any combination of fantasy, reality, and history becomes the coin of the realm, the opposition will be incendiary.

All to no avail, of course, because scientific discoveries are both the result of consumer demand and shapers of it.  Scientists will develop virtuality because we want it; and each new technological advance will stimulate more demand.  The same with synthetic biology.  Eventually we will be able to create totally synthetic human beings modeled and crafted according to our own very human vision.  Will they still be human?  Of course.  The transformation will be progressive and incremental; and at each stage of development, there will be a brief firefight between religion and practicality.  Practicality – satisfying consumer demand – will always win.

Thursday, July 26, 2012

Walmart Is Good For You

Walmart does what American consumers want – provide quality goods at rock-bottom prices.  This enables people at the lower end of the economic scale to free up disposable income for other expenditures such as health, child care, and education.  Despite ill-placed community opposition to Walmart expansion into underserved, poor urban areas, these new stores will be a boon to residents who have always been hostage to overpriced small businesses which, in order to stay in business, operate on the margins – passé food, products near their pull-by dates, and low-nutrition staples with high profit margins.

There is no doubt that these small enterprises will suffer when Walmart enters a neighborhood, but such has been the way of small business for 100 years. The recent example of Eastern Europe is relevant.  Right after the fall of the Soviet Union, kiosks sprang up on street corners in Bucharest, Sofia, and Tbilisi.  They offered a potpourri of items – a bottle of whisky, hairpins, and a few ballpoints.  As incomes rose and demand grew, these kiosks were replaced by small stores which offered a greater selection at a better price.  Eventually these small stores gave way to mini-marts and super-mini-marts which offered even greater selection at lower prices.  Even France, despite untenable subsidies to support small business, has gone the way of the countries around it and supermarkets are in every neighborhood.

Walmart-bashing has been going on for years, despite the company’s reforms.  While it has been attacked for eliminating health care benefits for part-time workers, it maintains a reasonable plan for full-time workers.  Many businesses have taken this step, and for two reasons: a) it is good and common business practice to reduce costs where possible, especially in a recession; and b) the real problem is the American health care system, or lack of one.  If we had universal health coverage to which all Americans contributed and from which all benefitted, the cost to individual businesses – now staggeringly high – would be reduced.  Employees of all categories would benefit, and the companies would be more competitive with European firms whose workers are covered by the state.

Part-time workers, far from the exploited masses they are often described as, fall into ‘involuntary’ (those who work part-time for economic reasons) and ‘voluntary’ (those who work for avocation, to supplement family income, as students, etc.) categories.  According to the latest Bureau of Labor Statistics report (http://bls.gov/news.release/empsit.t08.htm), the number of voluntary part-time workers is over twice that of involuntary workers.

Part-time work is beneficial to both employer and employee.  The employer can hire at low cost and retain the flexibility to staff up or down depending on the economy and the market.  The part-time employee benefits because he or she can get some kind of employment in a down market and gain experience and work-related credibility.

Walmart has also been criticized for its low wages, often very little above minimum wage; but it is not alone among businesses trying to cut costs, increase sales, and maximize profits.  Why should Walmart pay employees more than the market demands?  It benefits hundreds of thousands of low-skilled workers who could not get a job elsewhere.  Moreover, despite the immigration hysteria, Latino immigrants, like their European counterparts before them, work at low-paying jobs where English and urban-type skills are not required, and within a generation speak English, acquire education, and are more employable.  Is economic mobility as fast as could be hoped?  No, but this has more to do with the deficient public educational system, a broken health care system that forces unfair out-of-pocket expenses, and an insatiable consumer demand for cheap products which forces companies to pay low wages.

In other words, Walmart will rely more on well-paid workers when illegal immigrants return to Mexico (which they are now doing because of the rapid growth rate of that country and the poor economic performance here), when health care is no longer a burden, and when disciplined, efficient, and high-productivity workers are the norm, not the exception.

The Guardian picks on Walmart for yet another reason – abusive practices in the supply chain.  While the customer sees only huge stores with cheap goods, Walmart management sees the supply chain.  It must scour the market for goods at the lowest price and the highest quality; ship them from manufacturer to consumer in the fastest and cheapest way possible; manage inventory and cash flow as efficiently as possible; and tune labor and capital to a delicate balance.  Of course truckers and warehouse employees are also paid low wages for long hours, but they are subject to the same laws of the marketplace as Walmart – they have turned their backs on corrupt unions and demanded open shops, thus losing the strong-arm protection that they once had; and they are subject to the same vagaries of the marketplace as anyone else. Juan De Lara, writing in the Guardian http://www.guardian.co.uk/commentisfree/2012/jul/25/walmart-human-cost-low-price-goods, laments:

Reports from warehouse workers and truck drivers in places like Chicago and southern California show how the ruthless pursuit of low-cost goods has also created a downward spiral for workers who distribute consumer goods in the United States.

The ‘reports’ – actually one article from The American Prospect – offer no solutions other than mass protests, community sensitization, and government intervention.  The truckers, who operate under the same conditions as Walmart employees, with low wages and minimal benefits, are being exploited; and since everything else has failed and the government has not come running to the rescue, the only way to rectify the situation is to resort to old-style union activism:

For now, however, Change to Win's strategy is to enlist the entire community of warehouse workers, perms and temps both, on the theory that a significant disruption of the retailers' supply chains will compel them to come to terms, whatever the workers' classification. It's an all-or-nothing campaign. "You have to build a campaign like the janitors had, that gets and enforces a communitywide contract. We'd want a $15 hourly minimum with health benefits," says Tom Woodruff, who heads the federation's organizing center in Washington, D.C. "You'd have to organize it all -- the whole industry out there. Of course, you'd have to be half crazy to do it." http://prospect.org/article/shipping-point

This of course will never work.  Even if the truckers could be organized in a way that approximated even a tenth of the organization Jimmy Hoffa achieved, today’s consuming public would never stand for ‘significant disruption of the retailers’ supply chains’.

What is amazing to me is that with the level of imports flowing into this country, swelling our trade deficit by the hour, these truckers are still working for low wages.  That is, Walmart and its customers depend on the movement of freight.  Who is responsible for the current shift in the balance of power from labor to management?  Is it the ‘predatory’ practices of businesses alone? Or do labor and the American consumer share in the cause of the phenomenon?

Walmart, Nike, Apple and other big multinational corporations have been unfairly attacked for what have been called ‘substandard’ labor practices in China and other countries in Asia.  It is the governments of these countries – and, by the way, China is a big boy now – who set these standards and must balance workers’ rights and benefits against the rigors of international competition.  If China doesn’t offer Nike a good deal, the company will go to Indonesia.  If the standards are lower there, whose fault is that?

The third force in this calculus is the American consumer.  Successful marketing campaigns have been developed to shame Nike, Apple, and others into assuring ‘fair, just, sustainable’ products, and they have renegotiated contracts with their suppliers and subcontractors to accommodate this concern. Walmart has been somewhat immune from this lobbying because of two factors.  First, it subcontracts with tens of thousands of suppliers all over the globe; even the best-meaning social activist cannot possibly track labor conditions in all of them.  Second, brands like Apple and Nike are high-end product manufacturers: if I am paying $200 for a sneaker, then I want to be sure it is produced properly, and I don’t mind paying a few cents more.  Walmart is just the opposite – it offers thousands of low-cost products and makes its profit through volume.  In other words, it is a moving target on two fronts.  So it continues to negotiate legal contracts in countries which alone are responsible for the welfare of their workers.

The Guardian article argues against itself in the following passage but inadvertently raises an important point:

Consuming mountains of cheap goods has also produced huge environmental costs, particularly for poor communities located close to distribution hubs. According to the California Air Resources Board, an estimated 3,700 Californians die every year because of medical issues related to the diesel ships, trucks and trains that deliver the goods we consume.

Which way do you want it? Truckers working at higher wages because of increased volume of traffic, belching tons of pollutants into the air? Or depressed demand, lower wages, cleaner environment?

The inadvertent point is that the American consumer, often thought of as the innocent receiver/purchaser of goods, is in fact the prime culprit in environmental pollution and low wages.  We want a kaleidoscopic array of products; and we want continual upgrades, novelty, and change.  We want all this as cheaply as possible, so we support the federal subsidies which keep water cheap, roads and gasoline cheap, energy cheap.  We don’t really care about low-wage workers – at least those of us in Washington or other corridors of power – and want what the consumer wants…And, of course, the shareholder.

The Guardian writer concludes with the most idealistic, pie-in-the-sky, faded ‘Progressive’ litany of idle hopes I have seen in an otherwise serious newspaper:

All of this proves that our seemingly individual and private decisions to buy cheap products can generate huge social costs. But history tells us that blue-collar warehouse work doesn't have to mean poverty wages and bad working conditions. In fact, workers have often turned blue-collar jobs into decent employment by organizing for safe conditions and fair pay. As consumers, we can help these efforts by demanding that Walmart and other retailers adopt ethical labor and environmental standards, for both foreign suppliers and domestic distribution workers. Walmart certainly isn't the only company to employ poor hiring practices, but its massive global reach and sophisticated distribution innovations have made it a powerful force in the global goods movement industry.

Tuesday, July 24, 2012

American Pragmatism And The Enlightenment

Michael Roth has written a review http://www.washingtonpost.com/opinion/america-the-philosophical-by-carlin-romano/2012/07/21/gJQAMANR0W_story.html of America The Philosophical by Carlin Romano, a book which focuses on American pragmatism.

Carlin Romano has a story to tell about philosophy and about America. Romano, a critic for the Philadelphia Inquirer and the Chronicle of Higher Education, relates how philosophy long ago took the wrong path by seeking ultimate Truth, and how this quest has led academic philosophers to become increasingly detached from the concerns of just about everybody else. While philosophy pursued purity, American culture in the last century became ever messier — more heterogeneous, dynamic and difficult to categorize. Then, as the white, Protestant, elite culture broke down and diverse groups found their ways into universities and media networks, some philosophers and most of the culture abandoned the quest for Truth and focused on expanding the circles of inquiry and discussion.

What interests me most in this pragmatist take on American philosophy is how far we have come from Jefferson and the Age of Enlightenment:

The Enlightenment began then, from the belief in a rational, orderly and comprehensible universe—then proceeded, in stages, to form a rational and orderly organization of knowledge and the state… This began from the assertion that law governed both heavenly and human affairs, and that law invested the king with his power, rather than the king's power giving force to law. The conception of law as a relationship between individuals, rather than families, came to the fore, and with it the increasing focus on individual liberty as a fundamental right of man, given by "Nature and Nature's God," which, in the ideal state, would encompass as many people as possible. Thus The Enlightenment extolled the ideals of liberty, property and rationality which are still recognizable as the basis for most political philosophies even in the present era; that is, of a free individual being mostly free within the dominion of the state whose role is to provide stability to those natural laws (University of Alabama, CIS, Birmingham)

Richard Rorty, reviewer Roth’s hero, is the most recent exponent of pragmatism, which is no more than an apologia for post-modernism:

[Postmodernism] affirms that whatever we accept as truth and even the way we envision truth are dependent on the community in which we participate . . . There is no absolute truth: rather truth is relative to the community in which we participate (Stanley Grenz, A Primer on Postmodernism)

Richard Rorty insists that there is no "skyhook" which takes us out of our subjective conditions to reveal a reality existing independently of our own minds or of other human minds…There is no "God’s eye standpoint" that reveals reality in itself.  Each person interprets reality in accordance with his own subjective condition… He emphasizes the social influence upon the individual and his beliefs. Truth… is an inter-subjective agreement among the members of a community… The end of inquiry, for Rorty, is not the discovery or even the approximation of absolute truth but the formulation of beliefs that further the solidarity of the community, or "to reduce objectivity to solidarity." (Dean Geuras, SW Texas State University)

In other words, we have gone from a country that believed in the rational discovery of truth and the laws that govern the world and the societies which comprise it, to one in which there are no truths but those defined by society.  Society becomes the all-important element in life, the arbiter of all.  There are no longer truths that are self-evident, derived from universal laws and ultimately from God; truths now issue from a randomly aggregated collection of individuals. Indeed.

Rorty, however, is the most extreme of the American pragmatists – as post-modernism is the most extreme perspective on modern life.  Others in the movement, such as William James, respected much more the practical, understandable, and very recognizable elements of American thought.

Pragmatism is an American philosophy from the early 20th century. According to Pragmatism, the truth or meaning of an idea or a proposition lies in its observable practical consequences rather than anything metaphysical. It can be summarized by the phrase “whatever works, is likely true.” Because reality changes, “whatever works” will also change — thus, truth must also be changeable and no one can claim to possess any final or ultimate truth (About.com)

In his lecture, "What Pragmatism Means," William James explains pragmatism as a method to make sense of everyday experiences, facts, and data. Pragmatism is not an end in itself, but an attitude or way of thinking. There is no end point in pragmatism, since pragmatism is only a method of understanding. Truths are constantly being discarded or modified as they are attempted to be applied to practice. Pragmatism seeks to steer philosophy away from its traditional method of seeking solutions to arguments, to using philosophy as a way to gain a clearer understanding of nature. Since pragmatism is based on practical consequences, all ideas are accepted. Pragmatism utilizes metaphysics, rationalism, empiricism, and religion in order to provide better tools for clearer understanding. (Helium.com)

[James said that] beliefs are considered to be true if and only if they are useful and can be practically applied. At one point in his works, James states, “. . . the ultimate test for us of what a truth means is the conduct it dictates or inspires.” (Reading for Philosophical Inquiry, Introduction)

This is more sensible and very American.  “If it looks like a duck, quacks like a duck, and walks like a duck, it must be a duck”.  No funny business, no metaphysics or getting back to first principles or truth; just good old common sense. 

John Dewey was perhaps the most accessible American philosopher because he applied his theories to the ‘real world’, especially education:

[Dewey claimed that] since problems are constantly changing, the instruments for dealing with them must also change. Truth, evolutionary in nature, partakes of no transcendental or eternal reality and is based on experience that can be tested and shared by all who investigate. Dewey conceived of democracy as a primary ethical value, and he did much to formulate working principles for a democratic and industrial society.

In education his influence has been a leading factor in the abandonment of authoritarian methods and in the growing emphasis upon learning through experimentation and practice. In revolt against abstract learning, Dewey considered education as a tool that would enable the citizen to integrate culture and vocation effectively and usefully. Dewey actively participated in movements to forward social welfare and woman’s suffrage, protect academic freedom, and effect political reform. (Roebuckclasses.com)

Dewey reflected the thoughts of his fellow pragmatist C.S. Peirce:

Peirce regards pragmaticism (his invented word) as a method of clarifying conceptions. His basic principle is that the meaning of ideas is best discovered by putting them to an experimental test and then observing the consequences. He was especially interested in methodological procedure as evidenced in laboratory sciences. He maintained that the testing of hypotheses by laboratory experimentation will produce a definite type of experience. Hence the complete definition of any concept is the totality of the experimental occurrences implied in that concept by logical meaning (chrisne.wordpress.com)

The problem with all four – James, Dewey, Peirce, and especially Rorty – is that they miss what Jefferson and his peers understood:  there are such things as universal moral and ethical principles which have served as guides to human behavior for millennia.  While the interpretation of these principles varies by historical era, the principles themselves have not changed.  Jefferson perhaps best expressed his interpretation of these universal principles when he enunciated his thoughts on “life, liberty, and the pursuit of happiness”.  The pursuit of happiness was not some selfish, individualistic enterprise, but a higher goal – one that aimed at the happiness of the larger society of which the individual was a part.  Locke insisted that individuals had a responsibility to supersede their venal desires and look to the common weal.   In other words, Locke understood the importance of the individual operating within a moral and ethical framework – one that had not varied since the beginning of history.  In fact, the philosophers of the Enlightenment, in keeping with their rational study of facts to reveal universal laws, looked to history for insights.  They were not very happy with what they saw, of course – depredation, pillage, naked power and expansionism, and the trampling of individual rights at every turn – and the Declaration of Independence and the Bill of Rights therefore expressed ways to oppose tyranny and despotism.

Dewey’s educational approach and Peirce’s method of scientific inquiry seem harmless enough.  Who could be opposed to experimentation, trial and error, freedom from a priori judgments, and objectivity?  Yet both espouse a form of relativism which is troubling.  What schools – and communities at large – need is not more relativism but less.  As in all societies, there must be certain fundamental anchors or fulcra around which social and individual enterprise is undertaken.  To claim, as many post-modernists do, that there are no universal truths and no one consensual reality misses the point.  One does not have to believe in God, as the men of the 18th Century did, as the originator of natural and human law, to believe in truths.  Certain principles of moral and ethical behavior – fairness, honor, justice, equality, respect, and many others – do not have to be God-given to be valid.  And valid they have been, as regulators and motivators of human enterprise, throughout recorded history.

It is odd even to put the words ‘America the Philosophical’ in a sentence.  We are as far from a philosophical people as can be.  Imagine Joe the Plumber sitting down with Pete the Carpenter at a café to discuss Kant.  I am not sure that Frenchmen in their bleus de travail do so either; but Apostrophes, a shamelessly talking-heads intellectual feast of ideas, was for years one of the most popular programs on French television.  Granted, this was all before privatization, but viewers could have turned the set off if they didn’t like what was on.

This doesn’t mean that we Americans do not act according to philosophical principles.  We do – we just don’t know it.  The cultural relativists who promote diversity and the inclusion of all gender-race-ethnicity-religion points of view as equals do not think of themselves as American Pragmatists, just as people ‘doing the right thing’.  They have been encouraged by the still-influential post-modernist academic establishment, but school administrators have themselves bought into this very practical approach to an increasingly heterogeneous society.  Such relativist inclusivity is not the only way, as I have argued, but one way.

Talk about old-fashioned! Wanting to return to the 18th Century tops them all; but I am in good company.  The Romans, like Dewey, reformed education; but unlike him, they understood the basic principles of proper behavior and trained their young leaders-to-be in the precepts of Cato the Elder – justice, honor, discipline, fairness, and equality were important to the ruled and should be important to the ruling.  Sound familiar?  Western Civilization is one relatively unbroken series of applications of and improvements on the democracy of the Greeks and the administration of the Romans.  Anglo-Saxon law and the English and Scottish Enlightenments were perhaps the apogee of this trajectory.

So, it is fine to be pragmatic; but it is also important to be constantly aware of those principles which guide individual behavior within society. 

Monday, July 23, 2012

Who Needs Gatorade? Not Many Of Us

I am always amazed at the number of people sucking down power drinks at the gym – men and women who haven’t broken a sweat, let alone gone into the early stages of dehydration.  Or at the number of runners on the C&O Canal who do their workmanlike three miles with a cartridge belt of mini-water bottles strapped around their waists.  Or at the Washington young professionals taking hits of electrolyzed, rehydrating, power-boosting super-liquids before crossing K Street.  The only time most of us need to rebalance our electrolytes is when we are suffering from acute diarrhea.  The Third World’s Gatorade – Oral Rehydration Solution (ORS) – has been a lifesaver for millions of adults and children who suffer frequent bouts of severe illness.  I have been one of those sufferers many times.  No matter how careful I was in my 42 years of travelling in the world’s shitholes, I still got sick.

I particularly remember the time I ate bad eggs in Tbilisi.  In retrospect, eating sunny-side-up eggs was a very dumb thing to do.  There was only one hour of electricity a day in those times, and when it was on people used it to power up the water pump and little more.  Refrigeration was not possible.  But Georgia was Europe, after all.  Maybe it was only on the farthest cultural and geographic fringes of Western Civilization, but with all those ancient Christian churches, fancy food, and top-quality wine, it was European enough.  It was a sheer delight to forget all the food- and water-averse habits I had picked up in India, Bangladesh, and Africa, and just to eat without worrying. There was nothing like a fresh tomato or a crispy green salad with fresh peppers and onions straight from the garden in Telavi, and a good glass of Saperavi with them.

I began to feel sick on the plane to London, where I was to connect with a BA flight to Dhaka, but the salmonella in my gut were only massing for the attack, which came soon after my check-in at the Heathrow Hilton.  I squirted rice water every ten minutes, and with the volumes of fluid I was losing, I hoped that the old adage about the human body being made up of 90 percent water was true.  I had, however, forgotten to pack my sachets of Oral Rehydration Solution, indispensable in the Third World; and because of the convenience of the pre-packaged product, I had only a vague idea of the right proportions of water, salt, and sugar necessary to correct the electrolyte imbalance rather than fuck it up even more.  Still squirting by morning, I decided to make the trip to Dhaka anyway and cemented myself with Imodium – never a good idea from a medical point of view, because while it stopped the diarrhea, it prevented the system from eliminating harmful pathogens and all the waste – stripped intestines, bits and pieces of colon and bowel, the last flecks of Georgian kabob.  Nevertheless, it worked, and although I had an increasingly bloated feeling on the 12-hour trip, I was not tripping over seatmates to get to the loo.

The diarrhea abated slightly, but never completely, in my first week in Dhaka. I had no appetite and resorted to the diet my mother gave me when I had an upset stomach – soft-boiled eggs, tea, and toast.  I ate this three meals a day.  The irony of all this was that I was sick with diarrhea in Bangladesh from bacteria that, for the very first time ever, I had not gotten in that country.  My office prepared a cheap Bangladeshi hot lunch every day – three vegetables, curry, and kilos of rice – and my colleagues wondered why I wasn’t eating it.

“Is this what Americans eat?”, asked one of my colleagues, Mr. Ahmed, referring to my mother’s diet.  No, I explained, only when we are sick.  This of course struck him as patently ridiculous.  Everyone knew that eggs were a ‘cold’ food and the worst thing to eat when sick, especially with diarrhea.

“I’ll soon be eating your curry and rice”, I said even though the thought of chopped up fish heads in a greasy sauce made my bowels rumble.

The point of all this is that ORS – Gatorade, basically – is a very good and important product in the developing world, and it has saved millions of lives since its introduction decades ago.  It is not at all necessary for the 99 percent of us who do not get sick from McDonalds and are not competitive athletes.  We simply do not lose enough fluid to upset the chemical balance of our bodies, so we do not require any form of ORS.  Simple water will do.

In an article in The Atlantic http://www.theatlantic.com/health/archive/2012/07/the-controversial-science-of-sports-drinks/260124/ Lindsay Abrams summarizes recent findings on power drinks, reported in the prestigious British Medical Journal (BMJ), which concluded that there was scant evidence for the claims made by drink manufacturers and that the increased popularity of Gatorade and its clones was due to intensive and misleading advertising:

Cohen (author of the BMJ article) claims that one of the greatest accomplishments of the Gatorade Sports Science Institute, established in 1985, was to convince the public that thirst is an unreliable indicator of dehydration. There is ample evidence of ways in which the experts who propagated this information were funded or "supported" by sports drinks companies, and while this in itself isn't necessarily wrong, she argues that researchers who have conflicts of interest are not objective enough to be writing guidelines, as is the case here. There is no good evidence to support the ideas, for example, that "Without realizing, you may not be drinking enough to restore your fluid balance after working out" (Powerade), or that urine color is a reliable indicator of the body's hydration levels.

Moreover, as I have suggested above, few of us need these drinks at all:

The European Food Safety Authority upheld the claims that sports drinks hydrate better than water and help maintain performance during endurance exercise -- but added that this did not apply to the ordinary, light exerciser. Says Tim Noakes, Discovery health chair of exercise and sports science at Cape Town University, "They are never going to study a person who trains for two hours per week, who walks most of the marathon -- which form the majority of users of sports drinks," and the majority of people at whom sports drinks marketing is aimed.

Perhaps most damning – but entirely logical from a marketing point of view – is the fact that the power drink companies are pushing their products on children, who, if anything, already drink too much:

Both GSK (Glaxo Smith Kline) and Gatorade have developed school outreach programs that further the case for sports drink consumption during exercise. Though the Institute of Medicine says that, in children, "Thirst and consumption of beverages at meals are adequate to maintain hydration," studies either directly funded by or involving authors with financial ties to Gatorade make a major case for the need to promote hydration, claiming, for example, that "children are particularly likely to forget to drink unless reminded to do so."

Not only that:

And, of course, there is the suggestion that sports drink consumption among children is contributing to growing obesity levels. Their association with hydration and athletics means they're not thought of as being unhealthy in the way that other sugary drinks, like soda, are (note that Mayor Bloomberg included sports drinks in his super-size ban). Several studies highlight consumer beliefs that sports drinks are healthy, even essential, showing just how far marketers have been able to push exercise science in the support of sports drinks.

Once again this is American enterprise at its best – creating a product that no one needs, developing an advertising campaign which suggests that they do, and cashing in on the physical fitness craze which has hit much of the country.  The guy at the gym or cycling on the canal in tattered Keds and cutoffs is so yesterday.  Not only should you exercise but you should work out with the proper accoutrements – multi-colored Spandex body suit and aerodynamic helmet for riding and a bottle of cold, refreshing Gatorade to replenish your stores.

I used to joke to my young friends who always carried a bottle of water with them from meeting to meeting at the office that they were doing great damage to their kidneys by drinking so much water.  “Imagine how hard they have to work”, I said.  “They weren’t built for this kind of abuse”.

I was wrong, they said.  The water was absolutely necessary to remove the toxins that built up in the system, especially in this polluted, globally-warmed world where bits and pieces of PCBs, polymers, particulate matter, and abraded asphalt, paint, and rubber were ingested every day. 

In fact, drinking too much water is a problem:

The journal recounts that hyponatremia -- a drop in one's serum sodium levels -- has a bad track record of causing illness and death in marathon runners, and that we know that drinking too much water can cause hyponatremia.

Somehow the makers of power drinks have conflated the two issues – not enough fluid with too much fluid. Drink our power drinks, they say, and achieve the proper balance.  However:

But [the BMJ article] then makes the point that sports drinks do not preclude hyponatremia and that there was an article in The New England Journal of Medicine that found no correlation between hyponatremia and the type of fluid consumed.

The conclusion?  Most of us do not need ORS or Gatorade, but we have been convinced of their absolute necessity by savvy marketers.  Not only, they tell us, will these drinks replenish our vital bodily fluids; we will look cool drinking these emblems of fitness.  We have been had.

Sunday, July 22, 2012

Liberal Christianity–The Decline of Mainstream Churches

Ross Douthat has written an article in the New York Times http://www.nytimes.com/2012/07/15/opinion/sunday/douthat-can-liberal-christianity-be-saved.html about the decline in attendance at mainstream Protestant churches – most noticeable in the Episcopal Church, which has seen a drop of nearly 18 percent since 2001; the Lutheran and Presbyterian Churches have each declined 15 percent, and the Methodist Church 10 percent (The Christian Century). Douthat wonders why, in this most religious of countries, this has happened, and whether it is because these churches have become too secular and have moved too far from their spiritual roots for most people:

The Episcopal Church has spent the last several decades changing and then changing some more, from a sedate pillar of the WASP establishment into one of the most self-consciously progressive Christian bodies in the United States.

As a result, today the Episcopal Church looks roughly how Roman Catholicism would look if Pope Benedict XVI suddenly adopted every reform ever urged on the Vatican by liberal pundits and theologians. It still has priests and bishops, altars and stained-glass windows. But it is flexible to the point of indifference on dogma, friendly to sexual liberation in almost every form, willing to blend Christianity with other faiths, and eager to downplay theology entirely in favor of secular political causes.

This decline is the latest chapter in a story dating to the 1960s. The trends unleashed in that era — not only the sexual revolution, but also consumerism and materialism, multiculturalism and relativism — threw all of American Christianity into crisis, and ushered in decades of debate over how to keep the nation’s churches relevant and vital.

It is no coincidence, says Douthat, that the precipitous decline in attendance paralleled the sweeping changes in the Church.  Why should people go to a church when their spiritual needs are not met and when secular issues crowd out religious ones?

Selecting the Episcopal Church as the focus for examination may not be entirely fair, for no other church has become so embroiled in controversy.  Its embrace of the ordination of gay priests has alienated many congregants and enraged others.  It has caused a worldwide schism in which African churches – once the mainstay of Church growth, or at least of stability in the religious world – are threatening secession.  The Episcopal Church, therefore, has not driven people away because of its new secular, ‘progressive’ focus; it has done so because many of its faithful feel – as do the believers of other faiths – that homosexuality is proscribed by the Bible.  The ordination of women was bothersome and troubling for many, but the inclusion of gay priests was unforgivable.  That is, it might not have been the secularization of the Church which so offended, but the more doctrinal issue of the nature of the Episcopal ministry.

Yet it was not this issue alone that was behind the mass abandonment of the Episcopal Church.  As cited above, most mainstream churches, which had not so provoked their congregations, were suffering the same losses. In an attempt to address declining membership, these churches tried to become more ‘relevant’ and ‘young’, but the reforms only served to drive away more believers:

Yet instead of attracting a younger, more open-minded demographic with these changes, the Episcopal Church’s dying has proceeded apace. Last week, while the church’s House of Bishops was approving a rite to bless same-sex unions, Episcopalian church attendance figures for 2000-10 circulated in the religion blogosphere. They showed something between a decline and a collapse: In the last decade, average Sunday attendance dropped 23 percent, and not a single Episcopal diocese in the country saw churchgoing increase. 

These two phenomena – secularization of the church and declining attendance – did not affect Episcopalians alone:

Practically every denomination — Methodist, Lutheran, Presbyterian — that has tried to adapt itself to contemporary liberal values has seen an Episcopal-style plunge in church attendance. Within the Catholic Church, too, the most progressive-minded religious orders have often failed to generate the vocations necessary to sustain themselves.

Douthat’s conclusions, however, only partially explain the decline.  For one thing, people may not so much be seeking a more staid, conservative Christianity as searching for a more dynamic, spiritually uplifting, transformative experience. Evangelical Protestant churches now account for over 26 percent of total church attendance in the United States, surpassing Catholic churches for the first time ever.

Older, traditional churches are seeing a decline in numbers, but larger, modern mainstream churches, or “mega-churches,” are increasing rapidly, said Kimball.  But while traditional Christianity is on the decline, other types of religion are on the rise. Many people are turning more to spirituality, and Evangelicals, or born-again Christians, are also very common across certain parts of the United States, said Kimball (Huffington Post Online)

These churches, which put a premium on high-energy, participatory services and emphasize a personal relationship with Jesus Christ, have not only grown; they have reformed in successful ways.  They have turned not secular but modern, while keeping their profoundly religious roots:

But the newest trend in church growth is exemplified by the No. 2 ranked church's cross-country reach. Lifechurch.tv transmits pastor Craig Groeschel's worship services from the church's studio home in Edmond, Okla., to 13 locations, reaching 26,776 people in average weekend worship attendance.

"Multiple sites are the new normal for fast-growing and large churches. Lakewood is the exception. The next 10 all have multiple sites," says Ed Stetzer, director of LifeWay. "They're contemporary, aggressively evangelistic and evangelical and they're moving beyond the 'big box' megachurch model. The best churches have very intentional systems to move people from sitting in rows to sitting in circles (in small groups) to going out and making a difference in the world." (USA Today Online 9.09)

The congregations that do well, Roozen [Hartford Center for Religious Research] says, are participatory, involve lay leadership, and have a "strong, clear sense of their purpose."

These new churches are not simply places of worship, but are active and energetic family ministries.  If you go into any one of a thousand large evangelical churches in the South, you will find every room filled with day-care centers, senior citizen activities, women’s groups, bake-offs, music, and Bible study.  Far from moving off message, these churches maintain their ‘strong, clear sense of purpose’ by linking these family activities to a Christian mission – the strength of the family is paramount.  

Another reason for the decline in attendance at mainstream churches is the general increase in non-believers, a trend which continues to rise. 

The 2008 American Religious Identification Survey (ARIS) found that while 34.8 million U.S. Adults (15.2%) described themselves as "without religion", almost 90% of these answered "none" with no qualifications. Only 1.4 million positively claimed to be atheist, with another 2 million professing agnosticism (Wikipedia)

About 75% of Americans between the ages of 18 and 29 now consider themselves "spiritual but not religious." (Phillip Clayton, LA Times 5.11)

Non-believers tend to drop out of mainstream churches because these churches satisfy no real spiritual need, nor do they provide a secular environment any more informed or committed than other environmental or social action groups.

Other reasons are cited by The Christian Century:

Worshipers attend less frequently. In addition to tracking weekly attendance numbers, some churches are tracking who actually worships during a month. Many pastors sense that the same individuals are worshiping throughout the year, but that they worship less often.

Aging constituencies. Mainline churches have a disproportionate number of members age 65 and older. This proportion will only grow more pronounced as the first of the baby boomers reach 65 in 2011.

Fewer younger members. The other side of this dilemma is the failure of churches to reach younger persons. This is particularly true for the smaller churches that constitute a large part of mainline denominations.

Lack of interest in religion. Adding to the challenge of reaching younger people is the fact that the age group in which self-identified adherents of "no religion" are found most is 25-34. Additional indicators of decreasing interest in church life are found in the General Social Survey 2008: fewer people report going to church "several times a year" and more people report going "once a year." Fewer report going "less than once a year" while many more report going "never." In fact, the attendance category that has grown the most since 1990 is "never."

An interesting take on the decline has to do with the increasing indifference of women, who have always outnumbered men in church membership and participation:

Since 1991, the percentage of women attending church during a typical week has decreased by 11 percentage points to 44 percent, the Barna Group reported Monday (Aug. 1).

Sunday school and volunteering among women also has diminished. Two decades ago, half of all women read the Bible in a typical week — other than at religious events. Now 40 percent do.

The survey also found a marked stepping away from congregations: a 17 percentage-point increase in the number of women who have become “unchurched.”

“For years, many church leaders have understood that ‘as go women, so goes the American church,’” wrote Barna Group founder George Barna, on his website. “Looking at the trends over the past 20 years, and especially those related to the beliefs and behavior of women, you might conclude that things are not going well for conventional Christian churches.” (Paige Lawler, University of Oklahoma 5.1, Religion News Service)

These trends, although logical and predictable, are troubling.  The mainstream Protestant churches served another important need, not mentioned in any of the literature – they served as the institutional anchors of the community.  They were the religious counterparts of banks, schools, and Rotary.  They stood for a certain moral rectitude and social integrity.  They all emphasized family, community, and principle.  The new evangelical churches with their charismatic and Pentecostal focus on the personal relationship with Jesus Christ are less concerned with community in the traditional broad sense and more focused on communities of like-minded believers.  There is little ecumenical about them.

As importantly, evangelical churches, with their focus on emotion and personal dynamics, lack the more intellectual traditions of the older churches.  Catholicism, for all its modern twists and turns, lurching between medieval ritual and hand-shaking, still relies on the logic of Augustine and Thomas Aquinas.  Biblical exegesis is common in many Methodist churches.  These churches are, at least in principle, open to logical analysis and conclusion, not only of doctrine but of the secular issues which relate to it.  There is no such intellectual tradition or discipline in the evangelical churches – and, as mentioned above, this is their draw, their allure.  At the same time the collusion of religion and politics is far easier within their confines than in the mainstream churches – a dangerous development.

Douthat concludes:

What should be wished for, instead, is that liberal Christianity recovers a religious reason for its own existence. As the liberal Protestant scholar Gary Dorrien has pointed out, the Christianity that animated causes such as the Social Gospel and the civil rights movement was much more dogmatic than present-day liberal faith. Its leaders had a “deep grounding in Bible study, family devotions, personal prayer and worship.” They argued for progressive reform in the context of “a personal transcendent God ... the divinity of Christ, the need of personal redemption and the importance of Christian missions.”

Today, by contrast, the leaders of the Episcopal Church and similar bodies often don’t seem to be offering anything you can’t already get from a purely secular liberalism. Which suggests that perhaps they should pause, amid their frantic renovations, and consider not just what they would change about historic Christianity, but what they would defend and offer uncompromisingly to the world.

It appears that this plea will go unheard.  The tide has turned away from the mainstream churches; and while many ‘Progressives’ want to keep their traditional blend of liberal religion and social activism, and many old-line conservatives want to retain the status and position of their mainline denominations, the rest of America will turn to something safe, personal, dynamic, and profoundly spiritual.

Thursday, July 19, 2012

Winston Churchill–A True Hero

After decades of playing bad golf, a good friend, Charlie Pickford, said he had decided to take a professional lesson.  By this time he was well into his fifties and the chances of any real improvement were slim, but he had always liked the game – perhaps more for the ‘good walk’ than for the golf itself: the lush fairway carpets, the manicured greens, the bright, well-groomed sand traps, the brooks, streams, and ponds, and the warm New England sunshine.

In any case, after only a few swings, the golf pro at the Fair Meadows Country Club diagnosed why my friend had such a murderous slice.
“Start to take your swing”, he said. “Then at the top of the arc, stop.”
My friend took his stance, adjusted his shoulders, feet and legs, brought the driver high up over his head, and stopped just before he would have swung down at the ball.
“Who’s your biggest hero?”, the pro asked.
“Winston Churchill”, Charlie replied without a second’s hesitation.
“Well, there he is behind you.  Look over your right shoulder”.
 

The exercise was meant to teach an important lesson in the dynamics of the golf swing – the positioning of the body, arms, shoulders, and head at a crucial point.  From there the swing should take on a fluid, strong movement.

Charlie's Winston Churchill response was easy.  For years he had admired this brilliant man – adventurer, soldier-hero, national leader, visionary politician, and prolific and heralded writer.  Who else could have fought as a soldier in three wars, led his country to victory over the Nazis, stirred the nation and the world with soaring oratory, foreseen the closing of the Iron Curtain and the demonic spread of Communism, and produced one of the most comprehensive histories of Great Britain ever written?

His other hero was Sir Richard Francis Burton who, like Churchill, was an adventurer, writer, soldier, and a man of extraordinary courage.  Burton was a geographer, explorer, translator, soldier, orientalist, cartographer, ethnologist, spy, linguist, poet, fencer, and diplomat. He spoke 29 languages so well that, with his uncanny observation of culture, he could pass for an Arab, an Indian, or an Afghan. He successfully disguised himself as a Pashtun, and his perfect accent, demeanor, and knowledge of local customs allowed him to become only the second European to gain access to the holiest of shrines in Mecca.
Burton's best-known achievements include travelling in disguise to Mecca, an unexpurgated translation of One Thousand and One Nights, bringing the Kama Sutra to publication in English, and journeying to visit the Great Lakes of Africa in search of the source of the Nile. He was a prolific and erudite author and wrote numerous books and scholarly articles about subjects including human behavior, travel, falconry, fencing, sexual practices and ethnography. A unique feature of his books is the copious footnotes and appendices containing remarkable observations and information.
He was a captain in the army of the East India Company, serving in India (and later, briefly, in the Crimean War). Following this, he was engaged by the Royal Geographical Society to explore the east coast of Africa and led an expedition guided by the locals and was the first European to see Lake Tanganyika. In later life, he served as British consul in Fernando Po, Santos, and finally, Trieste. (Wikipedia)


Charlie was in awe of men like Churchill and Burton, and he felt that the modern age had no one like them.  Churchill especially was a remarkable man.  Who can forget the image of Churchill, Charlie reminded me – top hat, cigar, waistcoat, overcoat, and bowtie – walking through the ruins of a bombed-out London?  "I can still hear the scratchy recording of Churchill’s famous WWII speech, which did more than anything to rally the British people against an evil invader", he said.
Even though large tracts of Europe and many old and famous States have fallen or may fall into the grip of the Gestapo and all the odious apparatus of Nazi rule, we shall not flag or fail. We shall go on to the end. We shall fight in France, we shall fight on the seas and oceans, we shall fight with growing confidence and growing strength in the air, we shall defend our island, whatever the cost may be.
We shall fight on the beaches, we shall fight on the landing grounds, we shall fight in the fields and in the streets, we shall fight in the hills; we shall never surrender, and if, which I do not for a moment believe, this island or a large part of it were subjugated and starving, then our Empire beyond the seas, armed and guarded by the British Fleet, would carry on the struggle, until, in God's good time, the New World, with all its power and might, steps forth to the rescue and the liberation of the old.
Of all of the qualities of Churchill (and Burton) that Charlie admired, bravery stood out:
Bravery was a constant throughout Churchill’s long, eventful life. D’Este notes that “long before he became a statesman,” he “was first a soldier.” The young Churchill, with his miserable childhood and miserable personality, chose military service as a way to make his name and prove himself worthy — especially to his cold and distant father. As a young man, he fought in India and was almost killed.
In 1898 he fought under Kitchener at Omdurman and barely escaped death again. Then he fought in the Boer War, where he was captured and escaped. In World War I he served as first lord of the Admiralty, but after the failure of his plan to force open the Dardanelles, which led to the death of thousands of British and Allied soldiers at Gallipoli, he had himself assigned to fight alongside such men in the bloody trenches of Flanders. (Robert Kagan in the New York Times, reviewing Carlos D’Este’s biography of Churchill 11.08)
It was this bravery in defense of his country, said D’Este, which helped form the patriotism and sense of duty he displayed during his service as Prime Minister in WWII:
He was a soldier, a “warlord,” a warrior-statesman in the mold of Frederick the Great, Napoleon, Oliver Cromwell or his great ancestor the Duke of Marlborough (Kagan)
Churchill used his understanding of history (his History of the English-Speaking Peoples was begun in 1937) and his battlefield experiences to correctly predict the course of modern events.  Kagan continues:
In political exile following World War I, he warned of the rise of dictatorships in Germany, Italy, Japan, Spain and the Soviet Union, so much so that his critics, who did not want to think anymore about great confrontations, called him a warmonger. When he denounced the agreement reached at Munich in 1938, he warned that there could “never be friendship between the British democracy and the Nazi power.”
Churchill understood that Hitler could never permit an independent Britain, which would always threaten Germany’s control of the Continent, and would use peace only to gather strength for a final assault.
Churchill was turned out of office in 1945 in an election which favored the Labour Party.  Despite his heroism, the British electorate felt that Labour would do a better job of rebuilding the country and effecting needed political and economic reforms.  The glory of Empire was fading in the minds of the British, and they were turning inwards.

Nevertheless, the British were lucky to have such a great man as leader during perhaps the most crucial years of their history:
Like Lincoln, Churchill saw the importance of bolstering public morale, and he understood how deadly it was to talk of peace deals when the nation was losing. “We shall go on and we shall fight it out,” he declared. “And if at last the long story is to end, it were better it should end, not through surrender, but only when we are rolling senseless on the ground.” No one doubted him when he promised to die with pistol in hand fighting the Nazis in the streets of London.
These were the qualities that made Britons choose him over other men, and to follow him in a desperate struggle against the greatest odds. Margot Asquith, describing why people looked to him for leadership, observed that it was not his mind or judgment they respected. “It is, of course, his courage and color — his amazing mixture of industry and enterprise. . . . He never shirks, hedges or protects himself. . . . He takes huge risks. He is at his very best just now; when others are shriveled with grief — apprehensive . . . and self-conscious morally, Winston is intrepid, valorous, passionately keen and sympathetic.” He may have longed “to be in the trenches” and was “a born soldier,” but it was not as a soldier that the world needed him.
In a recent biography of Churchill, Peter Clarke focuses on his writing, and in a review of the book in the Times Literary Supplement Geoffrey Wheatcroft comments on Churchill’s extraordinary ‘prolificity and precocity’.  As Churchill’s daughter Lady Soames once remarked, “The thing you have to remember is, he was a journalist”.  Not only was he a journalist but a war correspondent, and even before he saw action as a soldier he was in the midst of battle:
In 1895, before Churchill turned twenty-one, the 4th Hussars had tolerantly given him leave to go to Cuba, where he witnessed the patriotic rebellion against Spain, and he paid for the trip with his first newspaper commission, from the Daily Graphic.
Two years later, Queen Victoria’s Diamond Jubilee in 1897 found him on leave again, in England. “On the lawns of Goodwood in lovely weather” he heard the riveting news that a “field force” was being raised under the improbably named General Sir Bindon Blood, to subdue the unruly Afghan tribesmen (a good deal of Churchill’s early life does have an eerily premonitory ring more than a hundred years later). He rushed back to India, having secured a contract from the Daily Telegraph, an adventure which provided enough copy for his first book, The Story of the Malakand Field Force.
By the time it was published in 1898, Churchill had wangled his way on to one more expeditionary force which Kitchener was leading up the Nile, switching in true freelance spirit to the Morning Post, and producing another book. The Sudan was followed by the Boer War, which he covered for the Morning Post at the age of twenty-four.

In later years he wrote a biography of his father, Lord Randolph Churchill, a history of WWI, The World Crisis, a biography of the Duke of Marlborough, and his above-mentioned History of the English-Speaking Peoples.

Churchill’s journalism and books were lucrative, and the fees and royalties financed his political career.  It is his historical writing which has received the most criticism in recent times – not objective enough, say modern historians:
“Give me the facts, Ashley, and I will twist them the way I want to suit my argument”, he had once said, doubtless meaning to shock, and he later said of The Second World War that “It is not history, it is my case”. So were all his books: his case for his father, and for Marlborough and for himself. Clarke quotes the damning verdict of Robin Prior in Churchill’s “World Crisis” as History: “The reader is never sure that the version given by Churchill is complete, or if material damaging to the case Churchill is building up has been omitted, or if any deletions made have been indicated in the text”
Yet Churchill himself was quite aware that he was writing not just history but an apologia for Great Britain.  It is not surprising that he wrote in this way.  His passionate patriotism, his belief in country and empire, and his full understanding and respect for the power, sweep, and importance of Britain since the days of the Romans could never be disguised or hidden.  His histories were as eloquent as his wartime speeches.  If they omitted facts that a modern historian would have kept in for the sake of objectivity, they were elegiac works, resoundingly reaffirming Britain’s greatness, and for that they will be remembered.
For Churchill, the past was never a foreign country, and they did not do things differently there: King Alfred, Marlborough, Chatham and Washington were his contemporaries, who surely thought and felt as he did.
This subjectivity is anathema to modern historians, who often ignorantly carp at Shakespeare for distorting the facts.  Churchill was more concerned with actual historical events than Shakespeare was, and his work is certainly more fact than drama or tragedy; yet at the same time he was as concerned as the playwright with the human strengths, weaknesses, and foibles (especially the strengths) which underlie history.  Of course he understood King Alfred, Marlborough, and the rest – they had to be like him, and they most certainly were patriotic and brave, if not heroic.
“Rarely can an author’s writings have received less attention than those of the winner of the Nobel Prize for Literature in 1953”, Clarke observes at the outset. It is idle to ask whether Churchill deserved the prize: like the outrageous praise showered on him at the time, especially in the American press, and with none more fulsome than Isaiah Berlin, it was the mood of the age. Today we can view those writings with more detachment. Churchill was a born storyteller, and an unashamed exponent of what Giovanni Giolitti called “beautiful national legends”; at one extraordinary moment in history, his faith in such legends helped save civilization.
In recent years, the Politically Correct Establishment has branded Churchill a pariah because of his long, persistent, and immutable defense of Empire.  Whether Roman, British, or Spanish, the Empire brought Western Civilization to the undeveloped parts of the world.  Churchill never flinched from this conviction, believing that the legacy of the Greeks, carried on throughout the history of Europe and later America, was worth preserving.  If the British benefited from the wealth and treasures of India, they gave India Anglo-Saxon law and administration.  They left India with the foundations for a modern society.  He was reluctant to let go of the colonial tradition because he believed that the countries demanding independence were not ready, and that Britain could still provide the investment and guidance necessary for their eventual evolution into free states.  Many critics look at Africa and say that the colonial abandonment was indeed premature, and that a more gradual, determined, and progressive turnover might have been more appropriate.

Rejecting Churchill out of hand because of this defense of empire is revisionist history at its worst.  He grew up in the Age of Empire and within a 19th-century moral and ethical world.  The absolute belief in the destiny of Britain which helped motivate the country to resist Hitler’s onslaughts and ultimately to defeat him was the same belief that lay behind his defense of empire.  His extraordinary perspicacity failed him at the moment of independence – he could not see that the colonial era’s time had indeed come – but that should not be held against him.  For most of his long life his solid, honest, and committed beliefs were important for Britain, for Europe, and for the world.  He will always be Charlie's and my hero.