Abolition, Brazil, and the Last Confederates

Picture thousands of country folk down south, gathered for fried chicken, barbecue, and beer. "I Wish I Was in Dixie" plays on infinite loop. The Confederate battle flag is everywhere, including on belt buckles and trucker caps. Many of the partiers are descended from Confederate soldiers. Not all are white, and very few speak English.

It is the 150th anniversary of the end of the American Civil War, and the unlikely locale of this unusual gathering is Santa Bárbara d’Oeste, a rural Brazilian town colonized by Southern families fleeing the (re-)United States.

"For many of the residents," according to the Associated Press’s Jenny Barchfield, "having Confederate ancestry is a point of pride that’s celebrated in high style at the annual ‘Festa dos Confederados.’"

Most of the original Confederate immigrants "were lured by newspaper ads placed in the wake of the war by the government of Brazil’s then-emperor, Dom Pedro II, promising land grants to those who would help colonize the South American country’s vast and little-explored interior."

The emperor wanted agriculturally skilled colonists, and the former Johnny Rebs wanted to escape the rule of the Yankee carpetbaggers.

It may be tempting to conclude that another attractive feature of their new home was that the institution of slavery survived in Brazil. In fact, holding human beings as property was legal there until Dom Pedro’s daughter, Dona Isabel, acting as regent in her father’s absence, signed the Lei Áurea into law on this day, May 13, 1888, formally abolishing slavery in Brazil. And since Brazil’s was the last government in either North or South America to recognize the legality of slavery, Princess Isabel’s actions marked the end of legal slavery in the Americas.

But it’s unlikely that the legal status of slavery affected the Confederados’ decision to leave North America and start a new life south of the equator. These migrants were not from the aristocracy of plantation owners and slaveholders. They were mostly the working-class farmers who had had to compete with slave labor in the antebellum economy, and the system of slavery worked to their disadvantage.

About half the Confederados found life too hard and too foreign in the land the emperor granted them, and they eventually returned home. The rest assimilated into Brazilian society. At the time, over 40 percent of Brazil’s 10 million people were of African heritage, and about a third of them were enslaved. After abolition, being multiracial eventually became the norm. Today, no country outside of Africa has a larger population of African descendants.

Barchfield writes, "Mixed-race guests at Sunday’s party seemed unruffled by the omnipresent Confederate flag."

"To me it’s a positive symbol of my heritage," said Keila Padovese Armelin, a 40-year-old mother of two who describes herself as a "racial milkshake." "For us, it doesn’t have a negative connotation at all."

Of course, African-descended Brazilians have the luxury of viewing the flag as both historical and exotic. In the United States, it is still either a symbol of ongoing resistance or of ongoing intolerance and oppression.

Everything about the racial history of Brazil is different from the US experience. And if abolition came later to the South American country, racial harmony — or at least the blurring of the racial categories — seems to have developed considerably sooner.

The process was already underway before Dom Pedro invited the defeated Southerners to his shores. Historian James McMurtry Longo writes,

In his first and in all subsequent cabinets and government appointments, Pedro II selected Brazilians for leadership positions regardless of race. Isabel … grew up seeing men of all races serving [her] father in positions of authority.

As her father’s student, daughter, and heir, Princess Isabel followed his example. Race never played a role in her social life, political relationships, alliances or disagreements. It may have been the most important lesson Isabel learned from him.

According to economist Edward Glaeser, in his book Triumph of the City, "Emperor Pedro II disliked slavery, but fear of a political backlash may have kept him from trying to emancipate the rest of the country."

So it was while he was abroad that his daughter Isabel signed her country’s emancipation proclamation. Abolition was a popular cause in Brazil, and her subjects acclaimed Isabel as "the Redemptress" (A Redentora). Pope Leo XIII conferred on her the Golden Rose for her role in eradicating slavery from its last bastion in the Americas.

But her father "had been right to fear a backlash," writes Glaeser. "In the next year, a military coup, backed by oligarchs outraged by losing their human chattels," overthrew the monarchy.

Isabel wrote, on the day after the coup d’état that deposed her family, "If abolition is the cause for this, I don’t regret it; I consider it worth losing the throne for."

The former slavers were now in charge, but abolition proved to be irreversible, and over time, Brazilians began to integrate.

Half a century later, author Stefan Zweig — Ludwig von Mises’s Viennese contemporary — saw Isabel’s lost empire as a model for the rest of the world.

The "central problem that forces itself on each generation, and more than ever on ours," he wrote in 1941, "is the answer to the simplest and still most important question, namely: what can we do to make it possible for human beings to live peacefully together, despite all the differences of race, class, colour, religion, and creed?"

In Brazil: Land of the Future, Zweig wrote,

On the basis of its own ethnological structure, Brazil — had it adopted the European mania of nationality and race — would have become the most strife-torn, most disintegrated country on earth.…

But to one’s great surprise one soon realizes that all these different races visibly distinct by their colour alone live in fullest harmony with one another. And in spite of their different backgrounds they compete only in trying to discard their original peculiarities in order to become Brazilians as quickly as possible.

Zweig approvingly called Brazil’s social strategy "the principle of a free and unsuppressed miscegenation." In the United States, we call it the melting pot.

But if Zweig is right that Brazil’s model for the rest of the world is one of tolerance and general liberality, he may still be wrong about the flight from "original peculiarities." The racially mixed descendants of Confederate soldiers don’t seem to be discarding anything. They proudly commingle a diversity of backgrounds that many in the north would perceive as irreconcilable.

Today we can celebrate the anniversary of the demise of chattel slavery in our hemisphere. We also celebrate the divergent histories of the United States and Brazil — how differently freedom was achieved, and how amicably the descendants of Africa and Europe, by way of the American South and South America, can find common cause in beer and barbecue.


This article originally appeared on FEE.org’s Anything Peaceful on May 13, 2015.

Does Michael Moore support the 2nd Amendment?

In the wake of the Baltimore riots and the latest charges of police violence against unarmed suspects, Oscar-winning filmmaker Michael Moore has called for disarming American cops, saying in his Twitter feed, “We have a 1/4 billion 2nd amendment guns in our homes 4 protection. We’ll survive til the right cops r hired.”

Is that an implicit endorsement of private individuals’ right to armed self-defense?

Probably not.

Moore, who became the darling of the gun-control movement in 2002 for the movie Bowling for Columbine, is an outspoken critic of the 2nd Amendment, saying that the Founders themselves would have excluded gun rights from the Constitution if they had known what firearms would become over the next two centuries:

If the Founding Fathers could have looked into a crystal ball and seen AK-47s and Glock semiautomatic pistols … I think they’d want to leave a little note behind and probably tell us, you know, that’s not really what we mean when we say “bear arms.”

It’s tempting, therefore, to dismiss Moore’s April 30th tweets as conscious hyperbole — perhaps confronting law-and-order types with the logic of their own support for gun ownership.

But if you look at the full set of Moore’s tweets on the subject, a consistent libertarian logic is evident:

  1. Government agents currently do more to endanger private citizens than they do to protect us.

  2. That oppression can only continue while the government holds a monopoly on armed violence.

  3. We need to shift the balance of power away from the state and back to the people.

Is that too much to read into one angry Twitter rant?

If Moore’s goal was to outrage the American public, he has certainly succeeded. Pro-police conservatives are jerking their knees at the far-left filmmaker’s provocations. But advocates of liberty can find at least a sliver of common cause with those who see the visible fist of government power in Baltimore and too many other American cities in recent months.

Many libertarians consider policing to be among the few legitimate roles of a night-watchman government; defense and security are necessary to protect the rights of individuals. But there is no question that the government’s most heavily armed agencies have grown well beyond the role of night watchmen, if that was ever really their function. And then there is the proliferation of armed agents in organizations like the Fisheries Office, NASA, the EPA, and the Department of Education.

As the sharing economy chips away at other cartels in our over-regulated economy, we need to accept that the police, too, need competition — and we have the opportunity right now to ally with many on the American left who are beginning to suspect the same thing.

When government agents hold a monopoly on the tools of violence, is it any wonder when they behave like a cartel? Privately owned firearms are part of the decentralized solution to both looting and the police violence that triggers the protests.

When individuals are free to defend themselves, their homes, their businesses, and their communities from crime and rioting, they need not rely exclusively on police forces that may be ineffective or corrupt. (The famous defense of Koreatown by armed shop owners during the LA riots shows this principle at work.)

If you don’t recognize the right to armed self-defense in principle, you are either dogmatically opposed to private guns, or you think the question is pragmatic and that there is a calculus of trade-offs: which is more dangerous at the moment, armed citizens or a police monopoly?

There isn’t much to say to dogmatists on the matter. But the question of practical trade-offs may resonate with those on the left who currently see the police less as protectors and more as a danger.

Would such an alliance evaporate as soon as our allies perceive themselves to be in power again? Probably. Moore doesn’t see the problem as permanent: “We’ll survive til the right cops r hired.”

But we have the opportunity right now to drive home the point that the government needs more than checks and balances within itself. The people must have the ability to defend themselves independently of the state, and that’s harder to do when the government has all the guns.


This article originally ran on FEE.org’s Anything Peaceful.

when evil institutions do good things: the FCC’s PTAR law

In my Freeman article "TV’s Third Golden Age," the summary subtitle that the magazine chose was "Programming quality is inversely proportional to regulatory meddling." I couldn’t have said it better. But does that mean that everything the FCC does makes television worse?

All laws and regulations have unforeseen consequences. That usually means unintended damage, but there’s no law of history that says every unplanned outcome is pernicious.

If you’re an advocate of a free society — one in which all arrangements are voluntary and there is the least coercive interference from governments or other thugs — history will present you with an unending series of conundrums. Whom do you side with in the Protestant Reformation, for example? The Catholic Church banned books and tortured scholars, and their official structure is one of hierarchy and authority. Easy enemy, right? Clear-cut bad guy. But the Church had kept the State in check for centuries — and vice versa, permitting seeds of freedom to root and flourish in the gaps between power centers. Whereas the Protestant states tended to be more authoritarian than the Catholic ones, with Luther and Calvin (not to mention the Anglicans) advocating orthodoxy through force. There’s a reason all those Northern princes embraced the Reformation: they wanted a cozier partnership of church and state.

This is certainly not the history I was taught in my Protestant private schools.

Similarly, most of us were schooled to side with the Union in the Civil War, to see Lincoln as a savior and the Confederacy as pure evil. But as much as the war may have resulted, however accidentally, in emancipating slaves, it also obliterated civil liberties, centralized power, strengthened central banking and fiat currencies and — to borrow from Jeffrey Rogers Hummel’s great book title — enslaved free men.

"Father Abraham," as the pietists called him after his assassination, was a tyrant whose primary goal was always what he actually achieved: central power over an involuntary union. Recasting this guy as an abolitionist hero is one of the many perverse legacies of America’s official history. But it’s a mistake to simply reverse the Establishment’s verdict and claim that the Confederacy was heroic. Plenty of Johnny Rebs were fighting a righteous battle against what they rightly deemed to be foreign invaders, but even if you ignore the little problem of the South’s "peculiar institution," the Confederate government was no more liberal than its Northern rival. "While the Civil War saw the triumph in the North of Republican neo-mercantilism,” writes Hummel, “it saw the emergence in the South of full-blown State socialism.”

Reading history without taking sides may fit some scholarly ideal (actually, it seems to be a journalistic ideal created by the Progressive Movement to pass off its views as the only unbiased ones), but it is not a realistic option. We cannot do value-free history. If we try, we instead hide or repress our biases, which makes them a greater threat to intellectual integrity.

Neither can we say, "a plague on both their houses," and retreat to the realm of pure theory, libertarian or otherwise. We have to live in the real world, and even if we are not activists or revolutionaries, the same intellectual integrity that must reject "neutrality" also requires that we occasionally explore the question of second-best or least-evil options.

I remember several years ago, when my very libertarian boss surprised me by speaking in favor of increased regulation of banking. His point was that the banks were not free-market institutions; they were government-created cartels enjoying a political privilege that protected them from the consequences of the market while they surreptitiously depleted our property and spoiled the price system that drives all progress in the material world. Ideally, he’d want the government out of banking altogether, but in the meantime having them do less damage was better than letting them do more.

It may seem anticlimactic to follow the Reformation, Civil War, and fractional-reserve banking with a little-known FCC rule about TV programming from almost half a century ago, but I’ve been reading television history for a while now (1, 2, 3, 4) as illustrative of larger patterns in political history.

The Prime Time Access Rule (PTAR) was an FCC regulation instituted in 1970 to limit the amount of network programming allowed during TV’s most-watched evening hours.

According to industry analyst Les Brown, the PTAR was adopted

to break the network monopoly over prime time, to open a new market for independent producers who complained of being at the mercy of three customers, to stimulate the creation of new program forms, and to give the stations the opportunity to do their most significant local programming in the choicest viewing hours. (Les Brown’s Encyclopedia of Television)

If you still accept the official myth that the airwaves are "That most public of possessions given into the trust of the networks," as Harlan Ellison describes them in The Glass Teat, and that the federal government’s job is to manage the radio spectrum in the best interests of that public, then I’m sure you don’t see any problem with PTAR. (You can read my paper "Radio Free Rothbard" [HTML, PDF] for a debunking of this official piety.)

But a libertarian could easily jerk his or her knee in the opposite direction. How dare the central government tell private station owners what they can and can’t air on their own stations, right?

The problem with such an ahistorical take on the issue is that broadcast television was a creature of the state from the beginning. Radio may have had a nascent free-market stage in its development, but television was a state-managed cartel from the word go.

So am I saying that PTAR was a good thing? Is it like the possibly beneficial banking regulations imposed on a cartelized banking system? Should we view CBS versus FCC as the same sort of balance-of-power game that Church and State played before the early modern period of European history?

Maybe, but that’s not why I find PTAR an interesting case for the liberty-minded historian. As is so often the case with laws and regulations, PTAR’s main legacy is in its unintended consequences.

"Despite the best of intentions," writes historian Gary Edgerton in The Columbia History of American Television, "the PTAR failed in almost every respect when it was implemented in the fall of 1971."

[P]ractically no local productions or any programming innovations whatsoever were inspired by the PTAR. In addition, any increase in independently produced programming was mainly restricted to the reworking of previously canceled network series, such as Edward Gaylord’s Hee Haw and Lawrence Welk’s The Lawrence Welk Show.… Rather than locally produced programming, these kinds of first-run syndicated shows dominated the 7 to 8 P.M. time slot.

This renaissance of recently purged rural programming was certainly not the FCC’s goal, but the creation of the first-run-syndication model is one of the great unsung events in media history.

A quick note on terminology: to the extent that I knew the word "syndication" at all when I was growing up, I took it to be a fancy way of saying "reruns." For example, Paramount, the studio that bought the rights to Star Trek after the series was cancelled, sold the right to rerun the program directly to individual TV stations. When a local TV station buys a program directly from the studio instead of through the network system, that’s called syndication. But syndication isn’t limited to reruns. Studios created first-run TV programs for direct sale to local stations as far back as the 1950s, but they were the exception. The dominant syndication model was and is reruns. But two events created a surge of first-run syndication: (1) PTAR, and (2) the rural purge I obliquely alluded to above.

I write about the rural purge here, but I’ll summarize: as the 1960s turned into the 1970s, television network executives did an about-face on their entire approach to programming. In the 1960s, each network tried to win the largest possible viewership by avoiding controversy and appealing to the lowest common denominator in public tastes. This meant ignoring the rift between races, between generations, and between urban and rural sensibilities — what we now call red-state and blue-state values — in the ongoing culture wars. This approach was dubbed LOP (Least Objectionable Program) theory.

Basically, this theory posits that viewers watch TV no matter what, usually choosing the least objectionable show available to them. Furthermore, it assumes a limited number of programming choices for audiences to pick from and implies that networks, advertising agencies, and sponsors care little about quality when producing and distributing shows. (Gary Edgerton, The Columbia History of American Television)

By the end of the decade, however, NBC vice president Paul Klein (who had christened LOP theory just as its tenure was coming to an end) convinced advertisers that they should stop caring so much about total viewership and focus instead on demographics, specifically the Baby Boomers — young, politically radicalized, and increasingly urban TV viewers — who were most likely to spend the most money on the most products. CBS was winning the battle for ratings, but Klein pointed out that their audience was made up of old folks and hicks, whereas NBC was capturing the viewership of the up-and-comers.

Klein may have worked for NBC, but it was CBS that took his message to heart, quite dramatically. Beginning in 1970, the network rocked the TV world by cancelling its most reliably popular shows: Petticoat Junction, Green Acres, The Beverly Hillbillies, Mayberry RFD, Hee Haw, Lassie, and The Lawrence Welk Show.

In Television’s Second Gold Age, communications professor Robert J. Thompson writes,

CBS, in an effort to appeal to a younger audience made socially conscious by the turbulent 1960s, had dumped its hit rural comedies in the first years of the 1970s while their aging audiences were still placing them in Nielsen’s top twenty-five. Critics, who for the most part had loathed the likes of Petticoat Junction and Gomer Pyle, loved some of what replaced them.

I loved what replaced them, too: Mary Tyler Moore, All in the Family, M*A*S*H, and the like. "Several members of Congress," Wikipedia informs us, "expressed displeasure at some of the replacement shows, many of which … were not particularly family-friendly." But that was the point: the networks were no longer aiming to please the whole family: just the most reliable consumers.

But despite capitalism’s cartoonish reputation for catering only to the bloated hump of the bell curve, that’s not how the market really works. It is how a cartel works, and the broadcast networks behaved accordingly, both before and after the rural purge. In the 1950s and ’60s, they aimed for the largest possible viewership and to hell with minorities of any sort. The demographic revolution changed the target, but not the tactic: aim for the big soft mass. That’s certainly how the big players would behave in a free market, too, but the telltale sign of freedom in the economy is that the big players aren’t the only players. Fortunes are made in niche markets, too, so long as there aren’t barriers to entering those niches. As I’ve said, TV is descended from radio, and Hoover and his corporatist cronies had arranged it so that there could only be a few big players.

That’s where we come back to the FCC’s Prime Time Access Rule of 1970. PTAR created a hole at the fringe of the prime-time schedule, just as the rural purge was creating a hole in the market. All those fans of Hee Haw and Lawrence Welk didn’t just go away, and they didn’t stop spending their money on advertised products, either. Before PTAR, the multitude of fans of "rural" programming would have had to settle for mid-afternoon reruns of their favorite shows (the way Star Trek fans haunted its late-night reruns around this same time). But the rural fans didn’t have to settle for reruns, and they didn’t have to settle for mid-afternoons or late nights. They could watch new episodes of Hee Haw or Lawrence Welk at 7 PM. In fact, those two shows continued to produce new episodes, and the local stations, which were no longer allowed to buy from the networks for the early evening hours, bought first-run syndicated shows instead. The Lawrence Welk Show, which had started in the early 1950s, continued for another decade, until Welk retired in the early ’80s. And the repeats continue to run on PBS today. Hee Haw, believe it or not, continued to produce original shows for syndication until 1992.

I loved Mary Tyler Moore, and I didn’t care so much for Lawrence Welk, but what I really love is peaceful diversity, which cannot exist in a winner-takes-all competition. The rise of first-run syndication was a profound crack in the winner-takes-all edifice of network programming.

The strategy CBS, NBC, and ABC had gravitated toward for short-term success — namely, targeting specific demographics with their programming — also sowed the seeds of change where the TV industry as a whole would eventually move well beyond its mass market model. Over the next decade, a whole host of technological, industrial, and programming innovations would usher in an era predicated on an entirely new niche-market philosophy that essentially turned the vast majority of broadcasters into narrowcasters. (Gary Edgerton, The Columbia History of American Television)

This idea of "narrowcasting" is the basis of quality in entertainment (and freedom in political economy, but that’s another story).

I’m not out to sing the praises of the FCC for increasing economic competition and cultural diversity — these consequences were entirely unintended — but we do have to recognize PTAR as a pebble in Goliath’s sandal, distracting him for a moment from David’s sling.

historical irony: the Economist magazine prefers Great Britain to Little England

I blogged the other day about the double meaning of the term "Little Englander" and how its two meanings are really at odds with each other:

See Wikipedia and Wiktionary for example, where the primary definition is anti-imperialist, followed by the "colloquial" usage that means xenophobic. ("An Idiot’s Guide to Little Englanders")

One recent article from the Economist seems to use the term in both ways simultaneously ("Great Britain or Little England?").

Because the magazine does not give the author’s name, I assume the piece is meant to represent the editorial position of the Economist itself, opposing drastic budget cuts while recognizing a general need for the British state to shrink and the market to grow. Who, then, are the Little Englanders according to the Economist? Eurosceptics and anti-immigrationists.

"Britain is on the way to becoming more solvent but also more insular," the Economist frets. "The trick for Britain in the future will be to combine a smaller, more efficient state with a more open attitude to the rest of the world."

Apparently, a "more open attitude" would take the form not of voluntary exchange between free individuals across international borders but rather of precisely the sort of foreign entanglement that George Washington warned against.

One great irony is that the Economist is itself a descendant of the original Little Englanders. The magazine traces its lineage back to the Anti–Corn Law League, the early free-trade manifestation of the Manchester School.

The classical-liberal Manchester School is remembered most for its opposition to protectionism, which was rightly perceived in the 19th century as a way to tax the poor to benefit the landed aristocracy. The Economist has not remained a liberal publication in this historically libertarian sense, but it has generally honored its free-trade roots. Has it lost track of the other side of the Manchester coin — opposition to war, imperialism, and foreign entanglements?

An Idiot’s Guide to Little Englanders

I keep learning about movies and TV shows long after they’re past current — when the Netflix app on my iPad suddenly puts them in front of me. So I’ve just watched the first episode of An Idiot Abroad, the latest attempt by Ricky Gervais and Stephen Merchant, the UK creators of the BBC’s The Office, to find humor in humiliating and ridiculing their friend Karl Pilkington — this time by sending him around the world to "experience" other cultures.

Merchant is clearly the better-educated half of the duo. "I’ve been to many exotic places," he says in the show’s opening. "I genuinely believe that travel broadens the mind."

Whether or not he’s sincere in that conviction, Gervais’s candor better represents the feeling of the series: "I want him to hate every minute of it."

Why? "Nothing is funnier than Karl in a corner, being poked by a stick," Gervais explains, adding, "I am that stick."

So why is Pilkington their victim of choice for this ongoing series of orchestrated culture shocks?

"He is a round, empty-headed, chimp-like manque moron, buffoon idiot. And he’s a friend," Gervais says.

But Stephen Merchant’s less blunt explanation is what caught me off guard and sent me to Google and Wikipedia to research current British terminology:

"He is a typical Little Englander and he doesn’t like going out of his comfort zone."

I could judge from context what he meant, but I had never heard the term Little Englander used that way. If you’re an American, the chances are you’ve never heard it used at all. I knew it from the history of classical liberalism, where the British war party of the 19th century used it as a smear against the anti-imperialists of the Manchester School. The British hawks called the anti-interventionist opponents of the British Empire "Little Englanders" to distinguish them, I assume, from the true patriots of Great Britain.

It wasn’t Britain the Little Englanders opposed, of course; it was empire.

The 20th-century equivalent smear, used both in the United Kingdom and the United States, is "isolationist" — implying that the opponents of an expansive interventionist foreign policy are trying to shut out the rest of the world, bury our heads in the sand, and attempt to wish away the impositions of an ever more global culture. In other words, we are narrow-minded, myopic, and reflexively against everything foreign. By implication, it is the interventionists who are cosmopolitan and internationalist.

Here is Gregory Bresiger’s description of the Manchester School, from his JLS article "Laissez Faire and Little Englanderism":

The Manchester School [was] a radical group of parliamentary members in Victorian England. They were also known as the Little Englanders, or the Peace Men. Generally, they weren’t pacifists, but they proclaimed themselves as followers of Adam Smith, who saw peace, a reduction in government expenditures, and free trade as vital characteristics of prosperous, free societies. They fought the same battles as Taft and those consistent friends of liberty who today call for the dismantling of the American imperial state both at home and abroad.

Manchesterism, like libertarianism today, was a philosophy ridiculed by nationalists and jingoists in Victorian England, who called it hopelessly utopian and isolationist.

Is that what Stephen Merchant is accusing Karl Pilkington of? No, of course not. Merchant means that Pilkington doesn’t like Chinese food and thinks that foreign cultures have taken normal things from the English and made them weirder.

The result is a perverse travel show that is both very funny and oddly informative. Merchant’s use of the "Little Englander" epithet is a tiny, throwaway line, not at all the emphasis of the show — although it does get repeated in every episode of the first season, since it’s part of the opening.

So why do I find it significant? Isn’t this just another example of how language changes over time with shifts in political and historical context?

(I argue against this general line of thought — using a different example — in my most recent Freeman article, "Check Your History.")

If "Little Englander" were just a case of shifts in meaning, we should expect the more political and historically minded definition to have passed out of current usage, replaced by the insulting cultural definition.

But a quick Internet search suggests that while both meanings are current, the political meaning is still primary. See Wikipedia and Wiktionary for example, where the primary definition is anti-imperialist, followed by the "colloquial" usage that means xenophobic.

So why do the British still conflate opposition to empire with opposition to foreigners?

Is it the same reason Americans insist on the same conflation when talking about "isolationism"?

My latest Freeman article: “Check Your History”


Feature

Check Your History

MARCH 11, 2014

Those who use the word “privilege” as a bludgeon don’t understand the word’s history any better than they do the complexity of power dynamics. [FULL ARTICLE]

Yes, We Have No Bananas

In a recent post ("Is mediocrity intelligent?"), I talked about the importance of a diversity of strategies — even apparently "wrong" ones — to the long-term survival of a species. The corollary of course is that overinvestment in any single strategy can be catastrophic.

We see this issue at play in modern agribusiness.

As Popular Science informs us,

The 1923 musical hit "Yes! We Have No Bananas" is said to have been written after songwriters Frank Silver and Irving Cohn were denied in an attempt to purchase their favorite fruit by a syntactically colorful, out-of-stock neighborhood grocer.

It seems that an early infestation of Panama disease was already causing shortages in 1923. But the out-of-stock bananas in question were not the Cavendish variety we all eat today; they were Gros Michel ("Big Mike") bananas, and they were all that American banana lovers ate until the 1950s, when the disease finally finished them off.

I would love to know what a Gros Michel banana tastes like. I’m a big fan of bananas and eat them every day. (Actually, I drink them, blended into smoothies.) But the reason I only know the taste of Cavendish — and the reason you do too, unless you’re old enough to have had some Gros Michel mixed into your pablum — is that Cavendish bananas are resistant to the strain of disease that wiped out our original bananas. We have to assume that the Plan B bananas we now enjoy are only second best as far as flavor goes. They may not even be first best at survival, because the banana industry is searching for a Plan C banana to take the place of the Cavendish once the inevitable crop disease sends it the way of the Gros Michel — something that they predict will happen in the next decade or two. (See Banana: The Fate of the Fruit That Changed the World by Dan Koeppel.)

Why are bananas so vulnerable to these blights? Why aren’t agricultural scientists worried about our other favorite fruits — apples, for example?

Because there are many different types of apples. I’m dizzied by the variety at our local produce warehouse.

But not only is there just the one type of banana at the greengrocers and in supermarkets; each banana you’ve ever eaten is probably a clone of every other banana you’ve eaten. One genetic pattern manifested billions of times over, across millions of households in the past half century. And each Gros Michel was a clone of every other one, too. That’s because bananas reproduce asexually (as do potatoes, another food that’s especially vulnerable to disease — remember the Irish potato famine?).

Cavendish DNA is different enough from Gros Michel DNA that the disease that targeted the one variety was no threat to the other. But any infection that can kill one Cavendish plant can wipe out the worldwide supply.

There are many reasons food activists attack Big Agribusiness — some good, some bad, and some wacky. One criticism that seems eminently reasonable to me is a concern that Big Agra puts all its billions of eggs in one giant basket.

Once upon a time, genetic diversity in farm products was built into how farming took place. Farmers farmed local land with local genetic strains of plants and animals. Chickens may have come from Asia, and Europe never saw a tomato until the Spanish brought some back from the New World, but even as trade began to go global several centuries ago, the limits of transportation and technology meant that gene pools could be local and diverse in a way that is much harder in our era of global overnight shipping and transnational corporate bureaucracies.

If an infestation wipes out the Golden Delicious, I can eat Fuji apples instead. But if the Cavendish disappears tomorrow, there isn’t yet a different banana to take its place.


Do you remember in my earlier post when my professor presented to the "artificial life" department at Bell Labs? In the context of a communications-research lab, artificial life was about using the lessons of biology, ecology, and evolution to make telephone networks more robust.

You may think that agriculture is more "natural" than phone switches and fiberoptics, but farming often short-circuits nature’s mechanisms to suit our short-term goals. One of the main such strategies of nature is diversity. And as I tried to illustrate with the concept of the genetic deme and the relativity of fitness, diversity means that what looks like an inferior strategy today could turn out to be the salvation of the species tomorrow.

As Larry Reed wrote recently in the Freeman,

Statists — those who prefer force-based political action over spontaneous, peaceful, and voluntary initiatives — excel at distilling their views into slogans. ("A Slogan Worth Your Bumper?")

But what I find revealing is the contradictions at play in the juxtaposition of different bumper stickers on the same car. (And when you see a whole bunch of bumper stickers on the same car, odds are you’re driving behind a left-wing statist.)


This past weekend, at a red light, I was behind a minivan that brandished three bumper stickers:

One said, "Women for Obama."

If that wasn’t enough to declare the driver’s politics, the next bumper sticker made the claim that strong public schools create strong communities.

The last bumper sticker advised us in rainbow colors to "Celebrate Diversity!"

(Pop quiz: Are bumper stickers #2 and #3 in accord or at odds?)

Now, it’s a standard complaint against leftists that they talk diversity while pushing ideological conformity. Political correctness, and all that.

But to me the greater irony is that the Left consistently pushes centralization. Eat local, buy local, but decide everything in Washington DC.

I know that there are left-wing decentralists, and perhaps they genuinely do see the important parallels between genetic diversity and political federalism, between local communities and local authority. But I keep thinking of a story Tom Woods tells of attending a decentralist conference back in the 1990s, where he happily discovered like-minded activists from both Left and Right. Then, to the apparent delight of the left-wing so-called decentralists, the highlight of the event turned out to be the keynote speaker: Vice President Al Gore.

No, in my experience, the vast majority of people with Buy Local bumper stickers, as with the Celebrate Diversity crowd, are often also Women for Obama — that is to say, champions of ever-more-centralized authority. I’m confident that the driver in front of me at the intersection saw no irony in celebrating diversity while advocating strong public schools — and an even stronger central government.

But in the biosphere, where diversity rules, order is spontaneous. That spontaneous order is both the cause and the result of overwhelming diversity. There are no central strategies in evolution, only in the human world, and only in recent human history. Evolution gave the natural world hundreds of varieties of banana. The United Fruit Company (hardly a free-market firm, by the way) gave us only one.

[Cross-posted at LibertarianStandard.com]

Batman vs James Bond

I just read over my old blog post on the economics of Batman and James Bond to refresh my memory. My wife and I have recently caught up on the Daniel Craig trilogy of 007 movies, and my seven-and-a-half-year-old son Benjamin and I have been watching a bunch of the more recent animated superhero shows from the DC universe, so my thoughts have been full of action heroes — particularly the Dark Knight and Her Majesty’s secret servant — for the past few weeks.

I remember my father complaining about both characters and contrasting them to the lone-hero tradition of hardboiled detectives and their fictional forebears, the cowboys.

In fact, my father’s point to my preteen self was a continuation of a point he made to me when I was about Benjamin’s age. I’d just gotten a set of “Undercover Agent” accessories for my GI Joe doll (we didn’t call them action figures back then). Gone were the camouflage fatigues and assault rifle; now Joe sported a dark trench coat and a walkie-talkie.

I said, "Look dad: It’s GI Private Eye!"

My father then explained to me that my rhyming name for my new hero was self-contradictory. A GI was an American soldier, an official agent of the US government, whereas a "private eye" was a private individual, a lone hero in the fictional tradition. If dad had been more of a libertarian, he would have said that the military agent is paid by coercively extracted taxes and operates by state privilege, whereas the private detective is an agent of the market, authorized only by private contracts, and liable to the same restrictions as any individual citizen. My father doesn’t talk that way, even now, but he would acknowledge that description as making the same point.

So after GI Private Eye, I grew up with an awareness of the distinction between heroes like James Bond, who was funded and sanctioned by the government, and heroes like Philip Marlowe, who was funded by private clients and sanctioned only by his personal code of conduct. (And such detective stories often turned on the question of what limits that typically unspoken code imposed on the hero.)

Now, a few years later, my father was making a different but related point about James Bond, this time inspired by my love of another toy: my Corgi Aston Martin DB5, James Bond’s super spy car from the movie Goldfinger. "Look dad, isn’t this car cool?"

Ever philosophical, my father saw the car as symbolic, not only of that state-agent/private-individual divide he’d addressed a few years earlier with my GI Joe, but also of a divide in heroic literature. James Bond worked for the queen, he explained, in Her Majesty’s Secret Service. He was a knight for the monarch, and this tricked out vehicle from MI6’s Q Branch was the 1960s adventure-fantasy equivalent of the nobleman’s armor and mount.

I believe he felt the same about the Batmobile, but there are several important distinctions, some that put the historical emphasis on the “knight” in the Dark Knight, and some that put the “World’s Greatest Detective” more in league with the private eyes of American detective fiction.

For one thing, the medieval knight was a soldier for the king because he could afford to pay for armor, weapons, and a battle horse. He could afford to head off into battle instead of plowing the fields — and he could afford the time required for training between wars. The king didn’t pay him to be a knight. He paid the king for that honor. As far as we can tell, James Bond isn’t paying out of pocket for all those vodka martinis, and he certainly didn’t commission Q Branch for any of his gadgets. 007’s license to kill makes him a hired gun, even if he does restrict his paid murders to those sanctioned by his government.

Batman, on the other hand, pays his own way.

Like that of most medieval knights, his wealth originally came from privilege more than trade. The Waynes are old money. Even "stately Wayne Manor" suggests aristocracy, and where Superman’s Metropolis is shiningly new and forward looking, gothic Gotham is old, with deep roots in Europe. Frames of Batman on the rooftops harken back to Quasimodo atop Notre Dame.

But while WayneCorp may well have risen on government contracts, Batman is not on the payroll. Bruce Wayne is spending his own money to fund his war on crime. This may put him in the ranks of the feudal warriors, but it sets him apart from agent 007.

Finally, who are the bad guys?

For Bond, they are the enemies of the state — meaning that they are whoever Her Majesty says they are. In both the books and films, they are invariably evil, so James Bond will look like the good guy when he finally defeats them, but ultimately the double-O agents are weapons: the government aims them at its enemies and pulls the trigger. We know full well from history who ends up in the cross hairs.

Even my favorite fictional private eyes, however independent and heroic they may prove to be, don’t go looking for trouble until a client hires them to do so.

But for Batman, the enemy is crime — not mere violators of legislation and statute law, not people who manufacture without regulation, trade without license, or copy digital patterns in violation of copyright. A true comic-book fanboy could probably dig through back issues and show us the exception, but I can’t recall Batman ever even picking on drug users.

For Batman, as for libertarians, a crime isn’t a crime without a victim. And it is the victims Batman is fighting for; they are proxies for the parents he was too young and scared to rescue from the back-alley gunman. In the versions of the backstory that I prefer, Batman can never avenge his parents’ deaths, so even the target of his vengeance is a proxy: not a human criminal but crime itself. And by "crime," I mean rights violations, violence against person and property.

The Dark Knight may be on a perpetual quest, but it is not for a king; it is for the people.

[Cross-posted at LibertarianStandard.com]

Postscript

My vision of Batman comes from Frank Miller’s masterpiece of 1986, The Dark Knight Returns.

Movies, TV, and comics have all kept the Dark Knight dark ever since.

As left-leaning Grant Morrison writes in Supergods, “Frank Miller brought the Dark Age style into line with a newly confident right-leaning America. His monumental Batman was no bleeding-heart liberal but a rugged libertarian.”

I introduced my father to Frank Miller’s work the very day I discovered it. He was reading issue #1 next to me as I read issue #2. I believe he loved it almost as much as I did. If he had been putting Batman in the same camp as James Bond back in the 1970s, he could only have had TV’s Adam West in mind. The campy television Batman of the 1960s may as well have been a costumed cop.

Here’s Grant Morrison again:

In the fifty years since his creation, Batman had become a friend of law and order, but Miller restored his outlaw status to thrilling effect. A Batman wanted by crooks and cops alike made for a much more interesting protagonist…

still roaming the plains of Poland

Paul Cantor, The Invisible Hand in Popular Culture

I pointed Professor Cantor to my Freeman article yesterday.

Here’s his wonderful reply:

This is a terrific article and thanks for sending it to me (and mentioning me in it). I’m glad to see that Thompson seems to be on board with us on these issues. I own his book but haven’t read it yet. It’s nearing the top of my "to read" pile, and you’ve pushed it up a few places. It’s good that we’re not alone on these issues.

As I recall what you wrote about radio, all this could have happened back in the 1920s if a subscriber model had been adopted for radio instead of the broadcasting model. Essentially, we’re finally getting where we should have been in the first place — real consumers for TV. I notice that young people now have no interest in seeing TV as broadcasted. They want direct access and know how to get it. When I was at Hans-Hermann Hoppe’s recent conference in Turkey, I was amazed at how current the young people from central and eastern Europe were with American TV — maybe one episode behind on BREAKING BAD. When I asked: "Is BREAKING BAD broadcast in your country?" they stared at me as if I were saying: "Do dinosaurs still roam the plains of Poland?" They were getting the show — well, frankly, I don’t know how they were getting the show, but it was definitely online and quite possibly illegal.

Paul

The Freeman: “TV’s Third Golden Age”


Feature

TV’s Third Golden Age

Programming quality is inversely proportional to regulatory meddling

OCTOBER 09, 2013 by B.K. MARCUS

Television might be entering a new golden age — this one made possible by regulatory changes and market forces that put power in the hands of viewers, rather than cartelized networks.