when evil institutions do good things: the FCC’s PTAR law

In my Freeman article "TV’s Third Golden Age," the summary subtitle that the magazine chose was "Programming quality is inversely proportional to regulatory meddling." I couldn’t have said it better. But does that mean that everything the FCC does makes television worse?

All laws and regulations have unforeseen consequences. That usually means unintended damage, but there’s no law of history that says every unplanned outcome is pernicious.

If you’re an advocate of a free society — one in which all arrangements are voluntary and coercive interference from governments or other thugs is kept to a minimum — history will present you with an unending series of conundrums. Whom do you side with in the Protestant Reformation, for example? The Catholic Church banned books and tortured scholars, and its official structure is one of hierarchy and authority. Easy enemy, right? Clear-cut bad guy. But the Church had kept the State in check for centuries — and vice versa, permitting seeds of freedom to root and flourish in the gaps between power centers. The Protestant states, meanwhile, tended to be more authoritarian than the Catholic ones, with Luther and Calvin (not to mention the Anglicans) advocating orthodoxy through force. There’s a reason all those Northern princes embraced the Reformation: they wanted a cozier partnership of church and state.

This is certainly not the history I was taught in my Protestant private schools.

Similarly, most of us were schooled to side with the Union in the Civil War, to see Lincoln as a savior and the Confederacy as pure evil. But as much as the war may have resulted, however accidentally, in emancipating slaves, it also obliterated civil liberties, centralized power, strengthened central banking and fiat currencies and — to borrow from Jeffrey Rogers Hummel’s great book title — enslaved free men.

"Father Abraham," as the pietists called him after his assassination, was a tyrant whose primary goal was always what he actually achieved: central power over an involuntary union. Recasting this guy as an abolitionist hero is one of the many perverse legacies of America’s official history. But it’s a mistake to simply reverse the Establishment’s verdict and claim that the Confederacy was heroic. Plenty of Johnny Rebs were fighting a righteous battle against what they rightly deemed to be foreign invaders, but even if you ignore the little problem of the South’s "peculiar institution," the Confederate government was no more liberal than its Northern rival. "While the Civil War saw the triumph in the North of Republican neo-mercantilism,” writes Hummel, “it saw the emergence in the South of full-blown State socialism.”

Reading history without taking sides may fit some scholarly ideal (actually, it seems to be a journalistic ideal created by the Progressive Movement to disguise its views as the only unbiased ones), but it is not a realistic option. We cannot do value-free history. If we try, we instead hide or repress our biases, which makes them a greater threat to intellectual integrity.

Neither can we say, "a plague on both their houses," and retreat to the realm of pure theory, libertarian or otherwise. We have to live in the real world, and even if we are not activists or revolutionaries, the same intellectual integrity that must reject "neutrality" also requires that we occasionally explore the question of second-best or least-evil options.

I remember when, several years ago, my very libertarian boss surprised me by speaking in favor of increased regulation of banking. His point was that the banks were not free-market institutions; they were government-created cartels enjoying a political privilege that protected them from the consequences of the market while they surreptitiously depleted our property and spoiled the price system that drives all progress in the material world. Ideally, he’d want the government out of banking altogether, but in the meantime having the banks do less damage was better than letting them do more.

It may seem anticlimactic to follow the Reformation, Civil War, and fractional-reserve banking with a little-known FCC rule about TV programming from almost half a century ago, but I’ve been reading television history for a while now (1, 2, 3, 4) as illustrative of larger patterns in political history.

The Prime Time Access Rule (PTAR) was an FCC regulation adopted in 1970 to limit the amount of network programming allowed during TV’s most-watched evening hours.

According to industry analyst Les Brown, the PTAR was adopted

to break the network monopoly over prime time, to open a new market for independent producers who complained of being at the mercy of three customers, to stimulate the creation of new program forms, and to give the stations the opportunity to do their most significant local programming in the choicest viewing hours. (Les Brown’s Encyclopedia of Television)

If you still accept the official myth that the airwaves are "That most public of possessions given into the trust of the networks," as Harlan Ellison describes them in The Glass Teat, and that the federal government’s job is to manage the radio spectrum in the best interests of that public, then I’m sure you don’t see any problem with PTAR. (You can read my paper "Radio Free Rothbard" [HTML, PDF] for a debunking of this official piety.)

But a libertarian could easily jerk his or her knee in the opposite direction. How dare the central government tell private station owners what they can and can’t air on their own stations, right?

The problem with such an ahistorical take on the issue is that broadcast television was a creature of the state from the beginning. Radio may have had a nascent free-market stage in its development, but television was a state-managed cartel from the word go.

So am I saying that PTAR was a good thing? Is it like the possibly beneficial banking regulations imposed on a cartelized banking system? Should we view CBS versus FCC as the same sort of balance-of-power game that Church and State played before the early modern period of European history?

Maybe, but that’s not why I find PTAR an interesting case for the liberty-minded historian. As is so often the case with laws and regulations, PTAR’s main legacy is in its unintended consequences.

"Despite the best of intentions," writes historian Gary Edgerton in The Columbia History of American Television, "the PTAR failed in almost every respect when it was implemented in the fall of 1971."

[P]ractically no local productions or any programming innovations whatsoever were inspired by the PTAR. In addition, any increase in independently produced programming was mainly restricted to the reworking of previously canceled network series, such as Edward Gaylord’s Hee Haw and Lawrence Welk’s The Lawrence Welk Show.… Rather than locally produced programming, these kinds of first-run syndicated shows dominated the 7 to 8 P.M. time slot.

This renaissance of recently purged rural programming was certainly not the FCC’s goal, but the creation of the first-run-syndication model is one of the great unsung events in media history.

A quick note on terminology: to the extent that I knew the word "syndication" at all when I was growing up, I took it to be a fancy way of saying "reruns." For example, Paramount, the studio that bought the rights to Star Trek after the series was cancelled, sold the right to rerun the program directly to individual TV stations. When a local TV station buys a program directly from the studio instead of through the network system, that’s called syndication. But syndication isn’t limited to reruns. Studios created first-run TV programs for direct sale to local stations as far back as the 1950s, but they were the exception. The dominant syndication model was and is reruns. But two events created a surge of first-run syndication: (1) PTAR, and (2) the rural purge I obliquely alluded to above.

I write about the rural purge here, but I’ll summarize: as the 1960s turned into the 1970s, television network executives did an about-face on their entire approach to programming. In the 1960s, each network tried to win the largest possible viewership by avoiding controversy and appealing to the lowest common denominator in public tastes. This meant ignoring the rift between races, between generations, and between urban and rural sensibilities — what we now call red-state and blue-state values — in the ongoing culture wars. This approach was dubbed LOP (Least Objectionable Program) theory.

Basically, this theory posits that viewers watch TV no matter what, usually choosing the least objectionable show available to them. Furthermore, it assumes a limited number of programming choices for audiences to pick from and implies that networks, advertising agencies, and sponsors care little about quality when producing and distributing shows. (Gary Edgerton, The Columbia History of American Television)

By the end of the decade, however, NBC vice president Paul Klein (who had christened LOP theory just as its tenure was coming to an end) convinced advertisers that they should stop caring so much about total viewership and focus instead on demographics, specifically the Baby Boomers — young, politically radicalized, and increasingly urban TV viewers — who were most likely to spend the most money on the most products. CBS was winning the battle for ratings, but Klein pointed out that its audience was made up of old folks and hicks, whereas NBC was capturing the viewership of the up-and-comers.

Klein may have worked for NBC, but it was CBS that took his message to heart, quite dramatically. In 1970 and 1971, the network rocked the TV world by cancelling its most reliably popular shows: Petticoat Junction, Green Acres, The Beverly Hillbillies, Mayberry R.F.D., Hee Haw, and Lassie. (ABC dropped The Lawrence Welk Show in the same purge.)

In Television’s Second Gold Age, communications professor Robert J. Thompson writes,

CBS, in an effort to appeal to a younger audience made socially conscious by the turbulent 1960s, had dumped its hit rural comedies in the first years of the 1970s while their aging audiences were still placing them in Nielsen’s top twenty-five. Critics, who for the most part had loathed the likes of Petticoat Junction and Gomer Pyle, loved some of what replaced them.

I loved what replaced them, too: Mary Tyler Moore, All in the Family, M*A*S*H, and the like. "Several members of Congress," Wikipedia informs us, "expressed displeasure at some of the replacement shows, many of which … were not particularly family-friendly." But that was the point: the networks were no longer aiming to please the whole family, just the most reliable consumers.

But despite capitalism’s cartoonish reputation for catering only to the bloated hump of the bell curve, that’s not how the market really works. It is how a cartel works, and the broadcast networks behaved accordingly, both before and after the rural purge. In the 1950s and ’60s, they aimed for the largest possible viewership and to hell with minorities of any sort. The demographic revolution changed the target, but not the tactic: aim for the big soft mass. That’s certainly how the big players would behave in a free market, too, but the telltale sign of freedom in the economy is that the big players aren’t the only players. Fortunes are made in niche markets, too, so long as there aren’t barriers to entering those niches. As I’ve said, TV is descended from radio, and Hoover and his corporatist cronies had arranged it so that there could only be a few big players.

That’s where we come back to the FCC’s Prime Time Access Rule of 1970. PTAR created a hole at the fringe of the prime-time schedule, just as the rural purge was creating a hole in the market. All those fans of Hee Haw and Lawrence Welk didn’t just go away, and they didn’t stop spending their money on advertised products, either. Before PTAR, the multitude of fans of "rural" programming would have had to settle for mid-afternoon reruns of their favorite shows (the way Star Trek fans haunted its late-night reruns around this same time). But the rural fans didn’t have to settle for reruns, and they didn’t have to settle for mid-afternoons or late nights. They could watch new episodes of Hee Haw or Lawrence Welk at 7 PM. In fact, those two shows continued to produce new episodes, and the local stations, which were no longer allowed to buy network programming for the early evening hours, bought first-run syndicated shows instead. The Lawrence Welk Show, which had started in the early 1950s, continued for another decade, until Welk retired in the early ’80s. And the repeats continue to run on PBS today. Hee Haw, believe it or not, continued to produce original shows for syndication until 1992.

I loved Mary Tyler Moore, and I didn’t care so much for Lawrence Welk, but what I really love is peaceful diversity, which cannot exist in a winner-takes-all competition. The rise of first-run syndication was a profound crack in the winner-takes-all edifice of network programming.

The strategy CBS, NBC, and ABC had gravitated toward for short-term success — namely, targeting specific demographics with their programming — also sowed the seeds of change where the TV industry as a whole would eventually move well beyond its mass market model. Over the next decade, a whole host of technological, industrial, and programming innovations would usher in an era predicated on an entirely new niche-market philosophy that essentially turned the vast majority of broadcasters into narrowcasters. (Gary Edgerton, The Columbia History of American Television)

This idea of "narrowcasting" is the basis of quality in entertainment (and freedom in political economy, but that’s another story).

I’m not out to sing the praises of the FCC for increasing economic competition and cultural diversity — these consequences were entirely unintended — but we do have to recognize PTAR as a pebble in Goliath’s sandal, distracting him for a moment from David’s sling.

should we thank the GOP for quality TV?

In the comments section of my blog post "right-wing TV," Scott Lahti disputes my claim that "Each of the three golden ages began while Republicans were in the White House."

I will not argue over the question of when a golden age of television begins. It will always be a matter of opinion, not necessarily marked by the beginning of a particular TV series. I should have said that each of the three golden ages took place or flourished under a Republican president.

He also draws our attention to some details that Professor Thompson got wrong in the page and a half I quoted from his book Television’s Second Gold Age about the relationship between Republican presidential administrations and the quality of television drama.

Scott is correct about Laugh-In debuting while LBJ was still president, and about Hill Street Blues debuting five days before Reagan’s inauguration, not after.

But I’d like to argue that the connection that Professor Thompson draws between "conservative Republican" administrations and creative freedom on the small screen is valid.

Read more of this post

the golden age at twilight

When I was in 5th grade, the teacher, Mr. Kelly, asked the class if anyone could tell him the definition of the word twilight. I raised my hand, excited to know the answer for once: “A dimension not only of sight and sound but of mind — a journey into a wondrous land whose boundaries are that of imagination…”

“You idiot!” interrupted Mr. Kelly. (Does the setting of New York City in the 1970s explain at all why the teacher talked to his pupils that way?) “That’s the Twilight Zone! — Twilight is the period between sunset and darkness…”

Oh, I thought. So that’s why the show is called the Twilight Zone. It’s an in-between thing.

I wonder if there are kids today who will someday tell a similar story — probably with a less ill-mannered teacher — in which they answer the vocabulary question by stating that “twilight” is when high-school vampires are in love with teenage mortals.

When I was a kid, The Twilight Zone was the smartest television show I watched. And I watched a lot of TV. It had already been off the air for a decade, but so had most of my shows. I grew up in the 1970s watching the TV of the 1950s and ’60s on a portable black-and-white television set with antennas made of coat hangers and tinfoil.

I loved the plot twists, and I didn’t mind all the moralizing. Most of the television I watched was preachy — and kids are used to being preached at from all directions, not just their TV viewing — but unlike all the other shows I watched, The Twilight Zone dealt with mind-bending ideas, and its plots weren’t predictable, at least not to me. Each episode ended with a revelation, and I enjoyed trying to guess what it would be, though I seldom guessed right.

The critics had loved it from the beginning — well before the show became popular with viewers — and later critics ranked it as a high point in television history:

Read more of this post

Paramount thinking

I’m listening to Paul Cantor’s lecture series Commerce and Culture while I bounce back and forth between a book he wrote (The Invisible Hand in Popular Culture: Liberty vs. Authority in American Film and TV) and a book he recommended to me (The Fourth Network: How FOX Broke the Rules and Reinvented Television).

So with my head in Hollywood, so to speak, my eye was drawn to this tidbit on the Wikipedia homepage this morning:

Did you know…

From Wikipedia’s newest content:

Never heard of it!

Here’s the last line of the summary:

Read more of this post

stamping out dissent

My fixation on female national personifications continues:

Socialist president François Hollande has successfully courted controversy in his Bastille Day announcement of a new national postage stamp.

Since 1944, each new French president has chosen a new illustration for France’s postage stamps — always an image of Marianne, the Phrygian-hat-wearing feminine symbol of the French Republic (the way the UK has Britannia and the US used to have Columbia before Uncle Sam elbowed her aside).

"I decided following my election," said Hollande, "that the Republic’s new stamp would have the face of youth, that it would be created by youth, and that it would be chosen by youth."

Chosen by youth?

Read more of this post

a gift from the people of France?

I must apologize for an error in my recent post “on seeing Lady Liberty in Paris.”

On the subject of the Statue of Liberty I wrote, “The one in New York Harbor was a gift from the French government, so I can imagine Parisians consider Lady Liberty to be as much a French symbol as an American one.”

But the statue was not, in fact, a gift from the French government. I believe my mistake is based on a 20th-century reading of a 19th-century idiom.

The National Park Service, which maintains the monument, makes the claim I learned in grammar school: "The Statue of Liberty Enlightening the World was a gift of friendship from the people of France to the people of the United States.…"

To quote Max Borders from the Freeman, “There is probably no greater threat to real community than the conflation of community with State power.”

And yet that conflation surrounds us. I certainly grew up with it as a common refrain in my schooling. Most of the time when the teacher said “the people,” she meant the state.

Really, how can “the people of France” give anything to anyone? I just assumed it was the standard rhetorical trick, using “the people” as a euphemism for the government.

The real history turns out to be much more interesting.

Read more of this post

on seeing Lady Liberty in Paris

This post comes from southern France, near the Pyrenees.

Natalie Goldberg, author of Writing Down the Bones (or was it Julia Cameron in The Artist’s Way?), noted that Ernest Hemingway and other American authors of the Lost Generation wrote about their home country from Paris, then wrote about France after returning to America.

That fits my own experience. I certainly think most about American culture and character when I’m abroad. All the little details, the background texture of life, are different in other countries. Beyond the obvious differences, like the size, shape, and color of money and all the advertisements and instructions that are no longer in English (or are in a noticeably different English if you’re visiting Britain), there are other, more subtly alien aspects of mundane life, such as the shape of door handles, light switches, power outlets, and absolutely everything in the bathroom, including the question of whether all the "bathroom" amenities are together in one room or separated into two. I expect license plates to look different, but I’m caught off guard by how narrow the streets are, and how narrow the trucks and vans have to be in consequence — they’re driving on streets built for medieval horse carts. Different products are on display abroad; especially different are food and drink.

The longer you’re away and the more you’ve acclimated to these foreign details, the more of an adjustment it is to return home, too. I recall landing in the United States in the late 1980s, after half a year abroad, mostly in Israel but also visiting Egypt and Amsterdam. It was so weird to me that all the signs — street signs, store signs, billboards — everything was in English. I’d grown so used to understanding only maybe a tenth of all the written messages around me. Being surrounded by English felt like information overload — so much more than I wanted to know. The forest of neon signs in a foreign city center can be beautiful. Those same signs in my own language look garish.

My recent time in Paris was less of an adjustment. I’ve been there often enough that its mundane details are more familiar. But I still think more about America while walking Parisian streets. While crossing a bridge the other day, I saw, dwarfed by the Eiffel Tower behind her, a small version of the Statue of Liberty. The one in New York Harbor was a gift from the French government, so I can imagine Parisians consider Lady Liberty to be as much a French symbol as an American one.

But as I mentioned in the blog post "worshipping the wrong goddess," it’s hard for an American to see a lady with a torch and not think of her as "ours."

In fact, however, Lady Liberty’s appearance is much less uniquely American now than it used to be, certainly less clearly nationalist than Columbia, the feminine personification of America in popular use from 1776 through World War I, by which time, draped in the American flag — or rather, draped in classical robes with the very non-classical colors, stars, and stripes of the American flag — she implored Americans to sacrifice their individual interests for the sake of the bankers and crony capitalists whose investments were threatened by the war in Europe. That’s not how she put it, of course.

(note the “liberty cap”)

Garance Franke-Ruta in the Atlantic attributes Columbia’s retreat from the American scene to a different cause: "Uncle Sam’s older, classier sister," she writes, "fell out of favor after women got the vote."

She does acknowledge other contributing factors:

Perhaps it had something to do with the rise of Lady Liberty as an icon, though in the 19th century the two were sometimes visually interchangeable, if not identical. Perhaps it had something to do with Columbia’s role beseeching citizens to endure hardship during the Great War.

But she clearly prefers the gender-war interpretation:

Or perhaps it was something bigger: Female national personifications in general fell out of vogue as women took on a growing role as emancipated citizens.

By "female national personifications," Franke-Ruta is referring not just to Columbia but also to the UK’s Britannia and France’s Marianne. Portraying nations as women was apparently in vogue in the 18th and 19th centuries, less so in the 20th. Franke-Ruta would like to see Columbia make a comeback in the 21st:

A century later, Columbia looks like a lady who knows how to lean in. Enough time has passed, it seems, that we might consider reviving her spirit, and returning her to the pantheon of American characters for the years to come.

The suffragettes adopted Columbia as a figure of strength and determination for the cause of women’s rights, and it is this application — Columbia’s ability to "lean in" — that appeals to Franke-Ruta most: "When the suffragettes donned robes and armor, they garbed themselves in her rebel warrior’s spirit."

The warrior’s spirit I see, but did Columbia manifest a rebel’s spirit as well? She did so in the 18th century, when Paul Revere and other American patriots invoked feminine personifications to represent both Britain and America in the colonial struggle before independence.

In her earliest representations, Columbia (or Lady Liberty — as Franke-Ruta acknowledges, "the two were sometimes visually interchangeable, if not identical") was an American Indian, sometimes dressed in classical robes, sometimes naked, as in this political cartoon:

The proper English lady (an early interpretation of Britannia?) declares, "I’ll force you to Obedience, you Rebellious Slut."

The rebellious slut is defiant: "Liberty, Liberty forever, Mother, while I exist."

But where is there any rebel spirit in the Columbia of World War I? She has become an apologist for interventionist foreign policy and a manipulator of public sentiment — a source of sacrifice and guilt rather than backbone and righteous indignation.

I can’t speak to the withdrawal of Britannia or Marianne from popular nationalist semiology, but I disagree with Franke-Ruta about Columbia, both her diagnosis and her prescription.

Library of Congress researcher Ellen Berg describes the change from feminine, idealist Columbia to severe and scolding Uncle Sam as tracking the US government’s foreign-policy shift from noninterventionism to imperialism.

Around the same time, the previously fluid boundaries between Columbia (a symbol of the nation) and Lady Liberty (a symbol of, well, liberty) became more distinct. By the early 20th century, their separation was complete, with only one icon still clad in the Stars and Stripes, the other holding not a sword but a torch.

The American imagination did not reject the feminine, only the feminine warrior. The dissonance became too stark. The suffragettes may have embraced the lady with the sword and shield, but Americans more generally preferred the one with the flame of freedom. Let masculine Uncle Sam represent the narrower national interests; the feminine symbol stood, torch held high, for a much greater cause.

When I saw Lady Liberty in the Seine, I thought, Good! The French could use more liberty. So could we all.

To long for a return to the Columbia of 100 years ago is to seek a giant step backwards.