Why should we care about 1970s television?
Neil Postman’s Amusing Ourselves to Death is one of many works from the latter half of the 20th century that bemoans what he calls the “Age of Show Business,” or what others have called the culture of television. Why should we care about this today? The general argument of these books is that TV is intentionally shallow and vapid, playing to the lowest common denominator of its audience, and that everyone who watches television will therefore become shallow and vapid. Although his arguments resemble those of many such works, Dr. Postman’s unique contribution is a striking political argument about the interaction of television and our political will. Why is this work important in an age when television is becoming less and less the center of our culture? Why does it stand out from all the others of its ilk? It is because Dr. Postman tries to convince us that our culture (that is to say, our entertainment) determines our political behavior and robs us of our free will! Interestingly, he wraps his arguments around a sort of prophetic competition between George Orwell’s 1984 and Aldous Huxley’s Brave New World, which lends a sense of depth to his work. It also allows him to use literature (which he views as vastly superior) to criticize TV.
Almost the entire first half of the work is a defense of literature, in which he provides his rationale for why literature is vastly superior to television (and, by extension, to more modern forms of video media accessed via the internet). Or as he says: “I am not making a case for epistemological relativism. Some ways of truth-telling are better than others, and therefore have a healthier influence on the cultures that adopt them. Indeed, I hope to persuade you that the decline of a print-based epistemology and the accompanying rise of a television-based epistemology has had grave consequences for public life, that we are getting sillier by the minute.” Literature, or “typography” as he often calls it, produces a sophisticated consumer of information because it forces the reader to be an active participant, while television is an “idiot’s delight” because it asks nothing but passivity from its viewers. These are common arguments that can be found in many works of the time; see, for example, Jerry Mander’s “Four Arguments for the Elimination of Television,” among others.
The real value of this work lies in the second half, in which Dr. Postman gives us the meat of his argument: that “in the Age of Television we have less to fear from government restraints than from television glut; that, in fact, we have no way of protecting ourselves from information disseminated by corporate America; and that, therefore, the battles for liberty must be fought on different terrains from where they once were.” This is not some nefarious plot to take power, simply a function of how television as big business works. While in the past America’s political culture consisted of reading Thomas Paine’s “Common Sense” or a transcript of the 1858 Lincoln-Douglas debates, today Americans watch 30-second political commercials. But where others criticize this as the “dumbing down” of America, Dr. Postman goes a step further and sees it as a new “ideology.” In the “Age of Show Business,” the very structure of political argument has changed for the average person: it is visual, emotional, commercial, and shallow. Citizens are controlled by the arguments because they are passive consumers rather than active participants. It is an “unintended consequence of a dramatic change in our modes of public conversation. But it is an ideology nonetheless, for it imposes a way of life, a set of relations among people and ideas, about which there has been no consensus, no discussion, and no opposition. Only compliance.” This is not Marshall McLuhan’s “the medium is the message”; this is the concept that the message is controlled by the type of media that delivers it. Not just the ideas, but the texture, the emotions, and the ways of expressing arguments. Dr. Postman is not worried about Orwellian Newspeak, because he does not see a malevolent force controlling the people; rather, he fears a Huxleyan dystopia in which people have ceded their intellect to the mode of the media.
I say “mode” rather than “control” because his point is not that people’s ideas are being controlled, as you might expect in an authoritarian regime, but that the texture of people’s ideas follows the pattern set forth by the media. And since this pattern is inferior to what preceded it, our culture will be inferior to what preceded us.
It has been 35 years since the first edition of this book, but it is more important than ever. Though the internet was in its infancy and YouTube did not exist when this was written, the pattern of visual media was already being established by television. Though there are many differences between then and now, few have explored the effect of media on consumers in the way that this work does. It is subtle though not understated: it warns us that we must pay attention to the ways that we consume our media if we do not want to be consumed by our media.
———- Chapter 9: Reach Out and Elect Someone
The writing is clear and direct; he uses phrases such as “The point is” and “I mean to say”.
Ideas are placed in context which provides helpful background information.
His passion for the topic shows through.
This is more of an exposition than an academic work, so he provides no chain of logic to build his arguments.
He has a strong religious bias which might turn some people off.
Professor John Mullan examines the origins of the Gothic, explaining how the genre became one of the most popular of the late 18th and early 19th centuries, and the subsequent integration of Gothic elements into mainstream Victorian fiction.
Gothic fiction began as a sophisticated joke. Horace Walpole first applied the word ‘Gothic’ to a novel in the subtitle – ‘A Gothic Story’ – of The Castle of Otranto, published in 1764. When he used the word it meant something like ‘barbarous’, as well as ‘deriving from the Middle Ages’. Walpole pretended that the story itself was an antique relic, providing a preface in which a translator claims to have discovered the tale, published in Italian in 1529, ‘in the library of an ancient catholic family in the north of England’. The story itself, ‘founded on truth’, was written three or four centuries earlier still (Preface). Some readers were duly deceived by this fiction and aggrieved when it was revealed to be a modern ‘fake’.
The novel itself tells a supernatural tale in which Manfred, the gloomy Prince of Otranto, develops an irresistible passion for the beautiful young woman who was to have married his son and heir. The novel opens memorably with this son being crushed to death by the huge helmet from a statue of a previous Prince of Otranto, and throughout the novel the very fabric of the castle comes to supernatural life until villainy is defeated. Walpole, who made his own house at Strawberry Hill into a mock-Gothic building, had discovered a fictional territory that has been exploited ever since. Gothic involves the supernatural (or the promise of the supernatural), it often involves the discovery of mysterious elements of antiquity, and it usually takes its protagonists into strange or frightening old buildings.
The Mysteries of Udolpho
In the 1790s, novelists rediscovered what Walpole had imagined. The doyenne of Gothic novelists was Ann Radcliffe, and her most famous novel, The Mysteries of Udolpho (1794) took its title from the name of a fictional Italian castle where much of the action is set. Like Walpole, she created a brooding aristocratic villain, Montoni, to threaten her resourceful virgin heroine Emily with an unspeakable fate. All of Radcliffe’s novels are set in foreign lands, often with lengthy descriptions of sublime scenery. Udolpho is set amongst the dark and looming Apennine Mountains – Radcliffe derived her settings from travel books. On the title page of most of her novels was the description that was far more common than the word ‘gothic’: her usual subtitle was ‘A Romance’. Other Gothic novelists of the period used the same word for their tales, advertising their supernatural thrills. A publishing company, Minerva Press, grew up simply to provide an eager public with this new kind of fiction.
Radcliffe’s fiction was the natural target for Jane Austen’s satire in Northanger Abbey. The book’s novel-loving heroine, Catherine Morland, imposes on reality the Gothic plots with which she is familiar. In fact, Radcliffe’s mysteries all turn out to have natural, if complicated, explanations. Some critics, like Coleridge, complained about her timidity in this respect. Yet she had made a discovery: ‘gothic’ truly came alive in the thoughts and anxieties of her characters. Gothic has always been more about fear of the supernatural than the supernatural itself. Other Gothic novelists were less circumspect than Radcliffe. Matthew Lewis’s The Monk (1796) was an experiment in how outrageous a Gothic novelist can be. After a parade of ghosts, demons and sexually inflamed monks, it has a final guest appearance by Satan himself.
Frankenstein and the double
A second wave of Gothic novels in the second and third decades of the 19th century established new conventions. Mary Shelley’s Frankenstein (1818) gave a scientific form to the supernatural formula. Charles Maturin’s Melmoth the Wanderer (1820) featured a Byronic anti-hero who had sold his soul for a prolonged life. And James Hogg’s elaborately titled The Private Memoirs and Confessions of a Justified Sinner (1824) is the story of a man pursued by his own double. A character’s sense of encountering a double of him- or herself, also essential to Frankenstein, was established as a powerful new Gothic motif. Doubles crop up throughout Gothic fiction, the most famous example being the late 19th-century Gothic novella, Robert Louis Stevenson’s Strange Case of Dr Jekyll and Mr Hyde.
This motif is one of the reasons why Sigmund Freud’s concept of the uncanny (or unheimlich, as it is in German) is often applied to Gothic fiction. In his 1919 paper on ‘The Uncanny’ Freud drew his examples from the Gothic tales of E T A Hoffmann in order to account for the special feeling of disquiet – the sense of the uncanny – that they aroused. He argued that the making strange of what should be familiar is essential to this, and that it is disturbing and fascinating because it recalls us to our original infantile separation from, or origin in, the womb.
Extreme psychological states and horror
Another writer who commonly exploited doubles in his Gothic tales was the American Edgar Allan Poe. He used many of the standard properties of Gothic (medieval settings, castles and ancient houses, aristocratic corruption) but turned these into an exploration of extreme psychological states. He was attracted to the genre because he was fascinated by fear. In his hands Gothic was becoming ‘horror’, a term properly applied to the most famous late-Victorian example of Gothic, Bram Stoker’s Dracula. The opening section of Dracula uses some familiar Gothic properties: the castle whose chambers contain the mystery that the protagonist must solve; the sublime scenery that emphasises his isolation. Stoker learned from the vampire stories that had appeared earlier in the 19th century (notably Carmilla (1872) by Sheridan Le Fanu, who was his friend and collaborator) and exploited the narrative methods of Wilkie Collins’s ‘sensation fiction’. Dracula is written in the form of journal entries and letters by various characters, caught up in the horror of events. The fear and uncertainty on which Gothic had always relied is enacted in the narration.
The Gothic in mainstream Victorian fiction
Meanwhile Gothic had become so influential that we can detect its elements in much mainstream Victorian fiction. Both Emily and Charlotte Brontë included intimations of the supernatural within narratives that were otherwise attentive to the realities of time, place and material constraint. In the opening episode of Emily Brontë’s Wuthering Heights, the narrator, Lockwood, has to stay the night at Heathcliff’s house because of heavy snow. He finds Cathy’s diary, written as a child, and nods off while reading it. There follows a powerfully narrated nightmare in which an icy hand reaches to him through the window and the voice of Catherine Linton calls to be let in. The vision seems to prefigure what he will later discover about the history of Cathy and Heathcliff. Half in jest, Lockwood tells Heathcliff that Wuthering Heights is haunted; the novel, centred as it is on a house, seems to exploit in a new way the Gothic idea that entering an old building means entering the stories of those who have lived in it before.
Two of Charlotte Brontë’s novels, Jane Eyre and Villette, feature old buildings that appear to be haunted. As in the Gothic fiction of Ann Radcliffe, the apparition seen by Jane Eyre in Thornfield Hall, where she is a governess, and the ghostly nun glimpsed by Lucy Snowe in the attic of the old Pensionnat where she teaches, have rational explanations. But Charlotte Brontë likes to raise the fears of her protagonists as to the presence of the supernatural, as if they were latter-day Gothic heroines. Gothic still provides the vocabulary of apprehensiveness. Similarly, Wilkie Collins may have introduced into fiction, as Henry James said, ‘those most mysterious of mysteries, the mysteries which are at our own doors’, but he liked his reminders of traditional Gothic plots. In The Woman in White, all events turn out to be humanly contrived, yet the sudden appearance to the night-time walker of the figure of ‘a solitary Woman, dressed from head to foot in white garments’ haunts the reader as it does the narrator, Walter Hartright (ch. 4). The Moonstone is a detective story with a scientific explanation, but we never forget the legend that surrounds the diamond of the title, and the curse on those who steal it – a curse that seems to come true. The final triumph of Gothic is to become, as in these examples, a vital thread within novels that otherwise take pains to convince us of what is probable and rational.
- John Mullan
- John Mullan is Lord Northcliffe Professor of Modern English Literature at University College London. John is a specialist in 18th-century literature and is at present writing the volume of the Oxford English Literary History that will cover the period from 1709 to 1784. He also has research interests in the 19th century, and in 2012 published his book What Matters in Jane Austen?
The text in this article is available under the Creative Commons License.
Originally published by the British Library.
In 1930, a year into the Great Depression, John Maynard Keynes sat down to write about the economic possibilities of his grandchildren. Despite widespread gloom as the global economic order fell to its knees, the British economist remained upbeat, saying that the ‘prevailing world depression … blind[s] us to what is going on under the surface’. In his essay, he predicted that in 100 years’ time, ie 2030, society would have advanced so far that we would barely need to work. The main problem confronting countries such as Britain and the United States would be boredom, and people might need to ration out work in ‘three-hour shifts or a 15-hour week [to] put off the problem’. At first glance, Keynes seems to have done a woeful job of predicting the future. In 1930, the average worker in the US, the UK, Australia and Japan spent 45 to 48 hours at work. Today, the figure is still around 38 hours.
Keynes has a legendary stature as one of the fathers of modern economics – responsible for much of how we think about monetary and fiscal policy. He is also famous for his quip at economists who deal only in long-term predictions: ‘In the long run, we are all dead.’ And his 15-hour working week prediction might have been more on the mark than it first appears.
If we wanted to produce as much as Keynes’s countrymen did in the 1930s, we wouldn’t need everyone to work even 15 hours per week. If you adjust for increases in labour productivity, it could be done in seven or eight hours, 10 in Japan (see graph below). These increases in productivity come from a century of automation and technological advances: allowing us to produce more stuff with less labour. In this sense, modern developed countries have way overshot Keynes’s prediction – we need to work only half the hours he predicted to match his countrymen’s lifestyle.
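The adjustment above is simple arithmetic. As a back-of-envelope sketch (the specific figures here are illustrative round numbers assumed for the calculation, not taken from the essay):

```python
# Rough check: hours/week needed today to match 1930s output.
# Assumptions (not from the essay): a ~47-hour week in 1930, and
# labour productivity today roughly 6x its 1930 level.
hours_1930 = 47
productivity_multiple = 6  # assumed output per hour today vs. 1930

equivalent_hours = hours_1930 / productivity_multiple
print(f"{equivalent_hours:.1f} hours/week")  # roughly 7.8
```

With a productivity multiple anywhere near that range, the result lands in the seven-to-eight-hour band the essay describes.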
The progress over the past 90 years is not only apparent when considering workplace efficiency, but also when taking into account how much leisure time we enjoy. First consider retirement: a deal with yourself to work hard while you’re young and enjoy leisure time when you’re older. In 1930, most people never reached retirement age, simply labouring until they died. Today, people live well past retirement, living a third of their life work-free. If you take the work we do while we’re young and spread it across a total adult lifetime, it works out to less than 25 hours per week. There’s a second factor that boosts the amount of leisure time we enjoy: a reduction in housework. The ubiquity of washing machines, vacuum cleaners and microwave ovens means that the average US household does almost 30 hours less housework per week than in the 1930s. This 30 hours isn’t all converted into pure leisure. Indeed, some of it has been converted into regular work, as more women – who shoulder the major share of unpaid domestic labour – have moved into the paid labour force. The important thing is that, thanks to progress in productivity and efficiency, we all have more control over how we spend our time.
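The lifetime-spreading claim can be sketched the same way, again with assumed round figures (a ~40-year career at ~38 hours per week, spread across a ~60-year adult lifespan; none of these numbers are from the essay itself):

```python
# Spread a working career's total hours over a whole adult lifetime.
# All figures are illustrative assumptions.
career_years = 40            # e.g. roughly ages 20-60
weeks_worked_per_year = 48   # allowing for holidays
hours_per_week = 38

adult_years = 60             # e.g. roughly ages 18-78
total_work_hours = career_years * weeks_worked_per_year * hours_per_week
lifetime_weekly = total_work_hours / (adult_years * 52)
print(f"{lifetime_weekly:.1f} hours/week")  # under 25, as the essay claims
```

The exact figure moves with the assumptions, but any plausible combination of career length and lifespan keeps the lifetime average well below a conventional working week.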
So if today’s advanced economies have reached (or even exceeded) the point of productivity that Keynes predicted, why are 30- to 40-hour weeks still standard in the workplace? And why doesn’t it feel like much has changed? This is a question about both human nature – our ever-increasing expectations of a good life – as well as how work is structured across societies.
Part of the answer is way-of-life inflation: humans have an insatiable appetite for more. Keynes spoke of solving ‘the economic problem, the struggle for subsistence’, but few people would choose to settle for mere subsistence. Humans live on a hedonic treadmill: we always want more. Rich Westerners could easily work 15 hours a week if we forgo the trappings of modern life: new clothes and Netflix and overseas holidays. This might seem trite when talking about consumer goods, but our lives are better across many other important dimensions, too. The same logic that applies to Netflix also applies to vaccines, refrigerators, renewable energy and affordable toothbrushes. Globally, people enjoy a standard of living much higher than in 1930 (and nowhere is this more true than in the Western countries that Keynes wrote about). We would not be content with a good life by our grandparents’ standards.
We also have more people working in jobs that are several steps removed from subsistence production. As economies become more productive, employment shifts from agriculture and manufacturing to service industries. Thanks to technological and productivity progress, we can deal with all of our subsistence needs with very little labour, freeing us for other things. Many people today work as mental health counsellors, visual effects artists, accountants, vloggers – and all of them do work that is not required for subsistence. Keynes’s essay argues that more people will be able to pursue ‘the arts of life as well as the activities of purpose’ in the future, implicitly framing these activities as separate from the menial world of subsistence work. In actual fact, the world of work has simply expanded to include more activities – such as care work, the arts and customer service – that did not feature significantly in Keynes’s estimation of solving the problem of economic subsistence.
Finally, persistent social inequality also helps the 40-hour week persist. Many people have to work 30- to 40-hour weeks simply to get by. As a society, on aggregate, we are able to produce enough for everyone. But unless the distribution of wealth becomes more equal, very few people can afford to cut back to a 15-hour working week. In some countries, such as the US, the link between productivity and pay has broken: recent increases in productivity benefit only the top tier of society. In his essay, Keynes predicted the opposite: a levelling and equalisation, where people would work to ensure other people’s needs were met. In one sense, you can see this in the social safety nets that didn’t exist back in 1930. Programmes such as social security and public housing help people get over the low bar of the ‘economic problem’ of base subsistence, but they are insufficient to properly lift people out of poverty, and insufficient to meet Keynes’s ideal of giving everyone a good life.
In his essay, Keynes disdained some of the core tendencies of capitalism, calling the money motive ‘a somewhat disgusting morbidity’ and bemoaning that ‘we have exalted some of the most distasteful of human qualities’. Of course, these human qualities – ‘avarice and usury and precaution’ – drive progress forward. And striving for progress is no bad thing: even Keynes acknowledged that these tendencies are necessary to ‘lead us out of the tunnel of economic necessity’. But at some point we should look back to see how far we have come. Keynes was right about the amazing advancements his grandchildren would enjoy, but wrong about how this would change overall patterns of work and distribution, which remain stubbornly fixed. It doesn’t need to be so.
In developed countries, at least, we have the technology and tools for everyone to work less and still live highly prosperous lives, if only we structure our work and society towards that goal. Today’s discussions about the future of work quickly end up in fanciful predictions of total automation. More likely, there will continue to be new and varied jobs to fill a five-day work week. And so today’s discussions need to move beyond the old point about the marvels of technology, and truly ask: what is it all for? Without a conception of a good life, without a way to distinguish progress that’s important from that which keeps us on the hedonic treadmill, our collective inertia will mean that we never reach Keynes’s 15-hour working week.
This article was originally published at Aeon and has been republished under Creative Commons.
Liberals say that rising income inequality is hurting economic growth. Libertarians say that government regulation is to blame. Who’s right? Both, say Steven Teles and Brink Lindsey, who visited Stanford Graduate School of Business recently as part of its...
Human Rights Watch is seeking a Researcher and Advocate on Digital Rights to investigate, analyze, and advocate against human rights abuses related to online activities. The role of the Researcher and Advocate will include documenting and conveying the...
When you have too many books: Living in a Silicon Valley cottage, it became clear that something was going to have to go. Unfortunately for me, that meant that my wife’s and my rather large collection of books was on the chopping block. So one weekend we took...
Nearly one of every four people in the US is religiously unaffiliated. David Mislin, Temple University Last fall, the nonpartisan Public Religion Research Institute noted the growing number of religiously unaffiliated Americans: Nearly one of every four...
Attribution: Science Friday
Celebrating the 25th anniversary of The X-Files(!), 21st Century Fox’s game division FoxNext Games is about to release “The X-Files: Deep State”, a role-playing mystery/SF game that will transport you into the world of the FBI’s paranormal investigators. ...
The original series’ Walter Koenig recently shared his opinion on modern Star Trek, including the three recent movies and Star Trek: Discovery.... Check it out: Koenig On Modern Star Trek
In its 1977 review, The Dallas Morning News called Chewbacca a "Wookie." Now, on the film's 40th anniversary, the long national nightmare has ended. On that note: We, too, have something to confess. (Image credit: Elaine Thompson/AP) Source: 40 Years After 'Star Wars'...
Good news, everybody: Matt Groening is giving us the first new Futurama content in years! Bad news: it will be a mobile game. Let’s just hope it is better than the Simpsons games! Still, I’m sure it will be worth checking out when it is released (hopefully) later this...
I want my anti-matter! It’s hard to make, hard to store, expensive and volatile. But damn it, I want some! “If you had some,” you might ask, “what would you do with it?” Really? Do you have to ask? I would use it for fuel to get me to Alpha Centauri. Or perhaps...
It is often a fine line between SF and horror, finer still as we approach Halloween! In the spirit of the holiday here is a tidbit for your pleasure! https://youtu.be/BefliMlEzZ8
"The only difference between Bush and Hitler is that Hitler was elected" - Kurt Vonnegut. Whether you believe there is any truth in this quote or not, you can’t deny that it sounds like pure Vonnegut. Have any other SF authors written about the events of their lifetimes...
As part of Pres. Obama's Global Entrepreneurship Summit, Stanford University will be hosting a conference on the future of Artificial Intelligence. This is an example of the acceleration of a trend we have seen in recent years in which the focus of A.I. has been...
If you're always looking for good content it is important to support the Indie community! So check this out: https://vimeo.com/163429017
Won't Get Fooled Again by The Who's Pete Townshend We'll be fighting in the streets With our children at our feet And the morals that they worship will be gone And the men who spurred us on Sit in judgement of all wrong They decide and the shotgun sings the song I'll...
Politics and Political Science
Key: L = Left, R = Right, M = Moderate, S = Socialist
Crooks and Liars (L)
The Economist (M)
The Hill (R)
Mother Jones (L)
Monthly Review (S)
The Nation (L)
The New American (R)
The New Republic (L)
The XX Committee (R)
Cato Institute Libe
Literature and Media
Physical Science and Technology