III. The Age of New Media
“…Indeed, the introduction of the printing press affected only one stage of cultural communication – the distribution of media…In contrast, the computer media revolution affects all stages of communication, including acquisition, manipulation, storage, and distribution; it also affects all types of media – texts, still images, moving images, sound, and spatial constructions...”
Lev Manovich, The Language of New Media[1]
III.1. Introduction
The term New Media was created by media scholar Lev Manovich of The Massachusetts Institute of Technology when he found traditional terminology inadequate to describe what some have called The Digital Revolution. In Manovich’s own words:[2] “New media calls for a new stage in media theory whose beginnings can be traced back to the revolutionary works of Harold Innis in the 1950s and Marshall McLuhan in the 1960s...”[3]
The limits of the conventional terminology are illustrated by this quote from cinema historian Jay Ruby, in which Ruby attempts to place the current technological phenomenon of New Media in a historical context while expressing concern about its potential impact on our lives: “We stand on the threshold of a telecommunications revolution – a revolution potentially as profound and far-reaching as the agricultural and industrial revolutions. The one significant difference between the present changes is that the telecommunications revolution is happening so fast, we can actually be aware of it…”[4]
Ruby was writing in 2005; it is fair to say that what he called a telecommunications revolution has been happening so fast that societies around the world and their citizens have had difficulty adjusting to it. In this chapter, we shall explore the evolution of digital ideology from the introduction of digital systems into industries requiring low bandwidth, like print, to the current transformation of high-bandwidth media like film and television. This transformation is expanding geometrically around the world, and our traditional media cultures are mutating into what Professor Henry Jenkins has called Convergence Culture.[5]
To help place The Age of New Media in a political historical context, let us go back to the analog era of the 1970’s and UNESCO’s failed attempt to create a more equitable distribution of global information through the establishment of The New World Information and Communications Order. As shall be seen, political decision makers of that era were well aware of the importance of the control of dissemination of information; the swift and strong reaction from the United States and the United Kingdom to the NWICO provided ample evidence of their awareness, and the issue of media control remains contentious to this day. However, as shall be seen, there were even larger issues at stake.
III.2. The New World Information and Communications Order
The United Nations as we know it today was created by the victorious allies of World War II in October 1945, with the core mandate of preventing another world war by providing a body for peaceful conflict resolution. Perhaps less known is the fact that, since 1942, the same victorious allies had been working on what became The United Nations Educational, Scientific and Cultural Organization, or UNESCO, with the core mandate of: “forging a culture of peace by fostering the generation and exchange of knowledge, including scientific knowledge, through international cooperation, capacity building and technical assistance to its member states…”[6]
While this mandate might seem superficially benign and relatively non-controversial, UNESCO became a hotbed of contention as many developing nations gained their political independence in the 1960’s and began to seek the means of defining their own cultures in mass media. For example, freedom of the press has long been held sacrosanct in the West, and government interference in the communications media had been officially taboo since World War II. While media manipulation and organized disinformation campaigns existed, they were rarely overt.
However, according to Canadian media scholar Zoe Druick, the reality was that the Western powers had dominated global media since the end of World War II. While their official goal was “the free flow of information”, their real agenda was perpetuating the dominance of Western media dating back to the Colonial Era.[7]
UNESCO was an example. Starting with John Grierson and his successor Ross McLean, the first directors of UNESCO Public Information were all former senior managers from the National Film Board of Canada who followed the neo-colonial, pro-Western media philosophies of Grierson, while rejecting any attempts by non-aligned nations and the Soviet bloc to create alternative paradigms for the flow of information. However, as shall be seen, as more and more developing nations discovered they had the numbers in the United Nations General Assembly to out-vote the former imperial powers running the UN Security Council, they began to challenge this Western dominance in the 1970’s.
These countries soon realized they could expand the original mandate of the United Nations and deal with disputes in areas like economics and international trade by asserting that peaceful resolution of these conflicts within the United Nations system was essential to international peace.
This expansion of the UN mandate was not limited to world trade.
The first UN Conference on the Human Environment in Stockholm in 1972 signaled to the world that the United Nations was now truly the first world body capable of tackling any issue of importance to mankind. While this conference ultimately was able to accomplish little in terms of tangible results, it did send the message that there was only one world, and that universal problems could only be dealt with through universal multilateral action. [8]
The nations seeking to use the United Nations to resolve these universal problems were part of what was called The Non-Aligned Movement, better known by the acronym NAM, and consisted of nations seeking “to create an independent path in world politics that would not result in member states becoming pawns in the struggle between the major powers…”[9]
By 1974, the NAM and their allies had a large majority in the UN General Assembly, and after they gained the support of the Organization of Petroleum Exporting Countries, or OPEC, they created The United Nations Declaration on the Establishment of a New International Economic Order, or the NIEO, the first in a series of proposals with the expressed intention of “transforming the governance of the global economy to redirect more of the benefits of the transnational integration towards the developing nations…”[10]
In broad strokes, the NIEO proposed a radical redistribution of global wealth from the former colonial powers in the Northern Hemisphere (referred to as “the North”) to their former colonies (referred to as “the South”).[11] The response from countries of the North was varied. Countries with social democratic governments like Sweden, Germany, Austria and Holland agreed to “incremental accommodation”, while the United States and the United Kingdom were less positive. American neoconservatives like Daniel Patrick Moynihan, Irving Kristol and others opposed the NIEO in no uncertain terms, claiming that the goal of the NIEO was to “capture the structure of international organizations created by the United States at the conclusion of World War II...”[12]
Over the next few years, the resulting international debate in assorted conferences and the United Nations General Assembly in New York was intense but inconclusive. While there were many bones of contention, the South’s insistence that all legal authority be transferred from international organizations to sovereign post-colonial nation states proved to be a major stumbling block for all the countries in the North, since it would mean the end of anything resembling international law.
Meanwhile, in Paris, UNESCO was working on something called The New World Information and Communication Order (NWICO). Its expressed goal was to address the unbalanced coverage of the developing world by the globally dominant Western media, with its “focus on events, rather than processes…”
A series of conferences led to what became known as The New Delhi Declaration of 1976: “1. The present global information flows are marked by a serious inadequacy and imbalance. The means of communicating information are concentrated in a few countries. The great majority of countries are reduced to being passive recipients of information which is disseminated from a few countries.
2. This situation perpetuates the colonial era of dependence and domination. It confines judgements and decisions on what should be known, and how it should be made known, into the hands of a few.
3. The dissemination of information rests at present in the hands of a few agencies located in a few developed countries, and the rest of the peoples of the world are forced to see each other, and even themselves, through the medium of these agencies.
4. Just as political and economic dependence are legacies of the era of colonialism, so is the case of dependence in the field of information, which in turn retards the achievement of political and economic growth.
5. In a situation where the means of information are dominated and monopolized by a few, freedom of information really comes to mean freedom of those few to propagate information in the manner of their choosing and the virtual denial to the rest of the right to inform and be informed objectively and accurately.
6. Non-Aligned countries have, in particular, been the victims of this phenomenon. Their endeavors, individual and collective, for world peace, justice, and for the establishment of an equitable economic order, have been under-played or misrepresented by international news media. Their unity has sought to be eroded. Their efforts to safeguard their political and economic independence and stability have been denigrated…”[13]
In August of 1976, 87 NAM members endorsed the New Delhi Declaration at the 5th NAM Summit in Colombo, Sri Lanka, declaring, “A new international order in the fields of information and mass communication is as vital as a new international economic order.”[14]
In response, American publishers created The World Press Freedom Committee (WPFC) for the express purpose of lobbying against the NWICO at subsequent UNESCO conferences in San Jose (1976) and Nairobi (1976). When the American representatives determined that they were outnumbered, they decided to make the best of a bad situation, saying: “Worldwide, the New World Information Order could be good or bad. As the situation now stands, the United States has more to lose than any other nation if “The Order” becomes a fact. It should be noted, however, that the United States need not be a loser if appropriate steps are taken…”[15]
Eventually lengthy negotiations between the parties formed the basis for what became known as The MacBride Commission Report, which provided a detailed series of proposals for the creation of the NWICO when published in 1980.[16]
However, the election of American president Ronald Reagan ended the negotiations; Reagan was a firm opponent of multilateral solutions to international problems in general, and of both the NWICO and the NIEO specifically. As a result, both proposals were shelved indefinitely. In fairness, however, this author feels that the advocates of the NWICO made some fundamental mistakes that made it difficult even for sympathizers in the North who supported the rationale behind the proposal to endorse its final version.[17]
For example, rather than finding creative ways of producing new, alternative media on the grass roots level in the developing world, the NWICO proposed to rectify this imbalance from the top down - through Draconian government regulations which, among other things, would make it necessary for all journalists to obtain government-issued licenses to practice their profession in any country they visited. Such licenses would have meant the end to anything resembling press freedom, and critics of the NWICO seized on this obvious weakness in the proposal.
In this context, it seems the primary concern among the NWICO supporters was political control of media content; these supporters were quite ready to follow the Grierson paradigm for production and distribution of propaganda as long as they could control the narrative in their own countries. It was clear from the terminology used in the MacBride Commission Report that the authors of the NWICO were politicians, and not media professionals or artists.
Likewise, the very name New World Information and Communications Order, with its sinister Orwellian undertones, made the whole proposal an easy target. Condemnation was virtually universal in the Western media, and this condemnation was followed by a series of attacks on UNESCO itself, including personal attacks on its Senegalese Director-General, Amadou Mahtar Mbow, whose penchant for high living and autocratic management style had created many enemies both inside and outside the United Nations system.[18] Mbow attempted to respond with a vigorous defense, stating that he was the victim of a “veritable smear campaign”, and adding that, as the first (and, at that time, the only) African senior manager in the UN system, he was being treated “like an American black who has no rights…”[19]
In 1984, the United States left UNESCO, and was soon followed by the United Kingdom and Singapore. In 1987, Mbow was replaced by Federico Mayor Zaragoza of Spain; the New World Information Communication Order was thrown into the dustbin of historical obscurity. Unfortunately, many of the very real problems caused by the Western global media monopoly the proposal had sought to confront were forgotten in the North, though not in the nations of the South, which continue to deal with unbalanced media coverage to this day.[20] For evidence, all one needs to do is search what is today called American mainstream media[21] for a story about anything positive in Africa. An article in The Columbia Journalism Review from 2011 offers some illuminating statistics:
“US journalism continues to portray a continent of unending horrors. Last June, for example, Time magazine published graphic pictures of a naked woman from Sierra Leone dying in childbirth. Not long after, CNN did a story about two young Kenyan boys whose family is so poor they are forced to work delivering goats to a slaughterhouse for less than a penny per goat. Reinforcing the sense of economic misery, between May and September 2010 the ten most-read US newspapers and magazines carried 245 articles mentioning poverty in Africa, but only five mentioning gross domestic product growth…”[22]
III.3. A Virtual New World Information Communication Order
Ironically, today, almost four decades later, a genuine New World Information Communication Order has actually been created – though without any action by any United Nations organization or member state. Indeed, this New World Information Communication Order is fundamentally different from anything envisioned by the members of UNESCO, since it is technological rather than political.
This technology is universal in that it cannot discriminate and is available to anyone virtually anywhere, and it is democratic in that it is relatively inexpensive. However, this technology is also inherently subversive: it is constantly evolving at a rapid rate, and is therefore all but impossible to control, threatening authoritarian governments intent on centralized rule. It therefore comes as no surprise that many delegates to the most recent UNESCO spin-off to the NWICO, the 2019 World Summit on the Information Society, were primarily concerned with finding a way to monitor and control The Digital Revolution.[23] The digital genie is now out of the bottle, so to speak, and both governments and international organizations are desperately seeking ways to harness it.[24] Moving images of both trivial and dramatic events – including demonstrations, wars and atrocities from around the world – are recorded on the ground by individuals with cell phone cameras, and then uploaded onto websites which can be seen by virtually anyone – that is, anyone with access to the internet.[25]
However, even such images now seem trivial compared with the shocking British vote to leave the European Union – and the even more seismic 2016 election of American President Donald J. Trump. In both cases, conventional wisdom and the socio-economic establishments of the United Kingdom and the United States were defeated in important votes by insurrectionist campaigns lacking conventional political support in mass media or established political parties. Pundits and politicians, at a loss to explain why so many people apparently voted against their own interests, have been trying to make sense of the results ever since.[26]
However, there does seem to be a consensus that New Media played a critical role; indeed, American and British intelligence agencies have found evidence of organized social media campaigns designed to influence the democratic process, as well as evidence that these campaigns had been engineered by foreign powers like Russia and/or by Russian operatives.[27]
These charges are still being investigated, but there is little question that these results were a wakeup call for all who did not take New Media seriously, or even derided it as a diversion for children. We urgently need to understand some of the lessons and implications of these changes to be able to harness the energy of this telecommunications revolution to better serve the needs of the planet and its inhabitants rather than be manipulated or controlled by it.
The challenge is to observe and assess these phenomena even as they rapidly evolve, and then to develop strategies that will not be out of date by the time they are actually applied. For governments and bureaucratic institutions, where decision-making can move at a glacial pace, this is perhaps the ultimate challenge.
A brief examination of some of the prevailing attitudes towards New Media might be useful. As shall be seen, the attitudes described here all have their origins in the United States, which, thanks to institutions like The Massachusetts Institute of Technology on the East Coast and Silicon Valley on the West Coast, has become a center for research on both the benefits and the potential dangers of New Media.
As the late American cultural critic Professor Neil Postman commented: “…in cultures that have a high democratic ethos, relatively weak traditions, and a high receptivity to new technologies, everyone is inclined to be enthusiastic about technological change, believing that its benefits will eventually spread evenly among the entire population. Especially in the United States, where the lust for what is new has no bounds, do we find this childlike conviction most widely held…”[28]
III.4. The Cyber Utopians
The contemporary debate on pros and cons of the Digital Revolution can be divided into three fundamentally different schools of thought: The Cyber Utopians, The Cyber Agnostics and the Cyber Manichaeans.[29]
Let us first examine the basic arguments of the Cyber Utopians.
Defined by the Oxford Dictionary as “A naïve optimism focusing on the internet’s positive political potential for participatory democracy and freedom”,[30] the Cyber Utopian argument runs along the following lines:
digital technology is changing the lives of people around the world and, in most cases, demonstrably for the better. For example, in developing countries with limited infrastructure, cellphones are an invaluable, and relatively cheap, communications tool used for everything from keeping in touch with families to banking and organizing political rallies by SMS. Farmers can find out the international prices for their crops, and can now find buyers online. In short, digital technology is making life easier for people around the world.
As countries get increased access to broadband and hi-speed internet, citizens can now also dispense with costly and bulky computers, and instead use their cellphones for internet communications, watching television programs and, as has been seen in protests, filming real-time events and uploading them to websites like YouTube for instant mass communication and consumption by their peers.
According to the Cyber Utopians, digital technology has been a powerful democratizing force for people who had previously been living in pre-industrial conditions. Indeed, there are Cyber Utopians who see social media and the internet as a panacea which can cure all social ills, ranging from weak infrastructure to authoritarian regimes. Prominent Cyber Utopians include American counter-cultural figures such as Stewart Brand, author of The Media Lab – Inventing the Future at M.I.T. Thanks to his best-selling Whole Earth Catalog, which became a bible for the American counter-culture in the 1960’s and 1970’s, Brand has devoted much of his working life to finding ways to use new technologies to make our lives both more environmentally sustainable and independent of conventional technologies.
His philosophy might be described as Positivist, in the tradition of French philosopher Auguste Comte, and he generally seems to believe there is a positive technological solution to virtually every human problem – and he rarely mentions any potential downsides. As a result, even though Brand supports many common environmental concerns, his advocacy of both nuclear power and genetically modified crops has made him controversial in the environmental movement.[31]
Founded in 1985, The M.I.T. Media Lab occupies a spectacular $45 million building designed by the renowned architect I. M. Pei. The first director of the M.I.T. Media Lab was Nicholas Negroponte, who became famous in media circles for being the first to predict “the digital convergence of three industries – print/publishing, broadcast/entertainment, and computers – as well as the seismic shifts that this convergence would have on people, industries, society…”[32]
A more recent update on the activities of The M.I.T. Media Lab is Frank Moss’ 2012 book The Sorcerers and Their Apprentices.[33] Director of the M.I.T. Media Lab from 2006 to 2011, Moss has a business background as an executive with a variety of companies, ranging from IBM and Apollo Computer to Infinity Pharmaceuticals and his current venture, Bluefin Labs. Virtually all the activities depicted in his book involve sophisticated technical designs with generous corporate sponsors footing the bill. If there were ever any conflicts between, say, the designers and their corporate sponsors, Moss does not mention them. Given the volatile nature of creative individuals who are as passionate about their work as Moss says they are, this image of absolute harmony strains credulity.
In their book Born Digital – The First Generation of Digital Natives, John Palfrey and Urs Gasser express a classic Cyber Utopian view when they declare that: “the digital revolution has already made this world a better place…we are at a crossroads. There are two possible paths before us – one in which we destroy what is great about the Internet, and one in which we make smart choices and head towards a bright future in a digital age…”[34]
The Digital Revolution and New Media have thus swept the world into a technological crossroads, and international decision makers have few road maps to help guide them. Indeed, all they possess is an optimistic belief that technological progress is good because all progress is inherently good - even though we know from many scientific studies of the environment that some technologies can cause major problems no one could have envisioned when these technologies were created.
III.5. Cyber Agnostics
The rosy Cyber Utopian scenario is opposed by the Cyber Agnostics – those who reject the idea that the internet is inherently positive or negative, while being ready to point out some of its potential dangers. Perhaps the best-known advocate of Cyber Agnosticism is author Evgeny Morozov,[35] who criticizes the Cyber Utopians for failing to admit that the internet and social media guarantee nothing, and that the New Media are only tools that can be used either for good or for evil. Morozov argues that, like any other tool, their value depends on who is using them and how they are being used. He urges that we adopt a more dispassionate approach in our evaluation of the internet – an approach he calls cyber agnosticism: “For cyber agnostics, the goodness or badness of the internet is besides the point altogether; individual technologies and practices are what deserve our attention.”[36]
Morozov introduces a healthy note of skepticism for all wishing to better understand and analyze the digital revolution; however, he also overlooks the indisputable fact that any new technology changes us even as we use it – sometimes in ways we never could have imagined. The content of the message being communicated is only part of the picture. Another, perhaps equally important, part is the effect of the technology itself on the user.
As Marshall McLuhan stated, “The medium, or process, of our time – electronic technology – is reshaping and restructuring patterns of social interdependence and every aspect of personal life… Societies have always been shaped more by the nature of the media by which men communicate than by the content of the communication…”[37]
The sheer speed of technological change has seemingly presented government authorities with an almost impossible challenge; while the new technologies offer such great promise of economic and social progress, there is simply not enough time to explore the negative implications of any given technology until after it is already in widespread use.
As we shall see, there are some scholars and social critics who are both concerned and even alarmed by the pace and impact of New Media on our lives and on society in general. Some call these contemporary Cassandras Cyber Manichaeans.
III.6. The Cyber Manichaeans
Sometimes disparaged as technological Luddites,[38] Cyber Manichaeans are those who hold the dystopian belief that our exponentially increasing reliance on digital technology is inherently destructive. The late American author and media critic Neil Postman, former Chair of the Department of Communications Arts and Sciences of New York University, might be described as an early Cyber Manichaean. A longtime critic of American television, Professor Postman wrote 16 books and gave many lectures on our relationship to media and technology. In a 1997 lecture in Chicago, Postman suggested we ask seven basic questions when confronted with any new technology:
1. What is the problem for which this new technology is the solution?
2. Who benefits from the new technology?
3. What new problems will be caused by this solution?
4. Which people or institutions will be seriously harmed by this solution?
5. What changes in language are being caused by the new technology?
6. What people or institutions will acquire power or profit from this new technology?
7. What are the alternative uses of this new technology? [39]
As an example of a new technology designed to provide a solution to a specific problem, Postman cites the automobile. While the automobile has offered far more mobility than the horse, Postman notes that it has also created a host of problems, like environmental pollution and the destruction of landscapes. This leads Postman to one of his favorite bêtes noires – American Vice-President Al Gore’s project known as The Information Superhighway.[40] At that time, Vice-President Gore was actively promoting the investment of billions of dollars in new computer technology around the country.
Postman counters by asking how much more information the students actually need, and suggests that, if the goal is better schools, the money might be better spent on better classrooms, as well as more and better teachers. As he ruefully notes, the billions spent on new computers and computer technology certainly benefitted the hi-tech sectors of the American economy, while providing students with a glut of information that was of no discernible benefit.
Many new technologies intended to provide a solution to a specific problem have had unforeseen side effects. For example, in the pharmaceutical industry, medical authorities recognize that a given medication might work well for a specific condition, but that the same medication might also have very unpleasant side effects even worse than the original problem.
A classic and tragic case was the now notorious medication Thalidomide, initially touted as a wonder drug for insomnia by the pharmaceutical industry, but which later was discovered to cause horrendous birth defects in pregnant women. [41] Today, we hear regular reports of medications which have been discontinued due to unpleasant side effects – side effects that sometimes, as was the case with Thalidomide, are worse than the original problem to be solved.
The same potential for unintended side effects exists in contemporary communications technology. For example, while the now ubiquitous cell phone is recognized around the world as an invaluable tool for facilitating communications,[42] the American Cancer Society currently recommends minimizing use of hand-held devices because radiation from cellphones may cause brain cancer. Likewise, most authorities now seem to agree that use of cell phones while driving impairs driver capacity, and some countries have banned this practice altogether.[43][44]
As noted earlier, the sheer speed of technological change leaves governments with almost no time to explore the negative implications of any given technology before it is already in widespread use.
Such is the case with the internet. The popular demand for access to hi-speed internet has been so strong that all but the most repressive regimes have been forced to offer some form of access to their population, albeit frequently with some ambivalence.[45]
Half a century ago, American film critic Gene Youngblood, known for his love of avant-garde film as well as Stanley Kubrick’s visionary 2001, voiced his concerns about the growth of new technologies in his 1970 book Expanded Cinema: “We’ve learned from physics that the only anti-entropic force in the universe, or what is called negentropy (negative entropy), results from the process of feedback. Feedback exists between systems that are not closed, but rather open and contingent upon other systems…for most practical purposes, it is enough to say a system is ‘closed’ when entropy dominates the feedback process.”[46]
More recently, it is interesting to note that some previously prominent Cyber Utopians, like Julian Assange, have become Manichaean in their views of New Media. In an op-ed piece in the International Herald Tribune, for example, Assange described Google CEO Eric Schmidt and Jared Cohen’s new book The New Digital Age as “a startlingly clear and provocative blueprint for technocratic imperialism”, and warned that “the erosion of individual privacy in the West and the attendant centralization of power make abuses inevitable.”[47]
In short, Assange seems to fear our societies are becoming what Neil Postman calls Technopolies – cultures dominated by New Media monopolies controlling our lives in ways we could never imagine. In spite of such dire warnings, there seems to be little individuals or governments can do to arrest the juggernaut of New Media.[48] Resisting is not an option; all one can do is attempt to be aware of potential problems and plan accordingly – and, if necessary, become a heretic.
In the early days of digital technology in the 1980s, the speed of change was limited primarily due to a lack of bandwidth. Lack of bandwidth means lower capacity to transmit information, and, as a result, the first media to be threatened have been those requiring low levels of electronic information, such as print and music. As noted earlier in this chapter, a little over three decades ago, Nicholas Negroponte, the first Director of the M.I.T. Media Lab, predicted what he called The Convergence of New Media.
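The chapter’s chronology – print and music digitized first, film and television last – follows directly from how much data each medium requires. A rough, back-of-envelope comparison makes the ordering concrete; the parameter values below (reading speed, CD audio specifications, standard-definition video dimensions) are illustrative assumptions, not measurements from any particular system:

```python
# Back-of-envelope comparison of raw (uncompressed) data rates for three
# media, illustrating why low-bandwidth media (text) could be digitized
# and distributed decades before high-bandwidth media (video).
# All parameter values are rough illustrative approximations.

def text_bits_per_second(chars_per_sec=20, bits_per_char=8):
    """A text stream at a comfortable reading speed (~20 characters/s)."""
    return chars_per_sec * bits_per_char

def cd_audio_bits_per_second(sample_rate=44_100, bits_per_sample=16, channels=2):
    """Uncompressed CD-quality stereo audio."""
    return sample_rate * bits_per_sample * channels

def sd_video_bits_per_second(width=720, height=480, bits_per_pixel=24, fps=30):
    """Uncompressed standard-definition color video."""
    return width * height * bits_per_pixel * fps

text = text_bits_per_second()       # 160 bits/s
audio = cd_audio_bits_per_second()  # 1,411,200 bits/s
video = sd_video_bits_per_second()  # 248,832,000 bits/s

print(f"text : {text:>12,} bits/s")
print(f"audio: {audio:>12,} bits/s")
print(f"video: {video:>12,} bits/s")
```

Each step up the ladder is roughly three to four orders of magnitude, which mirrors the order in which print, music, and moving images were destabilized by digital distribution.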
Let us now examine the impact of The Convergence of New Media on the American Print Industry over the past three decades.
III.7. The Print Industry
The explosion of New Media has affected The American Print Industry in a variety of ways. Since there is no doubt that there are currently several endangered species in the print medium, let us briefly examine the economic impact on some of these endangered species.
First on the endangered species list of print media is the newspaper, which was already struggling to survive after radio and television began to offer competing services and siphon off advertisers. New Media seems to have effectively provided the coup de grace. It was recently estimated that newspapers would no longer be commercially viable in the United States by the year 2016.[49] That date has come and gone; now it is 2019, and, while some newspapers still exist, the prognosis for paper newsprint is not bright.
There was a common assumption that electronic or digital newspapers would replace paper newspapers, and that consumption of electronic newsprint would be more or less the same as that of paper newsprint. However, according to a recent study by The Nieman Foundation of Harvard University, this does not appear to be the case: “For American daily newspapers, the story of the last decade-plus hasn’t been about mass closures — it’s been about mass shrinkage. The pace at which newspapers are shutting down isn’t much different from what it was in the late 20th century. Instead, just about every daily paper has gotten smaller — smaller newsroom, smaller budgets, smaller print runs, smaller page counts — year after year after year. It’s death by a thousand paper cuts…”[50]
While the decline in advertising was anticipated, what was not anticipated was the drop-off in readership numbers in the transfer from the print to the digital newspaper. Many readers who stopped buying newspapers apparently never returned. A study of the transfer from print to digital versions of the British newspaper The Independent concluded, “Shutting down print doesn’t drive those readers to print-like consumption habits on digital devices. Instead, they become a lot like other digital readers — easily distracted, flitting from link to link, and a little allergic to depth…”[51]
These conclusions resemble those of the American writer Nicholas Carr, who once compared the digital revolution to the invention of the Gutenberg printing press. Unlike his colleague Neil Postman, however, Carr is no cybernetic Luddite. Quite to the contrary, Carr was initially an avid user of New Media who became disenchanted when he began to notice the impact of New Media on his own cognitive processes – an impact he describes in detail in his 2010 book, The Shallows: What the Internet Is Doing to Our Brains.
In broad strokes, Carr laments the displacement of traditional print media such as newspapers and books by electronic media like the internet and digital tablets. He makes the case that, for centuries, the very activity of reading has trained our brains to concentrate for extended periods of time, thus enabling serious thought. Now, however, Carr fears that our reliance on digital media will cause us, as a species, to lose our capacity to concentrate.[52] As a result, Carr believes that future generations may be incapable of serious thought or contemplation – which most of us would agree would be a serious and adverse unintended side effect.
Carr’s concerns are echoed by professionals in the American medical field who have noticed an alarming rise of an estimated 25% in cases of Attention Deficit Hyperactivity Disorder in the United States over the past decade – a rise that has coincided with the explosion of adolescent use of New Media. Current estimates of the prevalence of ADHD in the United States are 11% among adolescents and 4.1% among adults.[53][54] This increase in diagnosed cases of ADHD worldwide has led to a corresponding increase in the prescription of medications like Ritalin to help alleviate the symptoms. However, the causes, diagnosis and treatment of ADHD have been controversial outside the United States, and even American critics feel that the exportation of ADHD diagnoses around the world has been based not on medical practice, but on the exportation of American cultural mores among youth.
“Exporting American-based diagnoses like ADHD is really exporting American behavioral norms under the guise of medicine,” Peter Conrad, professor of sociology at Brandeis University, told The Huffington Post in a recent article. “With millions more kids (and adults) likely to be diagnosed with and treated for ADHD in the next decades we see the export of American behavioral norms worldwide. This may be more insidious than the globalization of American fast food or pop music, in that it comes in the name of proper mental health and behavior.”[55]
As a result, in countries like France, ADHD is not even a recognized disorder; instead of prescribing strong medication for adolescents, the French government has instituted a nationwide ban on smartphones in elementary schools.[56]
The response by the French educational authorities to ADHD is similar to that of French newspaper publishers who have seen what has happened to their colleagues in the United States. Initially, these French newspaper publishers refused to participate in Google’s search engine; they did not see why they should hasten their own demise by giving Google data free of charge that Google would then use to augment its user base, thereby becoming more attractive to advertisers. These French publishers were absolutely correct, of course, and traditionally competitive newspapers and other media organizations subsequently joined forces in France in 2017 against both Google and Facebook: “Le Monde and Le Figaro, traditionally fierce newspaper rivals, are letting advertisers book digital ad campaigns across their combined portfolio, using the same display or video ad formats for the first time. At the same time, Lagardière, Prisma Media, Condé Nast, Le Parisien and broadcaster M6 are pooling audience data in an initiative involving around 15 publishers…”[57]
As shall be seen, this action in France is part of an on-going global conflict between countries and corporations battling American corporate giants of New Media like Google and Facebook. In this conflict, the United States has been a champion of what some call a neo-liberal policy of de-regulation.[58]
The impact of this policy of de-regulation on the American Print industry has been overwhelming; the current New Media giant in the print industry is the colossus called Amazon.com, which started out in 1995 as an on-line book merchant. Now, 25 years later, Amazon has metastasized into the largest on-line purveyor of e-commerce in the United States, putting many retail outlets as well as bookstores out of business. A Bloomberg chart illustrates this spectacular growth in less than 25 years: “In the late 1990s, Amazon expanded into other commoditized media products, starting with music and movies. Electronics and toys followed. By the mid-2000s, Amazon’s growing network of warehouses also held kitchen items, sporting goods, video games, apparel and jewelry.” The chart notes that Amazon began stocking toys, electronics, home and kitchen goods, video games and software in 1999, and that by 2018 Amazon was number one in e-commerce, constituting 45 percent of a $524 billion industry.[59]
In short, in less than a quarter of a century, Amazon.com has gone from having a monopoly on American book sales to having a monopoly on American e-commerce.[60] In this context, it is important to note that, while one of the benefits of New Media is the possibility of Self-Publishing, this massive concentration of power in one entity makes a fair resolution of any dispute difficult.[61]
To illustrate, a recent article by Nick Morgan in Forbes Magazine advises new authors of fiction that, while self-publishing will enable the author to keep about 70% of all proceeds, the author will also have to assume the monumental task of marketing the book. Morgan recommends: “Fiction writers should probably self-publish, since they’re going to have to market themselves anyway, but don’t mind doing so too much, on average. So self-publish and start marketing yourself and your fiction pronto…Non-fiction writers, on the other hand, may be better off attempting the traditional route, if – and this is a big if – their primary goal is to use the book as a calling card to do something else…”[62]
Here Morgan is referring to using the published book to get speaking engagements, which he believes might prove more lucrative than book sales. However, Morgan then adds that the imprint of an established publishing house can be invaluable, and that the only way to gain access to an established publishing house is through a literary agent: “Sadly, agents are almost as hard to get to know as publishers. Second, where the entire process of self-publishing can take as little as 30 minutes (but I don’t recommend it), with a traditional publisher you’re looking at 18 months to two years, typically…And third, after that two years, you’re still in the predicament of the self-publisher – you have to market the book…”[63]
In short, it seems that while New Media can facilitate the production of a hard copy of a book by an individual author, selling the book and obtaining remuneration remains a major challenge.
As we shall see, a similar situation seems to prevail in the next industry we shall examine - the American music industry.
III.8. The Music Industry
The American music industry has been trying to cope with the onslaught of digital technology over the past three decades. As musicologists such as Barry Kernfeld have noted, the American music industry has fought every major technological innovation since the beginning of the recording industry in the 1930’s, seeking to criminalize practices which were filling a consumer need that corporate entities had been ignoring. These practices, like bootlegging, Kernfeld terms disobedient practices.[64] Writing in 2011, Kernfeld observed that the music industry invariably would find a way to incorporate some of these practices, and even employ some of those individuals previously labeled as criminals.
He also makes the case that the same pattern applies to the music industry’s current reaction to digital technology. For the music industry, two of the most terrifying aspects of digital audio technology have been that: a) it makes it easy for virtually anyone to make a decent-quality recording using cheap equipment, like a DAT recorder; and b) it makes it possible for anyone to make a “perfect” copy of any recording – a copy that is identical to the original.
Today, just as is the case with Self-Publishing in Print, thanks to digital technology anyone of reasonable intelligence and minimal means can now become both a music producer and distributor. One of the immediate results was the sudden explosion of the hip-hop movement in American urban centers in the 1980’s. Hip hop artists would sample bits of music created by more established artists and transform them into new works. This technique had aesthetic precedents in such techniques as Cubist collages and Bauhaus “cut-ups”, and was tolerated until the late 1980’s, when some of the hip hop artists became big stars and began to make a lot of money.
Then the music industry dropped a legal hammer in the form of a series of copyright prosecutions of hip hop artists, which all ended up as legal victories for the industry.[65]
Today, the American music industry vigorously enforces copyright laws by promoting prosecution of digital artists accused of sampling even a few seconds of a composition, leading to substantial economic penalties. Industry representatives often claim they are protecting their artists’ copyrights, but the truth is they are more often than not just protecting their own financial interests. In the process, some established musicians, such as the legendary George Clinton, have been financially ruined; meanwhile, the development of hip hop music, one of the most intriguing contemporary musical genres, has been crippled.[66]
The music industry, noting that consumers were no longer buying CDs because they were sharing MP3 files with friends, then attempted to intimidate the same consumers by pursuing harsh penalties against individuals who shared music files with their friends for personal use, and not for profit. As a result, some young consumers were punished for indulging in what they thought was a harmless social activity.
Kernfeld sees these measures as desperate attempts to delay the inevitable; eventually the music industry would be forced to accept the realities of digital technology, just as it was once forced to accept the invention of analog tape decks and digital recorders. In short, while the industry may succeed in making examples of a few unfortunate individuals, Kernfeld feels that the practice of file sharing is so prevalent around the world that it cannot be stopped by legal means. Indeed, the governments of many non-Western countries unofficially seem to see nothing wrong with file sharing, since the practice provides cheap entertainment for impoverished masses. And since the internet knows few borders, Kernfeld correctly predicted that the American music industry would be forced to find a compromise that enables the consumer to obtain music by downloading at home, while providing some revenue for the industry. There was no longer any market for CDs, and the music store had become obsolete. Kernfeld’s prediction came true in 2003, when the late Steve Jobs, co-founder of Apple, launched the iTunes Music Store, which allowed consumers to download songs for a small fee – 99 cents per song – which could then be played on Apple’s recently introduced iPod. Somehow Mr. Jobs then managed to convince most of the notoriously suspicious owners of the music companies to support his enterprise.[67]
The success of iTunes has been staggering. According to the on-line publication Lifewire, “Today, the iTunes Store is the largest seller of digital music in the U.S. and has sold over 10 billion songs…”[68]
In Europe, the Swedish company Spotify emerged as a successful alternative to iTunes. Originally created by a group of Swedes in Stockholm in 2008 as a system which would give paying subscribers access to Spotify playlists in Sweden, Spotify soon expanded its services to the United Kingdom in 2009. The terms of service evolved as Spotify ironed out technical glitches and sought new possibilities for expansion.
Then, after years of negotiation, Spotify was able to enter the American market in July 2011, and the company raised over $100 million in funding through the American investment bank Goldman Sachs.[69] The Spotify model differs from iTunes in that it offers unlimited access to Spotify’s music catalogue in exchange for a monthly subscription. The Spotify model has also proved very successful, as this 2018 article in the British tabloid The Mirror indicated: “Spotify is getting ready to go public this week and some have predicted the company could be worth around $25 billion (£18 billion). At last count, the company has 140 million active users, more than 60 million of whom are paid subscribers to the Premium service. They get access to 30 million different songs and a range of specially curated playlists…”[70]
The growth of digital technology has created many challenges to the established music industry; for example, in recent years, a new generation of artists, like Radiohead, have realized they can now produce and distribute music directly to their audience on their own websites for free, bypassing the established music industry altogether. In theory, the artists would then make money either from concerts or from the sales of special products promoted on their websites. This would indeed be a radical transformation of the traditional music industry economic model, since it would eliminate the middlemen altogether. However, artists within the music industry are less optimistic; pop star Liam Gallagher speaks for many when he wonders where the money will come from when fans expect music for free.[71] Indeed, veteran guitarists like Georg Wadenius of Sweden and his American colleague Steve Lukather doubtless speak for many colleagues when they say they do not trust the new system of royalties.[72] The fact that established artists often feel short-changed is hardly a new phenomenon in the American music business.
However, it is interesting to note that veteran artists like Wadenius and Lukather find it increasingly difficult to get their new music on the airwaves, due not only to what they see as the record companies’ lack of commitment to and investment in quality product, but also to the new consumers’ inability to concentrate on and appreciate musical quality.[73]
Of particular relevance to this dissertation is the fact that Mr. Lukather is echoing the concerns of Nicholas Carr regarding the deterioration of consumer attention spans. As we shall see, this lament about the decline in consumer attention spans due to the rapid proliferation of New Media is common among media professionals in virtually all communications media forms.
Lukather is worried not only by dwindling financial returns, but also by the increasingly distracted way in which younger listeners approach their favorite music: “They make ‘McRecords’ for people who don’t even really listen,” he laments. “It’s background music for people to either find a mate or shake their heads while texting or Skyping or doing other things -- environmental noise for the multitasker. Gone are the days of loving, dissecting, discussing the inner workings of ‘AN ALBUM’…sitting in silence while it plays, looking at the liner notes and the few photos, imagining what a magic place it must be to make such music.” Looking at the increasingly crowded marketplace, Lukather sighs, “Too many people can make records. Period,” going on to point out that “it is too easy to play ‘pretend pop star’ now, with all the fakery and Auto-Tune time-correction cut and paste, etc. F---, most young people don’t know how to play a song from top to bottom…”
Let us now look at the impact of New Media on the American television industry.
III.9. The Television Industry
Since visual images require much more bandwidth than print or music, the digital revolution took longer to affect the motion picture and television industries. However, once the changes began, they moved with astonishing speed, and high-definition digital technology is now the standard for film and television production around the world. For most countries, the transition has been relatively smooth; once the initial speed bump of the cost of upgrading infrastructure from analog to digital was passed, business was initially able to go on as usual, albeit with improved audio and picture quality.
However, as of 2019, internet television streaming services like Netflix have begun to threaten the traditional business model for commercial television. Sponsors for commercial television are already dwindling, and, as they disappear, so will commercial television as we know it. When Tim Cook, CEO of Apple, Inc., began making hints about the creation of some form of i-TV in 2012, it seemed safe to assume that, in the near future, most consumers would soon be getting their television programs streamed over the internet.[74]
Today, this is indeed the case – though not through i-TV, but through streaming services like Netflix, which has no commercials. As consumers become accustomed to watching their favorite programs without commercial breaks, the commercial networks are desperately trying to find a solution which will keep both viewers and advertisers happy. Ed Davis, an executive with Fox Networks Group, says failing to address the issue of advertising in the face of digital competition risks having some viewers fall out of the habit of watching television completely: “If we don’t create a sustainable model for quality storytelling there are portions of the audience that become unavailable for marketing…”[75]
According to media journalist Andy Meek, it may already be too late. Meek claims that Netflix has already surpassed commercial and cable television in popularity with subscribers: “The streaming video landscape is definitely going to look more interesting — and fragmented — than ever as 2019 continues to unfold…the number of consumers subscribed to pay TV dropped from 73% to 67% last year — no surprise there — while Netflix usage (76%) surpassed cable and satellite for the first time…”[76]
With the writing already on the wall, media consumers around the world are flocking to Netflix, which is available for a monthly fee lower than that of cable TV providers and has no commercials. Viewership numbers offer a clear indication of the surge in Netflix’s popularity, as does the new phenomenon of binge-watching – watching a number of episodes of a series in a single sitting.[77] A recent study reveals that “among viewers ages 18-29 and 30-44, those numbers grow, with 73 percent of TV watchers ages 18-29 and 69 percent of those ages 30-44 binge-watching television at least once a week…”
The fact that 73% of 18-29 year old American television viewers say they practice binge-watching at least once a week is of particular relevance for anyone trying to determine the future of the media industries; one of the biggest challenges for the American entertainment industry has been how to reach the elusive younger demographic of Millennials who have grown up with New Media.[78]
As shall be seen, the television industry is not the only American media industry to be affected by this sudden demographic fluctuation in media consumption habits.
III.10. The Motion Picture Industry
For the American motion picture industry, digital technology has been a mixed blessing. While production techniques have been streamlined and made more efficient, production costs for commercial features have actually increased as producers aim for bigger blockbusters, hoping to cash in on ancillary markets and spin-offs.
This strategy necessitates determining the lowest commercial common denominator to reach the mass market, and forgetting about creativity. In his essay Conglomerate Aesthetics: Notes on the Disintegration of Film Language, American film critic David Denby describes the cinematic results: “Constant and incoherent movement: rushed editing strategies; feeble characterization; pastiche and hapless collage – these are the elements of conglomerate aesthetics and there’s something more going on here than bad filmmaking in such a collection of attention-getting swindles…What we have now is not just a raft of routine bad pictures but the first massively successful nihilistic cinema.”[79]
In 2012, the clock officially ran out on the analog motion picture industry. As Nick James wrote in the British cinema periodical Sight and Sound: “January 2012 will apparently mark the point at which there will be more digital screens in the world industry than analog, and by the end of 2012 it is estimated that 35mm production’s share of the global market will decline to 37 per cent. What’s more, mainstream usage of 35mm will have vanished from the USA by the end of 2013, with Western Europe set to be all digital in the mainstream one year later.”[80]
As was the case with the American music industry, the American motion picture industry made the war on file sharing a top priority, and has embarked upon international crusades to shut down file sharing sites such as Limewire, Megaupload, Demonoid and Pirate Bay. Owners of these websites have been tracked down and arrested under international warrants in countries like New Zealand and Cambodia. Meanwhile, ISP providers in Europe have begun to police the downloading habits of their customers, punishing those who download from file sharing sites with fines and removal of access to internet.
Meanwhile, sales of DVDs are still falling every year, and soon the video store will be as obsolete as the music store and the bookstore.
Likewise, cinema attendance figures continue to decline around the world, as the growing availability of inexpensive widescreen HD digital televisions makes staying at home to watch movies with family and friends a more convenient and, in the long run, more economical option than going to the movie theatre.
According to the late American intellectual and film critic Susan Sontag, the decline of the international motion picture industry began over two decades ago:
“Cinema’s 100 years seem to have the shape of a life cycle: an inevitable birth, the steady accumulation of glories and the onset in the last decade of an ignominious, irreversible decline…Cinema, once heralded as the art of the 20th century, seems now, as the century closes numerically, to be a decadent art. Perhaps it is not cinema that has ended but only cinephilia – the name of the very specific kind of love that cinema inspired…”[81]
Like Denby, Sontag attributed the decline of the medium primarily to the astronomical rise in production costs of Hollywood productions and the concurrent reliance on the huge blockbuster loaded with special effects and stars. She concludes her essay on a pessimistic note: “…if cinephilia is dead, then movies are dead too...”[82]
Nonetheless, today, a bit over a decade and a half later, cinema is still very much with us, albeit permeated by a digital technology which is changing the medium profoundly and dramatically. Among other things, digital effects have created the potential for entirely new dimensions of artifice of a kind the early French cinema pioneer Georges Méliès could have only dreamt of. Many of the creative implications of digital technology are still unknown. For example, the legendary French cineaste Jean-Luc Godard sees a profound difference between the rhythmic flicker of analog film at 24 frames per second and the unbroken stream of light when moving digital images are projected on a screen. Philosophical issues aside, today there is little doubt that the motion picture viewing experience is being radically transformed from a group endeavor in a movie theatre to a very private one – on a cellphone or a laptop. Streaming services like Netflix have now become producers showing first-run features, which means the end of commercial motion pictures is inevitable.[83]
Indeed, there appears to be a growing consensus that the Twentieth Century art form known as cinema no longer exists.[84] In the words of James Monaco: “After ninety years of dominating the way we view our world – a long, tempestuous and rewarding life – cinema has quietly passed on...”[85]
III.11. Documentary in the Age of New Media
Ever since the conflict between Dziga Vertov and Sergei Eisenstein during the Russian Revolution almost a century ago, there has always been a certain tension between documentary and traditional motion picture entertainment. As documentary historian Erik Barnouw put it: “A politician who lives by mythologies may well look on the documentarist’s work as subversive. And indeed it is a kind of subversion – an essential one. And a difficult one…”[86]
In the American motion picture industry, the genre of documentary has always had a more practical problem; documentary was viewed as a bad economic proposition by mainstream Hollywood. The conventional wisdom was expressed by the iconic Hollywood mogul Sam Goldwyn when he said: “If you want to send a message, try Western Union!”[87]
Prior to the advent of digital technology, this conventional wisdom made perfect sense. Thanks to their high shooting ratio, documentaries were always costly; film stock itself was expensive, and the costs of film processing and prints were unavoidable. In short, documentaries, even without the cost of stars, were risky business, and could only be made with institutional support or the patronage of wealthy individuals. Documentary icons from Robert Flaherty to Dziga Vertov to John Grierson and Leni Riefenstahl were all only able to produce their films thanks to substantial institutional or corporate patronage, and documentary films were more often than not institutional or corporate prestige pieces. As a result, documentarians either had to compromise or shut down production altogether.
As documentary historian Brian Winston has pointed out, even supposedly socially progressive documentarians such as Grierson actually had much more in common with propagandists such as Dziga Vertov and Germany’s Leni Riefenstahl than is generally recognized. Indeed, one might say that, rather than being subversive, Grierson’s productions were propaganda for the British Empire and the preservation of the British status quo.[88]
Likewise, the box office was always a problem. Even during the height of the cinema verite boom in America in the 1960’s, few documentaries turned a profit; the only exceptions were music-based epics like Mike Wadleigh’s Woodstock (1970) and Leacock/Pennebaker’s Don’t Look Back (1967), chronicling a historic tour of England by Bob Dylan. Meanwhile, an otherwise excellent Academy Award-winning documentary about the Vietnam War like Peter Davis’ Hearts and Minds enjoyed very limited distribution and was never widely seen. As documentary historian Jane M. Gaines notes, “Few of the classic documentaries have ever had mass audiences.”[89]
At that time, the average budget for a cinema verite documentary feature was about $300,000 – or about $3,000,000 today. In the 1960’s, this was a lot of money, especially since it only covered production and post-production. It did not include money for prints and advertising, which traditionally doubled the production costs of any film with a chance of commercial success. Finding a commercial producer or distributor willing to make that kind of investment for an uncertain return was difficult. While the Hollywood formula for box office success has evolved over the years, some ingredients have generally been considered essential for success. In no particular order, they are: 1) well-crafted screenplays; 2) good production value; 3) stars; 4) advertising and promotion.
Since documentaries lack most of these ingredients, the conventional Hollywood wisdom has always been that documentaries cannot make money. Self-financed documentaries have always been an option, of course – but only for those few fortunate individuals with unlimited access to discretionary income, as well as with financially self-destructive inclinations. In short, the bottom line has remained the bottom line, and the Goldwyn conventional wisdom has remained supreme. Elsewhere in the world, few developing countries possessed the resources to afford the luxury of documentary production; they had other, more pressing priorities, such as feeding their populations and developing their economies.
In the Western world, the only hope for documentary filmmakers has been either being awarded a grant or finding financing from publicly funded television networks like the BBC in England, Antenne 2 in France and PBS and its affiliates in the US.[90] If filmmakers were well connected and persistent enough, they might get some funding – but never as a business proposition. As a result, documentarians interested in promoting social change with their work often found themselves in the awkward position of seeking financial support from the very institutions they wished to change.
This equation began to change with the emergence of high-quality but low-cost digital cameras and tape in the late 20th century. The cost of film and film processing was suddenly no longer a factor; one could purchase one hour of mini-DV tape for a one-time cost of less than $10 virtually anywhere in the world, and a documentarian could literally carry hundreds of hours of tape in a carry-on bag. Meanwhile, thanks to platforms like Apple’s Final Cut Pro, a documentarian could set up a professional editing suite in his or her home for as little as $20,000 and still produce a final product of professional quality.
Suddenly digital documentaries of all kinds began to proliferate, and, as Michael Moore’s box office hit Fahrenheit 9/11 (2004) demonstrated, not only was there a potentially lucrative commercial American market for documentaries, but there was even a substantial commercial market for highly politicized documentaries with controversial content. Patricia Aufderheide explains: “Fahrenheit 9/11 (2004), which had taken in $222 million worldwide by 2009, plays a unique role in documentary history. The film broke through what had been, in the U.S., a suffocating barrier of silence about the growing public sentiment against the war in Iraq, and, by gaining the top award at the Cannes Film Festival, registered international anti-war protest…”[91]
Even with the cost of production and post-production lowered by digital technology, an additional obstacle to the production of socially critical documentaries has been the issue of copyright. The cost of stock footage had become so exorbitant that making a historical documentary using archival footage was becoming financially impossible. As a result, documentarians in the United States wanting to make historical compilations organized and confronted the copyright issue head on. Unlike their colleagues in the music industry, they managed to create a Fair Use protocol establishing guidelines by which they could use copyrighted material without charge – a major victory.[92]
On January 19, 2012, the following item appeared in The New York Times: “Eastman Kodak, the 131-year-old film pioneer that has been struggling for years to adapt to an increasingly digital world, filed for bankruptcy protection early on Thursday.”[93]
Whether or not one agreed with Susan Sontag’s 1995 assertion that movies were dead, the bankruptcy of Eastman Kodak in January 2012 was the proverbial nail in the coffin. Film, as previously defined, was literally dead.
For most documentarians, as well as anyone else interested in making low-budget productions, the transition from analog film to digital cinema has been a liberation. In the last decades of the 20th century, many documentarians had hoped that analog video would provide such a liberation, but they soon grew disillusioned. The analog cameras and editing equipment required to create images of broadcast quality were prohibitively expensive, costing much more than the corresponding film cameras and editing tables.
There was also the issue of filmic image quality. Even the best analog video had a flat, two-dimensional look that was anathema to cineastes, and there was also a significant generational quality loss whenever the material was duplicated.
As a result, some documentarians continued to shoot on film until the end of the millennium. In the early 21st century, digital technology improved in leaps and bounds; the remarkable ability of the digital image to simulate the film image, scratches and all, all but ended the aesthetic debate on image quality. The fact that high-quality digital production equipment is significantly cheaper than either film or analog video equipment has been equally important. For as little as $20,000 in equipment, a documentarian can now shoot and edit work of high visual and audio quality. The implications for both the documentarian and society at large are significant. In a traditional Marxist sense, thanks to digital technology, the documentarian now has the means of production at his or her disposal. However, as is the case with print, music and television, distribution remains more complicated.
In the words of James Monaco: “Today anyone can produce a book, film, record, magazine, newspaper…But can these newly empowered producers of media get their work read, seen or heard by large numbers of people?”[94]
One answer available to amateurs is to distribute informally, either on DVD or via internet platforms like www.YouTube.com. If a film is politically controversial and government authorities decide to block a given website, there is also the final option of direct projection to intended audiences – a technique in the tradition of the Dziga Vertov-inspired Soviet Agit-Prop trains of the Russian Revolution, now known as narrowcasting. With a laptop, an LCD projector costing less than $2000, a sound system and a portable generator, digital cinema can be projected to audiences lacking both internet and electricity virtually anywhere in the world.[95]
A historical side note: there is an intriguing Marxist perspective on the issue of digital duplication. Writing in 1935-36, Walter Benjamin, the noted German art critic, distinguished between an original art work and a technologically reproduced work as follows: “The technological reproducibility of the art work changes the relation of the masses to art. The extremely backward attitude towards a Picasso painting changes into a highly progressive reaction to a Chaplin film.”[96]
In Benjamin’s eyes, the traditional bourgeois art world, with its premium on authenticity, was a ritualistic and very exclusive endeavor doomed to irrelevance by the technological reproduction of art works: “Technological reproducibility emancipates the work of art from its parasitic subservience to ritual… As soon as the criterion of authenticity ceases to be applied to artistic production, the whole social function of art is revolutionized. Instead of being founded on ritual, it is based on a different practice: politics.”[97]
Given Benjamin’s views in this now famous essay, which was published after his death during World War II, it seems reasonable to conclude that, had he lived to see it, he would have considered digital technology revolutionary indeed.
We are now in the midst of a giant international legal war being fought between the traditional commercial entertainment industries and some governments, on the one hand, and the new digital information industry, on the other – a conflict popularly known as Hollywood vs. Silicon Valley. As Monaco comments: “As the Information Age became a reality and knowledge joined labor and capital in the social equation, ideology couldn’t keep up. It is more than coincidental that the rise of the microchip accompanied the end of the Cold War, a conjunction that Mikhail Gorbachev himself once pointed out.”[98]
Documentary in the Age of New Media cuts through ideology and conventional wisdom. For example, one of the biggest international hits for Netflix to date has been a five-part documentary series by American director Ava DuVernay about a notorious case of racial injustice in New York, When They See Us, which has already been seen by 23 million Netflix viewers around the world.[99]
On the other end of the socio-economic spectrum, young people around the world have been embracing New Media for their own self-expression. In the United States over the past decade, for example, fashion bloggers have grown from amateur novelties into major fashion industry players. Teenagers started out making home videos modeling their favorite fashions and sharing them with friends on platforms like YouTube and Instagram; when their programs acquired large followings with many likes, the young entrepreneurs showed them to fashion houses and major retailers, who, in turn, saw a way to find out what people were wearing on the street and to connect with the elusive Millennial market.
They began to buy advertising on these new programs, and, in a few short years, successful fashion blogs like www.ManRepeller.com[100] were being given the most prestigious seats at the major fashion shows in New York, Paris and Milan. In short, the fashion bloggers have turned the notoriously hierarchical and conservative fashion industry upside down and become arbiters of public taste on a par with well-established fashion publications like Vogue.[101]
Elsewhere in the world, attempts at using New Media for artistic and political self-expression have met with resistance from authorities. A 2013 article in the International Herald Tribune described how the new digital technology has created new opportunities for artists in Cuban cinema: “The global boom in digital filmmaking has rippled across Cuba over the past decade, letting filmmakers create their own work beyond the oversight of state-financed institutions. Independent movies have become a new means of expression in a country where, despite freedoms and economic reforms introduced by President Raul Castro since 2006, the state still carefully controls national press, television and radio, and access to the internet is very limited.”[102]
It appeared that, in spite of official government disapproval, what used to be called underground cinema in the United States half a century ago was now alive and well in Cuba. As the article noted, this boom in Cuban digital cinema was symptomatic of a general international phenomenon. Unfortunately, eight months later, in October 2013, the Cuban government announced that it was shutting down all of these independent cinemas, since they had not been licensed and were therefore illegal.
At the beginning of the Millennium, there were examples from parts of the world where people, confronted by oppressive regimes, attempted to create their own parallel, independent news networks, free from government censorship and control. These programs might be called Digital Newsreels.[103]
A visual record that contradicts an official version of an event can have a devastating effect on the credibility of the authorities, and the phenomenon of citizen journalists attempting to provide such visual records became a reality, thanks to New Media distribution platforms such as YouTube and Facebook, just as some Cyber Utopians had predicted. Unfortunately, just as Cyber Agnostics like Yevgeny Morozov had also predicted, when the authorities gained the technical capacity to respond, they cracked down with a vengeance on the citizen journalists and their fellow protesters.
Here are a few examples:
The Saffron Revolt in Burma in 2007: Individual Burmese, often at great risk to life and limb, recorded demonstrations by Burmese monks and others against the Burmese dictatorship, frequently using only cellphone cameras. The material was then uploaded to a website based in Norway, where it was edited and redistributed online via The Democratic Voice of Burma website. Since The Democratic Voice of Burma frequently contradicted the official government version of events with visual evidence, the military government grew increasingly frustrated with this circumvention of its authority. Finally, in September 2007, the Burmese junta took the drastic action of completely shutting down the internet in Burma. Since a complete shutdown of the internet would have serious repercussions for any country, many anti-government protesters saw this as a classic case of cutting off one’s nose to spite one’s face, and therefore a victory of sorts.[104]
The 2010 Red Shirt protests in Thailand: After the Thai Army overthrew the democratically elected Prime Minister Thaksin Shinawatra and installed a government of its own choosing, there were periodic demonstrations by Thaksin’s supporters, known as Red Shirts. These demonstrations came to a head in April 2010 with the occupation of a business district in downtown Bangkok. The army sent in armored vehicles but failed to disperse the demonstrators after a pitched battle. Instead, the soldiers fled, and the demonstrators managed to take over several armored vehicles. Red Shirt sympathizers taped the action and produced DVDs with their version of the event, which were then distributed and shown around the countryside with LCD projectors during the Thai New Year celebrations on April 13, using the technique of narrowcasting.
The defeat of the mighty army was a major propaganda victory for the Red Shirts; in 2011, after five years of military rule, democratic elections were finally held, and the Red Shirt candidate, Yingluck Shinawatra, the sister of Thaksin, won in a landslide. However, in 2014, Yingluck Shinawatra was overthrown in a military coup led by army General Prayuth, who has ruled Thailand with an iron hand ever since; among other things, he has imported Chinese technology to maintain surveillance of the internet.[105]
The Arab Spring: One of the more intriguing aspects of the political phenomenon popularly known as The Arab Spring was the role played by New Media. While the relative importance of this role has been the subject of debate, there is a general consensus that so-called citizen journalists were a factor, providing information to the public outside of official government channels through individual written and visual records of events on websites, blogs and other media forms. The regime of Egyptian dictator Hosni Mubarak reportedly found this phenomenon to be such a serious problem that it considered emulating the example of its Burmese colleagues by shutting down the internet in Egypt altogether, but relented for economic reasons.[106] Mubarak was eventually forced to resign, and, in the first democratic elections in Egyptian history, Mohammed Morsi was elected president in 2012. However, one year later, Egyptian Army Chief Abdel Fattah el-Sisi assumed power in a bloody military coup, and former President Morsi died under mysterious circumstances in June of 2019. As previously noted, Cyber Agnostic Yevgeny Morozov has issued stark warnings about the dangers of relying on the internet as a tool against political oppression: “The idea that the internet favors the oppressed rather than the oppressor is marred by what I call cyber-utopianism; a naïve belief in the emancipatory nature of online communication that stubbornly refuses to acknowledge its downside. It stems from the starry-eyed fervor of the 1990’s, when former hippies, by this time ensconced in some of the most prestigious universities in the world, went on an argumentative spree to prove that the internet could deliver what the 1960’s couldn’t… Cyber-utopians ambitiously set out to build a new and improved United Nations, only to end up with a digital Cirque du Soleil…”[107]
III.12. Conclusion
Thanks to the growth of New Media, The New World Information Order is becoming a reality, though in a far more anarchic form than the government representatives at the UNESCO conference in 1980 had envisioned. The struggle between those authorities who seek to impose government control and those who envision an unfettered digital media sphere remains far from resolved. Quite the contrary: as the dispute between the United States and the Chinese company Huawei, the world’s largest supplier of telecommunications equipment, over 5G technology has shown, we can expect an escalation of this conflict in the future.[108]
Unfortunately, it has also become clear that the large American corporate entities of New Media, like Google, Amazon and Facebook, have agendas that are far from transparent. For example, how exactly is one to interpret Facebook’s recently announced plans to launch a cryptocurrency called Libra?[109]
However, since such questions are beyond the scope of this dissertation, the two following chapters will present six case studies of Documentary in the Age of New Media. The major criterion for selection of these case studies is that they must be works completed after the year 2000.
These case studies will be divided into two groups. The first, Institutional Documentary, will examine the work of two entities producing documentaries for institutional employers or clients – in this case, the United Nations. While institutions and corporations may vary somewhat, the author believes that the essential premises of Institutional and Corporate Documentary are the same: the client gives the documentarian an assignment to communicate a specific message to a specific audience; the documentarian’s task is to figure out how to do it most effectively.
The second group, Independent Documentary, will examine two long-form documentaries produced by the documentarians themselves.
The Case Studies:
Part I: Institutional Documentary
Case Study #1. United Nations Television[110]
Case Study #2. The MONUSCO Video Unit[111]
Part II: Independent Documentary
Case Study#3. East Timor: Betrayal and Resurrection (2004)
Ted Folke, Director[112]
Case Study #4. Citizen Boilesen (2009)[113]
Chaim Litewski, Director
III.13. Appendix A. Notes
[1] Manovich, Lev (The Language of New Media) Massachusetts Institute of Technology Press, Cambridge, Massachusetts, and London, England.2001. P.19.Print.
2Manovich, ibid. p. 19
3 Ibid. p. 48
4 Ruby, Jay (The Ethics of Image Making) in New Challenges in Documentary, Alan Rosenthal, Editor; Manchester University Press, Manchester,2005, p.219.Print.
5 Jenkins, Henry (Convergence Culture – Where Old and New Media Collide, New York and London, New York University Press,2006, Print.
6 “... Upon the proposal of CAME, a United Nations Conference for the establishment of an educational and cultural organization (ECO/CONF) was convened in London from 1 to 16 November 1945. Scarcely had the war ended when the conference opened. It gathered together the representatives of forty-four countries who decided to create an organization that would embody a genuine culture of peace. In their eyes, the new organization must establish the “intellectual and moral solidarity of mankind” and, in so doing, prevent the outbreak of another world war…”from UNESCO’S History, https://en.unesco.org/about-us/introducing-unesco
7 Druick, Zoe ( UNESCO, Film and Education: Mediating Postwar Paradigms of Communication, in Useful Cinema, Acland, Charles R. and Haidee Watson, Eds.) Duke University Press, Durham and London, 2011. pp. 81-99. Print.
8 Link to the UN 30th Anniversary film, TO BE 30 (1976) written and directed by the author: https://vimeo.com/69173621. Video.
9 https://medium.com/nonviolenceny/developing-a-culture-of-peace-a-history-of-the-non-aligned-movement-8ecc35d0955d
10 Gilman, Nils (The New International Economic Order: A Reintroduction) in The Humanity Journal, www.humanityjournal.com, March 19, 2015. Website.
11 Gilman, ibid.
12 ibid.
13 MacBride, S. et al(Many Voices, One World. Communications and Society Today and Tomorrow. Towards a new, more just , and more efficient world information
and communication order. Report by the International Commission for the Study of Communication Problems. London. Kogan Page; New York, Unipub; Paris, UNESCO, 1980. Print.
14 Mansell and Nordenstreng, http://www.itu.inst/wsis.index.html
15Nordenstreng, ibid. p.3
16 ibid. p.4)
17MacBride, S. et al, ibid.
18 Gilman, ibid.
19For example, when the author visited UNESCO Headquarters in Paris in 1978, he was startled to see that Mr. Mbow had transformed the top floor of the UNESCO building into his own personal penthouse, a flagrant breach of protocol.
20www.answers.com/topic/amadou-mahtar-m-bow
21 Opinions differ as to exactly why the US left UNESCO in 1984. Nordenstreng, for example, feels that it was a warning by the Americans to the international community to cease trying to use the UN for multilateral solutions. Others feel that the US left in protest against the recognition of Soviet historical sites. The United States rejoined UNESCO in 2002 under President George W. Bush following the attacks of 9/11, amid a push by the U.S. to boost international solidarity. In 2019, the US withdrew from UNESCO a second time – this time along with Israel, to protest UNESCO’s designation of the Palestinian city of Hebron as a World Heritage site. https://archpaper.com/2019/01/united-states-withdraws-from-unesco-again/ Website.
22“As the name suggests, mainstream media is everywhere, and encompasses television, print, radio and certainly the internet, in the form of online publications. For the most part, in the U.S., mainstream media can be traced to a few conglomerates that own a majority of television networks, newspapers, magazines and even major movie houses…”
https://smallbusiness.chron.com/mainstream-vs-alternative-media-21113.html
23 https://archives.cjr.org/reports/hiding_the_real_africa.php. Website.
24 As McLuhan wrote in 1968,”In the name of progress, our official culture is striving to force the new media to do the work of the old.” McLuhan, Marshall and Quentin Fiore, (The Medium is the Massage ) Bantam Press, 1968. p.81.Print.
25 https://www.itu.int/net4/wsis/forum/2019
26 2019 internet penetration per continent varies from 89.4% in North America to 37.3% in Africa, with Europe at 86.8% and Asia at 51.8%. Source: https://www.internetworldstats.com/stats.htm
27 Wilber, Ken (Trump and a Post-Truth World) Shambala Publications, Boulder, Colorado, 2017. P. 25. Print.
28 https://www.theguardian.com/us-news/2018/feb/16/robert-mueller-russians-charged-election
29 Postman, Ibid, p. 11
30 Followers of the Swedish media debate may remember a similar breakdown of attitudes towards technology in the late Lasse Svanberg’s excellent book (Stålsparven – Om 90-talets medier och om Informationssamhället) Prisma, 1991, p.4. Print.
31https://www.oxfordreference.com/view/10.1093/acref/9780191803093.001.0001/acref-9780191803093-e-338
32Brand, Stewart (The Media Lab – Inventing the Future at M.I.T.) Penguin, New York, 1998. Print.
33 Moss, Frank (The Sorcerers and Their Apprentices – How the Digital Magicians at the MIT Media Lab Are Creating the Innovative Technologies That Will Transform Our Lives) Crown Business, New York, 2011. p. xi. Print.
34 Moss, ibid.
35 Palfrey, John and Urs Gasser (Born Digital- Understanding the First Generation of Digital Natives) Basic Books, New York, 2011, p.7. Print.
36 Morozov, Evgeny (The Net Delusion) Public Affairs, New York, 2011, p. xii. Print.
37 Morozov, ibid. p.337
38 McLuhan and Fiore, ibid. p.8
39 https://www.merriam-webster.com/dictionary/Luddite
40 Postman, Neil, (The Surrender of Culture to Technopoly) Lecture at the University of Chicago, 1997, Video.https://youtu.be/hlrv7DIHllE
41 https://www.merriam-webster.com/dictionary/information%20superhighway
42 https://www.rxlist.com/thalomid-drug.htm
43According to the International Telecommunications Union, an estimated 86.7% of the world’s population had cellphone access in 2012. Mobithinking.com/mobile-marketing-tools/latest-mobile-stats/a#subscribers
44 Link to MONUSCO Video’s Driving While Distracted: https://vimeo.com/96985855
45www.ehow.com/list_6088733_cell-phone-side-effects-html/
46 https://www.whoishostingthis.com/blog/2016/11/09/north-korea-internet/)
47 Youngblood, Gene (Expanded Cinema) E.P. Dutton, New York. 1970. P.63.Print.
48 Assange, Julian (The Banality of ‘don’t do evil’) International Herald Tribune, June 3, 2013.Print.
49 Postman, ibid.
50 WCIC
52 ibid.
53Carr, Nicholas (The Shallows- What the Internet is Doing to Our Brains) W.W. Norton and Company, New York, 2011.Print.
55 https://www.huffpost.com/entry/the-global-explosion-of-a_n_6186776
57 https://www.nytimes.com/2018/09/20/world/europe/france-smartphones-schools.html?smid=nytcore-ios- share
58 https://digiday.com/media/french-publishers-joining-forces-take-google-facebook/
59 Papathanassopoulos , Stylianos (Deregulation) National and Kapodistrian University of Athens, Greece, 2019
60https://www.bloomberg.com/graphics/2019-amazon-reach-across-markets/
61While analyzing the desirability of this concentration of economic power lies outside the scope of this dissertation, the author feels it is worth noting that similar concentrations of economic power are rapidly taking over most forms of New Media in America.
62 https://www.theverge.com/2014/5/23/5744804/authors-getting-screwed-in-face-off-between-amazon-hachette
63https://www.forbes.com/sites/nickmorgan/2016/05/05/which-is-better-self-publishing-or-traditional-publishing/#5369b5b511d9
64 Ibid.
65Kernfeld, Barry ( Pop Song Piracy – Disobedient Music Distribution Since 1929) University of Chicago Press, 2011.p 6.Print.
66 McLeod, Kembrew and Peter DiCola (Creative License – The Law and Culture of Digital Sampling) Duke University Press, 2011.Print.
67 McLeod and DiCola ( ibid)p.10
68 Isacsson, Walter(Steve Jobs) Simon and Schuster, New York, 2011. Print
69 https://www.lifewire.com/itunes-store-history-2438593
70 https://www.mirror.co.uk/tech/history-spotify-how-swedish-streaming-12291542
71Ibid.
72 http://adage.com/article/media/watch-liam-gallagher-explain-the-decline-of-the-music-industry-and-rock-stars-36-seconds/310551
73 https://ultimateclassicrock.com/steve-lukather-blasts-streaming-royalties/
74 In Lukather’s words: "With all the fakery and auto tune-time correction -cut and paste etc… fuck most young people don’t know how to play a song from top to bottom in a studio in tune and in time and with feeling! I am in the studios all the time and hear the stories from the producers and engineers and yet NO ONE cares that ’so and so’ who sold a shit load of records (how much IS that these days?) can’t sing or play. They make ‘McRecords’ for people who don’t even really listen. It’s background music for people to either find a mate or shake their heads while texting or skyping or doing other things. Environmental noise for the multi-tasker. Gone are the days of loving, dissecting, discussing the inner workings of ’AN ALBUM”... sitting in silence while it plays.. looking at the liner notes and the few photo’s IN the studio … imagining what a magic place it music be to make such music... Gone. You need a fucking jeweler’s eye to read the credits IF one even cares. Most don’t. So if you keep blaming the ‘old antiquated artists’ who are the only REAL ones left… who MAY make a great record once in awhile but may be overlooked cause the media chooses to care more about who is super gluing meat to their bodies and other ridiculous HYPE and bullshit to get attention rather than LISTENING hard to the music being made we might be in a different place."
75http://www.idownloadblog.com/2012/06/01/munster-itv-in-the-works/
76https://www.latimes.com/business/hollywood/la-fi-ct-commercials-clutter-20180327-story.html
77https://bgr.com/2019/01/31/netflix-vs-cable-2018-statistics/
78https://techjury.net/stats-about/netflix/
79 https://morningconsult.com/2018/11/06/most-young-adults-have-an-appetite-for-binge-watching-shows/
80Denby, David (Do the Movies Have a Future?) Simon and Schuster, 2012, p.32.Print.
81James, Nick( Editorial in Sight and Sound, January, 2012) as quoted in David Thompson’s THE BIG SCREEN, Farrar, Straus and Giroux, 2012. P. 509. Print.
82Sontag, Susan (Frankfurter Rundschau, 1995) southerncrossreview.org/43/sontag-cinema.htm
83 Susan Sontag ,ibid.
84 https://www.theatlantic.com/business/archive/2016/06/hollywood-has-a-huge-millennial-problem/486209/
85 Even in 1987, the great Swedish director Ingmar Bergman was quoted by Lasse Svanberg as saying that he did not think the film medium would survive. Svanberg, ibid.p.74
86Monaco ibid.p. 421
87 Barnouw, ibid.pp.345-346.
88 The origins of this legendary Goldwynism are murky, like many Goldwynisms.
89 Renov, Michael ( The Subject of Documentary) University of Minnesota Press, 2004.p.135.Print.
90 Gaines , Jane M.( Collecting Visible Evidence) University of Minnesota Press,1999, p. 85.Print.
91 In Sweden, the basic options available to documentarians have been selling project to the commissioning editors of the various branches of Sveriges Television or getting a grant from Svenska Filminstitutet.
92 Aufderheide, Patricia (Mainstream Documentary Film since 1999) in Blackwell’s History of American Film (Cynthia Lucia, ed.), New York: Blackwell’s, 2012. Print.
93 Aufderheide, Patricia and Peter Jaszi (Reclaiming Fair Use – How to Put the Balance Back in Copyright) University of Chicago Press, Chicago, 2011. Print.
94 The New York Times, January 19, 2012.
95Monaco (ibid) p.479
96https://www.merriam-webster.com/words-at-play/broadcasting-and-narrowcasting
97Benjamin, Walter, (The Work of Art in the Age of Technological Reproduction and other Writings on Media) Belknap, Harvard University Press, 2008, p36. Print
98Benjamin, ibid. p.25
99 Monaco, ibid. p. 585
100 https://www.thejakartapost.com/life/2019/06/27/over-23-million-netflix-accounts-worldwide-tune-in-to-when-they-see-us.html
101 https://www.manrepeller.com
102https://www.huffpost.com/entry/how-to-start-a-fashion-blog-get-paid-for-blogging_b_578b7b4fe4b0b107a2414012
103 Burnett, Victoria (Cuban Filmmakers Start Rolling with Technology) International Herald Tribune, January 9, 2013, pp. 10-11. Print.
104 https://www.britannica.com/topic/newsreel
106 The June, 2013 protests in Taksim Square offered a good example of Digital Newsreels: http://www.youtube.com/watch?v=VQ1UKAyVqZI
107 https://www.nytimes.com/2019/06/17/world/middleeast/mohamed-morsi-dead.html
108 Morozov, ibid,p. xiii
109https://www.theguardian.com/technology/2019/may/21/there-will-be-conflict-huawei-founder-says-us-underestimates-companys-strength
110https://www.theguardian.com/commentisfree/2019/jun/23/libra-digital-currency-wont-set-us-free-it-will-further-enslave-us-to-facebook
111Link to sample: https://vimeo.com/148530413
112Link to sample: https://vimeo.com/197121708
113Link to sample: https://vimeo.com/69272262
114Link to sample: https://youtu.be/3bNvrCmeyec
115 Link to trailer: https://youtu.be/4WfWODjDYAk
116 Link to trailer: https://youtu.be/0NvrxUucNfg