

Asociación Técnica de Diarios Latinoamericanos


Weekly Bulletin, April 22, 2018

Combating Fake News Is a Fight of Action and Rapid Reaction

Recent shifts in the media ecosystem raise new concerns about the vulnerability of democratic societies to fake news and the public’s limited ability to contain it. Fake news as a form of misinformation benefits from the fast pace that information travels in today’s media ecosystem, in particular across social media platforms. An abundance of information sources online leads individuals to rely heavily on heuristics and social cues in order to determine the credibility of information and to shape their beliefs, which are in turn extremely difficult to correct or change. The relatively small, but constantly changing, number of sources that produce misinformation on social media offers both a challenge for real-time detection algorithms and a promise for more targeted socio-technical interventions.

There are some possible pathways for reducing fake news, including:
(1) offering feedback to users that particular news may be fake (which seems to depress overall sharing from those individuals); (2) providing ideologically compatible sources that confirm that particular news is fake; (3) detecting information that is being promoted by bots and “cyborg” accounts and tuning algorithms to not respond to those manipulations; and (4) because a few sources may be the origin of most fake news, identifying those sources and reducing promotion (by the platforms) of information from those sources.
As a research community, we identified three courses of action that can be taken in the immediate future: involving more conservatives in the discussion of misinformation in politics, collaborating more closely with journalists in order to make the truth “louder,” and developing multidisciplinary community-wide shared resources for conducting academic research on the presence and dissemination of misinformation on social media platforms.

Moving forward, we must expand the study of social and cognitive interventions that minimize the effects of misinformation on individuals and communities, as well as of how socio-technical systems such as Google, YouTube, Facebook, and Twitter currently facilitate the spread of misinformation and what internal policies might reduce those effects. More broadly, we must investigate what the necessary ingredients are for information systems that encourage a culture of truth.

This report is organized as follows. Section 1 describes the state of misinformation in the current media ecosystem. Section 2 reviews research about the psychology of fake news and its spread in social systems as covered during the conference. Section 3 synthesizes the responses and discussions held during the conference into three courses of action that the academic community could take in the immediate future. Last, Section 4 describes areas of research that will improve our ability to tackle misinformation in the future. The conference schedule appears in an appendix.

The spread of false information became a topic of wide public concern during the 2016 U.S. election season. Propaganda, misinformation, and disinformation have been used throughout history to influence public opinion.[1]  Consider a Harper’s magazine piece in 1925 (titled “Fake news and the public”) decrying the rise of fake news:

Once the news faker obtains access to the press wires all the honest editors alive will not be able to repair the mischief he can do. An editor receiving a news item over the wire has no opportunity to test its authenticity as he would in the case of a local report. The offices of the members of The Associated Press in this country are connected with one another, and its centers of news gathering and distribution by a system of telegraph wires that in a single circuit would extend five times around the globe. This constitutes a very sensitive organism. Put your finger on it in New York, and it vibrates in San Francisco.

Substitute in Facebook and Google for The Associated Press, and these sentences could have been written today. The tectonic shifts of recent decades in the media ecosystem—most notably the rapid proliferation of online news and political opinion outlets, and especially social media—raise concerns anew about the vulnerability of democratic societies to fake news and other forms of misinformation. The shift of news consumption to online and social media platforms[2]  has disrupted traditional business models of journalism, causing many news outlets to shrink or close, while others struggle to adapt to new market realities. Longstanding media institutions have been weakened. Meanwhile, new channels of distribution have been developing faster than our abilities to understand or stabilize them.

A growing body of research provides evidence that fake news was prevalent in the political discourse leading up to the 2016 U.S. election. Initial reports suggest that some of the most widely shared stories on social media were fake (Silverman, 2016), and other findings show that the total volume of news Americans shared from dubious, non-credible sources is comparable to the volume coming from individual mainstream sources such as The New York Times (Lazer, n.d.; a limitation is that this research focused on dissemination rather than consumption of information).

Current social media systems provide a fertile ground for the spread of misinformation that is particularly dangerous for political debate in a democratic society. Social media platforms provide a megaphone to anyone who can attract followers. This new power structure enables small numbers of individuals, armed with technical, social or political know-how, to distribute large volumes of disinformation, or “fake news.” Misinformation on social media is particularly potent and dangerous for two reasons: an abundance of sources and the creation of echo chambers. Assessing the credibility of information on social media is increasingly challenging due to the proliferation of information sources, aggravated by the unreliable social cues that accompany this information. The tendency of people to follow like-minded people leads to the creation of echo chambers and filter bubbles, which exacerbate polarization. With no conflicting information to counter the falsehoods or the general consensus within isolated social groups, the end result is a lack of shared reality, which may be divisive and dangerous to society (Benkler et al., 2017). Among other perils, such situations can enable discriminatory and inflammatory ideas to enter public discourse and be treated as fact. Once embedded, such ideas can in turn be used to create scapegoats, to normalize prejudices, to harden us-versus-them mentalities and even, in extreme cases, to catalyze and justify violence (Greenhill, forthcoming; Greenhill and Oppenheim, forthcoming).

A parallel, perhaps even larger, concern regarding the role of social media, particularly Facebook, is their broad reach beyond partisan ideologues to the far larger segment of the public that is less politically attentive and engaged, and hence less well-equipped to resist messages that conflict with their partisan predispositions (Zaller 1992), and more susceptible to persuasion from ideologically slanted news (Benedictis-Kessner, Baum, Berinsky, and Yamamoto 2017). This raises the possibility that the largest effects may emerge not among strong partisans, but among Independents and less-politically-motivated Americans.

Misinformation amplified by new technological means poses a threat to open societies worldwide. Information campaigns from Russia overtly aim to influence elections and destabilize liberal democracies, while campaigns from the far right of the political spectrum seek greater control of our own. Yet if today’s technologies present new challenges, the general phenomenon of fake news is not new at all, nor are naked appeals to public fears and attempts to use information operations to influence political outcomes (Greenhill, forthcoming). Scholars have long studied the spread of misinformation and strategies for combating it, as we describe next.

Most of us do not witness news events first hand, nor do we have direct exposure to the workings of politics. Instead, we rely on accounts of others; much of what we claim to know is actually distributed knowledge that has been acquired, stored, and transmitted by others. Likewise, much of our decision-making stems not from individual rationality but from shared group-level narratives (Sloman & Fernbach, 2017). As a result, our receptivity to information and misinformation depends less than we might expect on rational evaluation and more on the heuristics and social processes we describe below.

First, source credibility profoundly affects the social interpretation of information (Swire et al., 2017; Metzger et al., 2010; Berinsky, 2017; Baum and Groeling, 2009; Greenhill and Oppenheim, n.d.). Individuals trust information coming from well-known or familiar sources and from sources that align with their worldview. Second, humans are biased information-seekers: we prefer to receive information that confirms our existing views. These properties combine to make people asymmetric updaters about political issues (Sunstein et al., 2016). Individuals tend to accept new information uncritically when a source is perceived as credible or the information confirms prior views; when the information is unfamiliar or comes from an opposing source, it may be ignored.

As a result, correcting misinformation does not necessarily change people’s beliefs (Nyhan and Reifler, 2010; Flynn et al., 2016). In fact, presenting people with challenging information can even backfire, further entrenching people in their initial beliefs. Moreover, even when an individual believes the correction, the misinformation may persist. An important implication of this point is that any repetition of misinformation, even in the context of refuting it, can be harmful (Thorson, 2015; Greenhill and Oppenheim, forthcoming). This persistence is due to familiarity and fluency biases in our cognitive processing: the more often an individual hears a story, the more familiar it becomes, and the more likely the individual is to believe it to be true (Hasher et al., 1977; Schwartz et al., 2007; Pennycook et al., n.d.). As a result, exposure to misinformation can have long-term effects, while corrections may be short-lived.

One factor that does affect the acceptance of information is social pressure. Much of people’s behavior stems from social signaling and reputation preservation. Therefore, there is a real threat of embarrassment in sharing news that one’s peers perceive as fake. This threat provides an opening for fact-checking tools on social media, such as a pop-up warning under development by Facebook. This tool does seem to decrease sharing of disputed articles, but it is unlikely to have a lasting effect on beliefs (Schwartz et al., 2007; Pennycook et al., n.d.). While such tools give existing peers a way to signal that an individual is sharing fake news, another opportunity for intervention is to shift whose content people consume online. Encouraging communication with people who are dissimilar might be an effective way to reduce polarization and fact distortion around political issues.

How Fake News Spreads

Fake news spreads from sources to consumers through a complex ecosystem of websites, social media, and bots. Features that make social media engaging, including the ease of sharing and rewiring social connections, facilitate their manipulation by highly active and partisan individuals (and bots) that become powerful sources of misinformation (Menczer, 2016).

The polarized and segregated structure observed in social media (Conover et al., 2011) is inevitable given two basic mechanisms of online sharing: social influence and unfriending (Sasahara et al., in preparation). The resulting echo chambers are highly homogeneous (Conover et al., 2011b), creating ideal conditions for selective exposure and confirmation bias. They are also extremely dense and clustered (Conover et al., 2012), so that messages can spread very efficiently and each user is exposed to the same message from many sources. Hoaxes have a higher chance of going viral in these segregated communities (Tambuscio et al., in preparation).

Even if individuals prefer to share high-quality information, limited individual attention and information overload prevent social networks from discriminating between messages on the basis of quality at the system level, allowing low-quality information to spread as virally as high-quality information (Qiu et al., 2017). This helps explain higher exposure to fake news online.
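This limited-attention dynamic can be illustrated with a toy agent-based simulation, written in the spirit of the Qiu et al. model rather than as their actual implementation; all parameters, the memory-as-attention mechanism, and the quality-weighted resharing rule are illustrative assumptions. Under information overload (short memories, frequent new memes), a meme's share count correlates only weakly with its intrinsic quality:

```python
import random

def pearson(xs, ys):
    """Pearson correlation of two equal-length numeric sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy) if sx and sy else 0.0

def simulate(num_agents=200, attention=5, steps=2000, mu=0.25, fanout=5, seed=42):
    """Toy model of meme spreading under limited attention.

    At every step one agent either injects a brand-new meme of random
    quality (probability mu) or reshares a meme from its short memory,
    chosen with probability proportional to quality. Shared memes land in
    the memories of a few random followers; old memes fall out of the
    finite memory ("attention"). Returns the quality-popularity correlation.
    """
    random.seed(seed)
    memories = [[] for _ in range(num_agents)]
    quality, shares = {}, {}
    next_id = 0
    for _ in range(steps):
        agent = random.randrange(num_agents)
        if not memories[agent] or random.random() < mu:
            meme, next_id = next_id, next_id + 1
            quality[meme] = 0.01 + 0.99 * random.random()  # quality in (0, 1]
            shares[meme] = 0
        else:
            mem = memories[agent]
            meme = random.choices(mem, weights=[quality[m] for m in mem])[0]
        shares[meme] += 1
        for follower in random.sample(range(num_agents), fanout):
            memories[follower].append(meme)
            if len(memories[follower]) > attention:
                memories[follower].pop(0)  # oldest meme falls out of attention
    memes = list(shares)
    return pearson([quality[m] for m in memes], [shares[m] for m in memes])

if __name__ == "__main__":
    print("quality-popularity correlation:", simulate())
```

Even though agents in this sketch prefer higher-quality memes, the finite memory keeps the system-level correlation between quality and popularity far from perfect, which is the overload effect the paragraph above describes.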

It is possible to leverage structural, temporal, content, and user features to detect social bots (Varol et al., 2017). This reveals that social bots can become quite influential (Ferrara et al., 2016).  Bots are designed to amplify the reach of fake news (Shao et al., 2016) and exploit the vulnerabilities that stem from our cognitive and social biases. For example, they create the appearance of popular grassroots campaigns to manipulate attention, and target influential users to induce them to reshare misinformation (Ratkiewicz et al., 2011).
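As a rough illustration of how such features can be combined, the sketch below scores an account from a handful of user, content, and temporal signals. The field names, thresholds, and weights are hypothetical; systems like the one described by Varol et al. use supervised models trained on many hundreds of features rather than hand-tuned rules:

```python
def bot_score(account):
    """Heuristic bot score in [0, 1] from illustrative account features.

    `account` is a dict with hypothetical keys: tweets_per_day,
    default_profile_image, followers, friends, duplicate_text_ratio.
    """
    score = 0.0
    # Temporal feature: sustained very high posting rates suggest automation.
    if account["tweets_per_day"] > 100:
        score += 0.4
    # User features: default profiles and extreme follower/friend
    # imbalances are common among throwaway amplification accounts.
    if account["default_profile_image"]:
        score += 0.2
    if account["followers"] / max(account["friends"], 1) < 0.01:
        score += 0.2
    # Content feature: near-duplicate posts suggest scripted amplification.
    if account["duplicate_text_ratio"] > 0.8:
        score += 0.2
    return min(score, 1.0)
```

A real detector would learn these weights from labeled data and score probabilistically, but the structure (independent signals pooled into one suspicion score) mirrors the feature families named above.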

On Twitter, fake news shared by real people is concentrated in a small set of websites and highly active “cyborg” users (Lazer, n.d.). These users automatically share news from a set of sources (with or without reading them). Unlike traditional elites, these individuals often wield limited socio-political capital; instead, they leverage their knowledge of platform affordances to build a following around polarized and misinformative content. They can, however, attempt to get the attention of political elites with the aid of social bots. For example, Donald Trump received hundreds of tweets, mostly from bots, with links to the fake news story that three million illegal immigrants voted in the election. This demonstrates how the power dynamics on social media can, in some cases, be reversed, leading misinformation to flow from lower-status individuals to elites.

Contrary to popular intuition, neither fake nor real information, including news, is often “viral” in the implied sense of spreading through long information cascades (Goel et al., 2015). That is, the vast majority of shared content does not spread in long cascades among average people. It is often messages from celebrities and media sources—accounts with high numbers of followers—that increase reach the most, and they do so via very shallow diffusion chains. Thus, traditional elites may not be the largest sharers of fake news content but may be the most important nodes capable of stemming its spread (Greenhill and Oppenheim, n.d.).

Most people who share fake news, whether it gains popularity or not, share lots of news in general. Volume of political activity is by far the strongest predictor of whether an individual will share a fake news story. The fact that misinformation is mixed with other content and that many stories get little attention from people means that traditional measures of quality cannot distinguish misinformation from truth (Metzger et al., 2010). Beyond this, certain characteristics of people are associated with greater likelihood of sharing fake news: older and more extreme individuals on the political spectrum appear to share fake news more than others (Lazer et al., n.d.).

Nation-states and politically motivated organizations have long been the initial brokers of misinformation. Both contemporary and historical evidence suggests that the spread of impactful misinformation is rarely due to simple misunderstandings. Rather, misinformation is often the result of orchestrated and strategic campaigns that serve a particular political or military goal (Greenhill, forthcoming). For instance, the British waged an effective campaign of fake news around alleged German atrocities during WWI in order to mobilize domestic and global public opinion against Germany. These efforts, however, boomeranged during WWII: memories of that fake news fed public skepticism of reports of mass murder (Schudson, 1997).

We must also acknowledge that a focus on impartiality is relatively new to how news is reported. Historically, it was not until the early 20th century that modern journalistic norms of fact-checking and impartiality began to take shape in the United States. It was a wide backlash against “yellow journalism”—sensationalist reporting styles that were spread by the 1890s newspaper empires of Hearst and Pulitzer—that pushed journalism to begin to professionalize and institute codes of ethics (Schudson, 2001).

Finally, while any group can come to believe false information, misinformation is currently predominantly a pathology of the right, and extreme voices from the right have been continuously attacking the mainstream media (Benkler et al., 2017). As a result, some conservative voters are even suspicious of fact-checking sites (Allcott and Gentzkow, 2017). This leaves them particularly susceptible to misinformation, which is being produced and repeated, in fact, by those same extreme voices. That said, there is at least anecdotal evidence that when Republicans are in power, the left becomes increasingly susceptible to promoting and accepting fake news. A case in point is a conspiracy theory, spread principally by the left during the Bush Administration, that the government was responsible for 9/11. This suggests that we may expect to witness a rise in left-wing-promulgated fake news over the next several years. Regardless, any solutions for combating fake news must take such asymmetry into account; different parts of the political spectrum are affected in different ways and will need to assume different roles to counter it.

Making the Discussion Bipartisan

Bringing more conservatives into the deliberation process about misinformation is an essential step in combating fake news and providing an unbiased scientific treatment to the research topic. Significant evidence suggests that fake news and misinformation impact, for the moment at least, predominantly the right side of the political spectrum (e.g., Lazer, n.d.; Benkler, 2017). Research suggests that error correction of fake news is most likely to be effective when coming from a co-partisan with whom one might expect to agree (Berinsky, 2017). Collaboration between conservatives and liberals to identify bases for factual agreement will therefore heighten the credibility of the endeavors, even where interpretations of facts differ. Some of the immediate steps suggested during the conference were to reach out to academics in law schools, economists who could speak to the business models of fake news, individuals who expressed opposition to the rise in distrust of the press, more center-right private institutions (e.g. Cato Institute, Koch Institute), and news outlets (e.g. Washington Times, Weekly Standard, National Review).

Making the Truth “Louder”

We need to strengthen trustworthy sources of information and find ways to support and partner with the media to increase the reach of high-quality, factual information. We propose several concrete ways to begin this process.

First, we need to translate existing research into a form that is digestible by journalists and public-facing organizations. Even findings that are well-established by social scientists, including those given above, are not yet widely known to this community. One immediate project would be to produce a short white paper summarizing our current understanding of misinformation—namely, factors by which it takes hold and best practices for preventing its spread. Research can provide guidelines for journalists on how to avoid common pitfalls in constructing stories and headlines—for instance, by leading with the facts, avoiding repeating falsehoods, seeking out audience-credible interviewees, and using visualizations to correct facts (Berinsky et al., 2017). Similarly, we know that trust in both institutions and facts is built through emotional connections and repetition. News organizations may find ways to strengthen relationships with their audiences and keep them better informed by emphasizing storytelling, repetition of facts (e.g., through follow-ups to previous stories), and impartial coverage (Boaden, 2010).

Second, we should seek stronger future collaborations between researchers and the media. One option is to support their working together in newsrooms, where researchers could both serve as in-house experts and gather data for applied research. Another route is to provide journalists with tools they need to lower the cost of data-based journalism. For example, proposals from the conference included a platform to provide journalists with crowd-sourced and curated data on emerging news stories, similar to Wikipedia. The resource would provide journalists with cheap and reliable sources of information so that well-sourced reporting can outpace the spread of misinformation on social media. Such tools could also provide pointers to data sources, background context for understanding meaningful statistics, civics information, or lists of experts to consult on a given topic.[3]

Third, the apparent concentration of circulated fake news (Lazer et al., n.d.) makes the identification of fake news and interventions by platforms relatively straightforward. While there are examples of fake news websites emerging from nowhere, it may be that most fake news comes from a handful of websites. Identifying the responsibilities of the platforms and securing their proactive involvement will be essential in any major strategy to fight fake news. If platforms dampened the spread of information from just a few websites, the fake news problem might drop precipitously overnight. Further, it appears that the spread of fake news is driven substantially by external manipulation, such as bots and “cyborgs” (individuals who have given control of their accounts to apps). Steps by the platforms to detect and respond to manipulation will also naturally dampen the spread of fake news.

Finally, since local journalism is both valuable and widely trusted, we should support efforts to strengthen local reporting in the face of tightening budgets. At the government level, such support could take the form of subsidies for local news outlets and help obtaining non-profit status. Universities can help by expanding their newspapers’ reporting on local communities.

As academics, we need better insight into the presence and spread of misinformation on social media platforms. In order to understand today’s technologies and prioritize the public interest over corporate and other interests, the academic community as a whole needs to be able to conduct research on these systems. Typically, however, accessing data for research is either impossible or difficult, whether due to platform constraints, constraints on sharing, or the size of the data. Consequently, it is difficult to conduct new research or replicate prior studies. At the same time, there is increasing concern that the algorithms on these platforms promoted misinformation during the election and may continue to do so in the future.

In order to investigate a range of possible solutions to the issue of misinformation on social media, academics must have better and more standardized access to data for research purposes. This pursuit must, of course, accept that there will be varying degrees of cooperation with platform providers. Even with very little collaboration, academics can join forces to create a panel of people’s actions over time, ideally from multiple sources of online activity, both mobile and non-mobile (e.g. MediaCloud, Volunteer Science, IBSEN, TurkServer). The cost of creating and maintaining such a panel can potentially be mitigated by partnering with companies that collect similar data. For example, we could seek out partnerships with companies that hold web panels (e.g. Nielsen, Microsoft, Google, ComScore), TV consumption data (e.g. Nielsen), news consumption data (e.g. Parse.ly, Chartbeat, The New York Times, The Wall Street Journal, The Guardian), polling (e.g. Pollfish, YouGov, Pew), voter registration records (e.g. L2, Catalist, TargetSmart), and financial consumer records (e.g. Experian, Acxiom, InfoUSA). Of course, partnerships with leading social media platforms such as Facebook and Twitter are possible. Twitter provides APIs that make public data available, but sharing agreements are needed to collect high-volume data samples; Facebook would require custom APIs. With more accessible data for research purposes, academics can help platforms design more useful and informative tools for social news consumption.
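One recurring obstacle to assembling such a multi-source panel is linking records across partners without exchanging raw personal data. The sketch below joins two hypothetical record sets on a salted, hashed email key; the field names, the salt-sharing arrangement, and the linkage key are all illustrative assumptions, not a description of any partner's actual pipeline:

```python
import hashlib

def pseudonymize(identifier, salt):
    """Hash a personal identifier so partners can link records without raw PII."""
    normalized = identifier.lower().strip()
    return hashlib.sha256((salt + normalized).encode("utf-8")).hexdigest()

def link_panels(social_rows, voter_rows, salt="study-specific-salt"):
    """Join two data sources on a pseudonymized email key.

    Both inputs are lists of dicts with a hypothetical 'email' field; the
    output keeps the merged attributes plus the opaque panel key, and drops
    the raw email so the linked panel never carries direct identifiers.
    """
    voters = {pseudonymize(r["email"], salt): r for r in voter_rows}
    linked = []
    for row in social_rows:
        key = pseudonymize(row["email"], salt)
        if key in voters:
            merged = {**row, **voters[key]}
            merged.pop("email", None)
            merged["panel_key"] = key
            linked.append(merged)
    return linked
```

In practice such linkage is usually performed by a trusted third party under a data-use agreement, but the salted-hash join conveys the basic mechanics of combining, say, sharing behavior with voter-file attributes.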

Concretely, academics should focus on the social, institutional and technological infrastructures necessary to develop datasets that are useful for studying the spread of misinformation online and that can be shared for research purposes and replicability. Doing so will require new ways of pressuring social media companies to share important data.

The science of misinformation in applied form is still at its genesis. The conference discussions produced several directions for future work.

First, we must expand the study of social and cognitive interventions that minimize the effects of misinformation. Research raises doubts about the role of fact-checking and the effectiveness of corrections, yet there are currently few alternatives to that approach, and the social interventions known to reduce the spread of misinformation have only limited effects. Going forward, the field aims to identify social factors that sustain a culture of truth and to design interventions that help reward well-sourced news.

Technology in general, and social media in particular, not only introduces new challenges for dealing with misinformation but also offers the potential to mitigate it more effectively. Facebook’s recent initiative[4]  of partnering with fact-checking organizations to deliver warnings before people share disputed articles is one example of a technological intervention that can limit the spread of false information. However, questions remain regarding the scalability of the fact-checking approach and how to minimize the impact of fake news before it has been disputed. Furthermore, more research is needed to explore socio-technical interventions that not only stem the flow of misinformation but also affect the beliefs that result from it. More broadly, the question is: what are the necessary ingredients for social information systems that encourage a culture that values and promotes truth?

Tackling the root causes behind common misperceptions can help inform people about the issues at hand and eliminate much of the misinformation surrounding them. Such educational efforts may be particularly beneficial for improving understandings of public policy, which is a necessary component for building trust in the institutions of a civic society. One of the ways to encourage individual thinking over group thinking is to ask individuals to explain the inner workings of certain processes or issues, for example, the national debt (Fernbach et al., 2013).

Previous research highlights two social aspects of interventions that are key to their success: the source of the intervention and the norms in the target community. The literature suggests that direct contradiction is counterproductive, as it may serve to entrench an individual in their beliefs if done in a threatening manner (Sunstein et al., 2016). This threat can be mitigated if statements come from somewhat similar sources, especially when they speak against their own self-interest. Moreover, people respond to information based on the shared narrative they observe from their community; to change opinions, it is therefore necessary to change social norms surrounding those opinions. One possibility is to “shame” the sharing of fake news—encourage the generation of factual information and discourage those who provide false content.

As an alternative to creating negative social impacts for sharing fake news within communities, forming bridges across communities may also foster the production of more neutral and factual content. Since at least some evidence suggests that people have the tendency to become similar to those they interact with, it’s essential to communicate with people across cultural divides. However, in doing so it is important to adhere to modern social psychological understandings of the conditions under which such interactions are likely to result in positive social interactions.

Finally, it is important to accept that not all individuals will be susceptible to intervention. People are asymmetric updaters (Sunstein et al., 2016): those with extreme views may only be receptive to evidence that supports their view, while those with moderate views will more readily update their views in either direction based on new evidence. Focusing on this latter set of individuals may lead to the most fruitful forms of intervention. Alternatively, winning over partisans might undermine misinformation at its source, and thus offer a more robust long-term strategy for combating the issue.

The conference touched on the cognitive, social and institutional constructs of misinformation from both long-standing and more recent work in a variety of disciplines. In this document, we outlined some of the relevant work on information processing, credibility assessment, corrections and their effectiveness, and the diffusion of misinformation on social networks. We described concrete steps for making the science of fake news more inclusive for researchers across the political spectrum, detailed strategies for making the truth “louder,” and introduced an interdisciplinary initiative for advancing the study of misinformation online. Finally, we recognized areas where additional research is needed to provide a better understanding of the fake news phenomenon and ways to mitigate it.

[1]  There is some ambiguity concerning the precise distinctions between “fake news” on the one hand, and ideologically slanted news, disinformation, misinformation, propaganda, etc. on the other. Here we define fake news as misinformation that has the trappings of traditional news media, with the presumed associated editorial processes. That said, more work is needed to develop as clear as possible a nomenclature for misinformation that, among other things, would allow scholars to more precisely define the phenomenon they are seeking to address.
[2]  For example, Pew Research found that 62 percent of Americans get news on social media, with 18 percent of people doing so often: http://www.journalism.org/2016/06/15/state-of-the-news-media-2016/
[3]  Numerous resources exist already, such as journalistsresource.org, engagingnewsproject.org, and datausa.io. Wikipedia co-founder Jimmy Wales, in turn, recently announced plans for a new online news publication, called WikiTribune, which would feature content written by professional journalists and edited by volunteer fact-checkers, with financial support coming from reader donations. So we would need to target definite gaps and perhaps work in conjunction with such a group.

Digital Advertising Trends Worldwide

Digital media advertising may still be dogged by issues like fraud, brand safety and dodgy measurement, but that’s not stopping the flow of ad dollars.

Digital advertising is expected to account for 77 cents of each new ad dollar in 2017, according to GroupM’s “Interaction 2017” report, out this week. Unsurprisingly, Google and Facebook are leading the pack. More than two-thirds of global ad spend growth from 2012 to 2016 came from those two companies.

In 2016, Google and Facebook swallowed 20 percent of the entire global media advertising pie, according to Zenith’s “Top Thirty Global Media Owners” report, also out this week.

And yet, a little sheen may have come off that lately, for Google at least. Privately, agency reps say Google has taken more of a hit from the YouTube crisis than it has admitted publicly. “There has been significant fallout from advertisers pulling spend from YouTube,” said one media agency CEO who spoke on condition of anonymity. “There are a lot of advertisers who would normally spend on YouTube who are still not. Google’s numbers are under pressure quarter on quarter.”
Here’s a look at the current state of global advertising spend, in five charts.

Google and Facebook set the pace
Google, under its holding group Alphabet, still leads Facebook, gobbling up $79.4 billion (£61.4 billion) in ad revenue in 2016, three times more than the social network, which took $26.9 billion (£21 billion) last year. In third place was Comcast, which took $12.9 billion (£10 billion) in ad revenue, according to the Zenith report.

Digital-only media owners took a decent chunk of the top 30 slots for biggest media owners by ad revenue. Among them were Verizon, Twitter, Yahoo, Microsoft and Baidu, which together generated $132.8 billion (£103 billion) in online ad revenue in 2016 — 73 percent of all digital ad spend and 24 percent of global ad spend across all media, according to Zenith.

“Zenith’s new ranking demonstrates just how much the internet advertising platforms are setting the pace for global ad spend growth,” said Jonathan Barnard, head of forecasting at Zenith.

Don’t write off Twitter, though
With all the talk of the duopoly dominating spend and Snapchat being the latest digital darling, Twitter has moved somewhat to the sidelines, as evidenced by several reports showing its recent ad revenue drops. But from 2012 to 2016, Zenith showed Twitter grew the fastest, increasing ad revenues by 734 percent. Tencent is second, growing by 697 percent over the same period to $4.3 billion (£3.3 billion) in 2016, and Facebook is third, with 528 percent growth to $27 billion (£21 billion), according to Zenith.

Advertising growth is concentrated in big cities
Ten cities will contribute 11 percent of all growth in global ad spend between 2016 and 2019, according to a separate report from Zenith called “Advertising Expenditure Forecasts.” Last year, $61 billion (£47.2 billion) was spent targeting the populations of these cities, and that’s expected to rise to $69 billion (£53.4 billion) by 2019. New York takes the biggest chunk, with $15 billion (£12 billion) expected to be spent on advertising this year. Tokyo isn’t far behind, with $13 billion (£10 billion) expected; London’s spend is expected to be $8 billion (£6.2 billion).

Despite uncertainty around how Brexit will affect ad spend, London is predicted to be the second-biggest contributor for growth by 2019, adding $968 million (£629 million) to the global ad market, according to the same forecast.
“People say it’s uncertain times and that budgets are being pulled back, but I don’t buy that,” said Paul Mead, chairman of VCCP Media. “Everyone is trading on a week-by-week, month-by-month basis. They’re not thinking about Brexit and other macro issues. Most of the larger brands know if you turn the tap off from an investment perspective, the revenue is affected.”

TV’s value endures
Digital ad spending has outpaced TV’s in 10 markets to date: Australia, Canada, China, Denmark, Finland, the Netherlands, New Zealand, Norway, Sweden and the U.K., according to GroupM, which predicts another five markets will cross the line in 2017 (Germany, France, Ireland, Hong Kong and Taiwan).

And yet, TV is still relatively resilient, holding at 42 percent of ad spend in 2016.

Programmatic TV at real scale remains a distant promise. Likewise, addressable TV, in which data (like home location, purchasing behavior and income) is overlaid on ads inserted into linear TV ad slots, still faces scale challenges.

It’s a mobile world
Digital media’s share of ad spend continues to grow, albeit slowly — it’s projected to reach 33 percent of all media investment in 2017, compared to 30.7 percent in 2016. People are spending more time with media in general — eight hours a day in 2016, according to GroupM. While the average person consumed nine more minutes of media a day in 2016 versus 2015, people spent an extra 14 minutes specifically with online media during the same period because of mobile growth. While platforms like Snapchat aren’t included in the report, they are top of mind.

“Google and Facebook attracted the vast majority of incremental digital ad investment growth in 2016,” said Adam Smith, futures director at GroupM. “In 2017, the industry will be watching closely to see how Snapchat or Amazon may creep into Facebook’s and Google’s value chain, and whether the stronghold that Baidu, Alibaba, and Tencent have in China can expand to international markets.”

Financial Times gets serious about ad blockers

The British newspaper has decided to block access to its content for all registered users who have an ad blocker installed on their devices. It is the solution the paper arrived at after testing several approaches last July with a small group of readers.

News organizations are continuing their battle against the widespread use of ad blockers. Although the Financial Times’s revenue does not depend exclusively on advertisers (40 percent comes from advertising and 60 percent from subscriptions and events), its business has also been hurt by the practice, with up to 20 percent of its traffic using a blocker (18 percent on desktop and 2 percent on mobile).

In July 2016, the paper ran several experiments with a sample of 15,000 people to see how it could dissuade them from using blockers. During those weeks, registered users who connected to its site from a desktop could see one of three messages. The first offered unrestricted access to the content, accompanied by a polite message explaining the problems ad blocking causes and asking the user to whitelist the site in their blocker. The second allowed free access but with incomplete texts: each article had a number of words removed proportional to the share of revenue the FT loses to blockers, along with an explanation of why. The third option was simply to block access to the content for as long as the user kept using the ad blocker.

The results were as follows: when articles were shown with words removed, 46.59 percent of users chose to whitelist the Financial Times site or disable their blockers. The most effective option, however, turned out to be the most radical one: blocking access. The FT believes that, in the long run, blocking access to content, together with an explanation of the reason, leads to a higher proportion of users whitelisting the site and reading more articles. Specifically, blocking access to articles led 69 percent of users to add the site to their whitelists.

From now on, registered users running ad blockers will see a pop-up message that reads: “We understand your decision to use an ad blocker. However, FT journalism is funded by both subscriptions and advertising. Please add ft.com to your blocker’s whitelist and then refresh your browser to continue.” The text is accompanied by a button linking to an FT help page that explains how to whitelist the paper and lays out its advertising policy.

So far, engagement with the content before and after users whitelisted the site showed no drop in either time spent interacting on the site or the number of articles read, Digiday reports.
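The FT’s exact detection code is not public, but schemes like this depend on reliably detecting the blocker client-side. A common pattern (a sketch only, with all names hypothetical, not the FT’s actual implementation) is the “bait element” check: insert an element with ad-like class names and see whether the blocker’s cosmetic filters hide it.

```javascript
// Decide whether the bait element was hidden by an ad blocker.
// Blockers typically apply display:none to elements matching ad-like
// selectors, so a zero rendered height on an element that should be
// visible is a strong signal.
function baitWasBlocked(baitHeight) {
  return baitHeight === 0;
}

// Browser-side wiring (requires a DOM; shown for completeness).
function detectAdBlocker(onDetected) {
  const bait = document.createElement('div');
  bait.className = 'ad ad-banner adsbox'; // class names blockers commonly target
  bait.style.height = '1px';
  document.body.appendChild(bait);
  // Give the blocker's cosmetic filters a moment to apply.
  setTimeout(() => {
    if (baitWasBlocked(bait.offsetHeight)) {
      onDetected(); // e.g., show the whitelist-request pop-up
    }
    bait.remove();
  }, 100);
}
```

In a real deployment the callback would render the kind of pop-up the FT describes, with a link to whitelisting instructions; detection heuristics like this can misfire (e.g., with privacy extensions), which is one reason publishers pair them with an explanation rather than a silent block.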

Ad buyers are anxious about whether real news will survive

Barbara Laker, left, and Wendy Ruderman, winners of the 2010 Pulitzer Prize for investigative reporting, working last week in the combined newsroom of The Inquirer and The Daily News in Philadelphia. Credit: Elizabeth Robertson
PHILADELPHIA — Hey, America’s Advertisers: You got some good news last week, didn’t you?

Facebook, where you are increasingly placing your advertising, says it will do more to keep live killings, streamed suicides and terrorist videos off its site.

With any luck the 3,000 new content monitors Facebook says it is hiring will be able to remove those sorts of hand grenades from its news feed before any can roll up next to your ads and blow your public images to kingdom come.
That followed similar news from YouTube, owned by Google, where you are spending even more of your advertising money. It announced it was looking for ways to give advertisers more say over where their ads go, after The Times of London recently discovered an automated system had inadvertently put ads from L’Oréal, Nissan and others into videos featuring the anti-Semitic stylings of a hatemonger whose name I will not publicize here.

The question now is whether all of this will give advertisers the assurance they need to keep sending the overwhelming majority of their new online ad dollars to Google and Facebook.

There is more at stake in the answer than the fortunes of our two online overlords. It will also help determine the fate of the rest of the digital media. And that, in turn, will affect whether cities like this one will be able to maintain a vibrant free press that keeps government honest and voters informed.

So, yeah, America’s Advertisers, I’m talking about democracy, and your role in it. News flash: You have one. Let me explain.

We are still very much in the midst of a fascinating, often exciting but sometimes scary digital transformation in which advertising dollars are moving to Google and Facebook in a hurry.

But as those dollars are moving toward Google and Facebook, they are often moving away from quality news and information providers, starving them of the direct digital revenue they need to pay for fact-based news gathering. Real news costs real money; fake news comes cheap.

So you have best-of-times-worst-of-times weeks like the one that just passed. Facebook announced yet another better-than-expected quarter of earnings, just as Google’s corporate parent, Alphabet, had a few days before.
At the same time, word seeped out about layoffs at local Gannett-owned papers including The Independent Mail of Anderson County, S.C., and The Sun-News of Las Cruces, N.M. The McClatchy-owned Tribune of San Luis Obispo, Calif., also confirmed layoffs, The New Times reported.

Here in Philadelphia, reporters at The Inquirer and The Daily News got an email with instructions on how to go about reapplying for jobs in a reorganized newsroom, the latest chapter in their corporate parent’s mad dash to retool the papers for survival in a world dominated by Google and Facebook.

It’s just the latest bit of upheaval in their joint operation, which is now combined into a single newsroom after The Daily News vacated its own. The former Daily News newsroom is just across a hallway from the Inquirer space here on Market Street, and it still sits tauntingly empty save for an old neon “Daily News: The People Paper” sign, which now sits unlit.

The shifting dynamic is especially hard for old-line newspapers, whose lucrative classified ads were rendered obsolete by the likes of Craigslist, and high-margin print advertising fled to the Web. They had hoped to make up the difference through online advertisements. Then Google and Facebook fired up their cash vacuums.

Their draw for marketers is all too powerful, and understandable. Advertisers were once happy to pay to reach big, broad audiences with the hope of getting their messages in front of the right customers — buying the whole cow just to get the milk, it was called — and they would turn to local newspapers for what once passed for geographical precision. Now they can use Facebook and Google to reach a smaller, more targeted audience, right down to the correct ZIP code.

The new environment is forcing newspapers to scramble to come up with a solution that can keep the lights on, and keep the staff large enough to continue to do real, probing journalism, before it’s too late and it’s all over.
If you want to understand the seriousness of the challenge, check out the Columbia Journalism Review’s local journalism issue, out Monday, which has a map of “America’s Growing News Deserts.”

The Inquirer and Daily News were donated last year to a foundation created by their owner, H.F. Lenfest. Credit: Associated Press

The country’s still-standing metropolitan dailies are increasingly requiring extra help from benevolent rich people. The Philadelphia Media Network — the owner of The Inquirer, The Daily News and Philly.com — has been lucky enough to have a benefactor in H. F. Lenfest. Last year, he donated the group to a foundation he created, the Lenfest Institute for Journalism, which he endowed with $20 million.

Last week, the institute followed up by announcing that it had secured $26.5 million more from donors, and Mr. Lenfest committed an additional $40 million in future matching funds, all with a goal of finding “sustainable business models for high-quality journalism.”
In an interview, I asked Mr. Lenfest a leading question: Does the diminishing of local newspapers mean open season for corrupt city officials?

“I don’t think our city government is corrupt — but sometimes they need correcting, and that is the responsibility of the newspaper,” he said. “There will be a tremendous vacuum if these local newspapers don’t continue to print.”
It’s not hyperbole. I grew up in Philadelphia, reading The Inquirer’s Pulitzer Prize-winning exposés of the Philadelphia court system — by a team including Buzz Bissinger — and of the Philadelphia Police Department K-9 unit by William K. Marimow, now the paper’s editor at large.

Even while fighting today’s economic headwinds, the paper has continued the tradition. Its reporting figured prominently in the case of Kathleen Kane, the former Pennsylvania attorney general and a rising-star Democrat, who was convicted last year on charges of abuse of power and perjury, among others.
But as David Boardman, who sits on the Lenfest Institute board of managers, told me, the group “still needs to make a buck” under the terms of its philanthropic support.

“Ultimately, you can’t continue to contribute to something that is inevitably in decline,” said Mr. Boardman, who is the dean of the Klein College of Media and Communication at Temple University.

So the Philadelphia Media Network is preparing for buyouts. And, like all newspaper groups, it is trying to reorganize its newsroom to provide its journalism in new forms that fit with new media consumption habits.

But that will not solve the threat that Facebook and Google pose to news providers. They have the user base — billions — and engineering prowess that no media company can match, which Stan Wischnowski, the Philadelphia Media Network executive editor, told me, is “putting democracy as a whole at risk because it has put at risk the kind of watchdog journalism we’ve been doing for 187 years.”

Facebook is aware of the problem, seems to care about it, and has been working with local papers and foundations — including the Lenfest Institute and the Knight Foundation — to help “support local news and promote independent media.”
It might, for instance, help efforts to increase subscriptions to newspapers with digital paywalls, while making them more adept online and on social platforms.

That brings me to one note of optimism, and it comes from my newspaper, which put up a paywall a few years ago and last week announced a successful quarterly earnings period. The paper reported 308,000 new digital-only subscriptions — certainly helped by President Trump’s regular, er, references to its journalism — and continued double-digit increases in online advertising revenue.

The company’s chief executive, Mark Thompson, told investors that The Times seemed to be benefiting from advertisers’ desire to avoid being adjacent to “dishonest or tawdry content.”
We can hope that’s the start of something more lasting, and not a temporary move to a safe harbor until the squall passes.

Facebook is working to do its part to help newspapers. Newspapers are trying to modernize. Now it is time for advertisers to do their part to support the people who make the quality content they want to be associated with, and to reconsider their headlong rush away from them.

That isn’t a plea for charity. It’s a plea for common sense. If current trends continue, there will be far less quality content to fill the big platforms advertisers are so in love with. They should think about what will be left.

Digital advertising overtakes TV advertising in 15 countries

Brands increasingly see the internet as the ideal platform for their advertising campaigns. There are already ten markets in which digital ad spend has overtaken television, and forecasts suggest another five countries will join them in 2017.

GroupM, the world leader in media investment management, has produced a report on trends in the advertising market and in news companies worldwide, forecasting the growth of digital advertising in 46 markets. According to the figures published in “Interaction 2017,” digital advertising will take 77 cents of every new ad dollar invested this year, while television will take only 17 cents.

The study notes that digital advertising keeps growing rapidly despite the challenges facing online advertising, such as measurement. Marketers follow consumers to their destinations: the media where they are spending more and more time. Time spent with digital media rose by 14 minutes in 2016 thanks to greater access through mobile devices.

At present, there are ten markets in which digital spend has already overtaken television: Australia, Canada, China, Denmark, Finland, the Netherlands, New Zealand, Norway, Sweden and the United Kingdom. The report forecasts that another five will join them over the course of 2017: France, Germany, Ireland, Hong Kong and Taiwan.

For now, however, GroupM’s figures show that television remains the queen of advertising when global data are aggregated. Television’s share of ad spend held steady in 2016, at around 42 percent. Traditional TV’s problem is that 16-to-24-year-olds are drifting away from it.