Facebook has been accused of putting its desire for ad revenue above the need to keep users safe, prioritizing "growth over safety". These latest claims from the Facebook whistleblower behind The Wall Street Journal's Facebook Files series were aired on CBS News's 60 Minutes last night, and will have repercussions for brands' future relationships with the social network.
Frances Haugen, who worked at Facebook from 2019 on its misinformation team, stated that the company is aware of the disinformation on its platforms and capable of doing more to curb it, but chooses not to in service of engagement and advertising revenue.
She said: “Facebook has realized that if they change the algorithm to be safer, people will spend less time on the site, they’ll click on less ads, and [Facebook] will make less money.”
While previous reports have hinted at it, this is the first time that advertising money in particular has been brought into stark focus as the reason behind Facebook’s decisions revealed by the leaks.
For the full year of 2020, Facebook reported that almost all of its revenue came from advertising: full-year ad revenue of $84,169m, up 21% on the previous year and roughly 98% of its total revenue. That growth figure may be perceived more negatively in the wake of Haugen's statements, as her central argument is that it came at the expense of user safety.
Haugen cites the temporary safety measures Facebook took during the latest US election as evidence that it simply chooses not to prioritize user safety. She also spoke about the Capitol Hill riots in January, stating that through its inaction and lack of consideration around safety concerns, Facebook helped fuel the violence.
She said: “There were conflicts of interest between what was good for the public and what was good for Facebook. Facebook over and over again chose to optimize for its own interests, like making more money.”
It is not the first time that Facebook's changes to its algorithm to favor engagement have created controversy. In 2018 the social network began deprioritizing content from publishers and businesses in order to curb what was termed the 'context collapse' of its newsfeed. Instead, it promoted more content and updates from a user's network of connections, and it has subsequently come under fire for not properly vetting that information for its potential to cause harm.
Haugen said: “It’s one of these unfortunate consequences, right? No one at Facebook is malevolent, but the incentives are misaligned, right? Like, Facebook makes more money when you consume more content. People enjoy engaging with things that elicit an emotional reaction. And the more anger that they get exposed to, the more they interact and the more they consume.”
The social giant – which has been launching a number of new advertising products in recent years aimed at capturing local ad spend – has also been accused of hiding internal research which demonstrated its Instagram platform had caused harm to teenage girls.
Haugen says that many Facebook employees had internally warned of the issues prior to the Capitol Hill riots and were ignored. This tracks with a number of the revelations from earlier reports in The Wall Street Journal, which made Facebook's ongoing knowledge of these issues central to their argument.
On one internal message board – copied by Haugen – one employee notes that Facebook’s leadership had been watering down responses to the bigger issues: “Welcome to Facebook! I see you just joined in November 2020 ... we have been watching ... wishy-washy actions of company leadership for years now.”
The fight goes on
Facebook has been accused of effectively being beyond the regulation of any one country, given its size and international reach. Haugen believes that Facebook’s lack of any oversight and its apparent inaction on a number of major issues is pushing countries – particularly in Europe – into more extreme measures in dealing with the social network. Perhaps not coincidentally, Haugen is set to give evidence to the Joint Committee on the Online Safety Bill in the UK later this month.
<blockquote class="twitter-tweet" data-partner="tweetdeck"><p lang="en" dir="ltr">Important interview with whistleblower Frances Haugen for <a href="https://twitter.com/60Minutes?ref_src=twsrc%5Etfw">@60Minutes</a> highlighting how Facebook puts profits before harm. This month she will also give evidence about this to the Joint Committee on the Online Safety Bill <a href="https://twitter.com/OnlineSafetyCom?ref_src=twsrc%5Etfw">@OnlineSafetyCom</a> <a href="https://t.co/HXqg5xdWZu">https://t.co/HXqg5xdWZu</a></p>— Damian Collins (@DamianCollins) <a href="https://twitter.com/DamianCollins/status/1444935101419294724?ref_src=twsrc%5Etfw">October 4, 2021</a></blockquote>
<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>
As with much of the rest of the leaks, Facebook has pushed back. It provided a statement to 60 Minutes stating: “Every day our teams have to balance protecting the right of billions of people to express themselves openly with the need to keep our platform a safe and positive place. We continue to make significant improvements to tackle the spread of misinformation and harmful content. To suggest we encourage bad content and do nothing is just not true.
“If any research had identified an exact solution to these complex challenges, the tech industry, governments and society would have solved them a long time ago.”
The role of brands
The possibility of brands inadvertently funding hateful content or disinformation has been brought to the fore this year. Last week Joshua Lowcock, global chief brand safety officer at UM Worldwide, an agency within the Mediabrands network, said: "While some platforms have policies on disinformation and misinformation, they are often vague or inconsistent, opening the door to bad actors exploiting platforms in a way that causes real-world harm to society and brands."
Haugen's claims, however, suggest that Facebook itself is one of the bad actors by refusing to curb the potential harm of disinformation on its platforms. As with the previous recommendations in IPG Mediabrands' report into the impact of brands appearing opposite harmful content, the answer might lie in collective action by brands.
Harrison Boys, the director of standards and investment product EMEA at Magna and author of the report, said: “Marketers are right to be concerned when they find their advertising near misleading content as, unchecked, it could harm their reputations and the communities they serve. The industry, which joined forces against online hate speech and supported online privacy, needs to take a stand against misinformation and disinformation today.”
Collective action on the scale required to make a difference might be hard to coordinate, particularly given the dominance Facebook and its products have over the online advertising ecosystem. In the wake of Haugen's statements, however, the pressure is on brands to ensure that, by advertising on Facebook, they do not become complicit in the spread of harmful disinformation.