Published on: October 18, 2021
A pair of stories this weekend examined different aspects of Facebook's business model - one having to do with how it targets teens, the other focusing on its inability to filter out hate speech. Both are worth reading: beyond offering a perspective on Facebook's corporate mentality, they illustrate the degree to which the social media behemoth has a target on its back:
• From the New York Times:
"When Instagram reached one billion users in 2018, Mark Zuckerberg, Facebook’s chief executive, called it 'an amazing success.' The photo-sharing app, which Facebook owns, was widely hailed as a hit with young people and celebrated as a growth engine for the social network.
"But even as Mr. Zuckerberg praised Instagram, the company was privately lamenting the app's loss of teenage users to other social media platforms as an 'existential threat,' according to a 2018 marketing presentation.
"By last year, the issue had become more urgent, according to internal Instagram documents obtained by The New York Times. 'If we lose the teen foothold in the U.S. we lose the pipeline,' read a strategy memo, from last October, that laid out a marketing plan for this year.
"In the face of that threat, Instagram left little to chance. Starting in 2018, it earmarked almost its entire global annual marketing budget - slated at $390 million this year - to targeting teenagers, largely through digital ads, according to planning documents and people directly involved in the process. Focusing so singularly on a narrow age group is highly unusual, marketers said, though the final spending went beyond teenagers and encompassed their parents and young adults.
"The Instagram documents, which have not previously been reported, reveal the company’s angst and dread as it has wrestled behind the scenes with retaining, engaging and attracting young users. Even as Instagram was heralded as one of Facebook’s crown jewels, it turned to extraordinary spending measures to get the attention of teenagers. It particularly emphasized a category called 'early high school,' which it classified as 13- to 15-year-olds."
You can read this story here.
• From the Wall Street Journal:
"Facebook Inc. executives have long said that artificial intelligence would address the company’s chronic problems keeping what it deems hate speech and excessive violence as well as underage users off its platforms.
"That future is farther away than those executives suggest, according to internal documents reviewed by the Wall Street Journal. Facebook’s AI can’t consistently identify first-person shooting videos, racist rants and even, in one notable episode that puzzled internal researchers for weeks, the difference between cockfighting and car crashes.
"On hate speech, the documents show, Facebook employees have estimated the company removes only a sliver of the posts that violate its rules—a low-single-digit percent, they say. When Facebook’s algorithms aren’t certain enough that content violates the rules to delete it, the platform shows that material to users less often - but the accounts that posted the material go unpunished."
The story goes on: "The documents reviewed by the Journal also show that Facebook two years ago cut the time human reviewers focused on hate-speech complaints from users and made other tweaks that reduced the overall number of complaints. That made the company more dependent on AI enforcement of its rules and inflated the apparent success of the technology in its public statistics.
"According to the documents, those responsible for keeping the platform free from content Facebook deems offensive or dangerous acknowledge that the company is nowhere close to being able to reliably screen it."
This story can be read here.