How Facebook (and Others) Manipulate Your Privacy Choices
In 2010, the Electronic Frontier Foundation had had enough of Facebook’s pushy interface design, which, as Wired recounts, was increasingly pressuring people into giving up their privacy. But what do you call this kind of pressure? Zuckermining? Facebaiting? Zuckerpunch? The term that stuck was Privacy Zuckering: being tricked into publicly sharing more about yourself than you originally intended.
Over the past decade, Facebook has weathered scandal after scandal and seen firsthand how much these manipulations worry people. Just last year, the company paid a $5 billion fine to the US Federal Trade Commission for “deceiving users about their ability to control the privacy of their personal data.” Yet researchers have found that Privacy Zuckering and other shady tactics are alive and well online, especially on social networks, where privacy controls are more confusing than anywhere else.
Back in 2010, Facebook let users “opt out” of having partner sites collect their public information from the social network, but anyone who tried saw a discouraging pop-up: “Are you sure? Instant personalization will give you more features when browsing the web.” Until recently, Facebook similarly warned people against turning off facial recognition: “If you turn off face recognition, we won’t be able to use this technology if a stranger uses your photo to impersonate you.” The button that keeps the setting on is bright blue, while the one that disables it is gray and easy to miss.
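That kind of visual nudging is simple to build. Below is a minimal, hypothetical TypeScript/DOM sketch of interface interference through asymmetric button styling; the function name, colors, and copy are illustrative assumptions, not Facebook’s actual code:

```typescript
// Hypothetical sketch: the option the company prefers gets a bright,
// high-contrast button, while the opt-out fades into the background.
// Nothing here is taken from Facebook's real markup or styles.
function renderPrivacyPrompt(container: HTMLElement): void {
  const keepOn = document.createElement("button");
  keepOn.textContent = "Keep Face Recognition On";
  // Bold, saturated, and large: the path of least resistance.
  keepOn.style.cssText =
    "background:#1877f2;color:#fff;font-weight:bold;padding:12px 24px;border:none;border-radius:6px;";

  const turnOff = document.createElement("button");
  turnOff.textContent = "Turn Off";
  // The equally valid choice is rendered small, gray, and easy to overlook.
  turnOff.style.cssText =
    "background:transparent;color:#8a8d91;font-size:12px;padding:4px;border:none;";

  container.append(keepOn, turnOff);
}
```

Both buttons do exactly what they say; the manipulation lies entirely in how much visual weight each choice is given.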
What Are Dark Patterns?
Researchers call these design and language choices dark patterns—tactics that try to manipulate your decisions. Instagram nags you to “please turn on notifications” and doesn’t offer a way to decline? That’s a dark pattern. LinkedIn shows you part of a message in an email but forces you to visit the platform to read more? That’s one too. Facebook redirects you to “log out” when you try to deactivate your account? Again, a dark pattern.
Dark patterns are everywhere online, nudging people to subscribe to newsletters or services, or to buy products. Colin Gray, a human-computer interaction researcher at Purdue University, has been studying dark patterns since 2015. He and his team have identified five main types:
- Nagging
- Obstruction
- Sneaking
- Interface interference
- Forced action
These tactics aren’t limited to social networks. They’ve spread across the internet, especially since the General Data Protection Regulation (GDPR) took effect in 2018 and sites became required to ask for consent before collecting certain types of data. Yet some consent banners simply ask you to accept the privacy policy, with no way to say “no.” “Some studies have shown that over 70% of consent banners in the EU have some kind of dark pattern built in,” says Gray.
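To see what that looks like in practice, here is a hedged TypeScript sketch contrasting a banner with no refusal path against one that treats “accept” and “decline” symmetrically. All names and wording are hypothetical, not drawn from any real site:

```typescript
// Dark-pattern version: the banner "asks" for consent but offers only
// one affirmative path. There is literally no way to say "no."
function showDarkConsentBanner(): void {
  const banner = document.createElement("div");
  banner.textContent =
    "We use cookies to improve your experience. By continuing, you accept our privacy policy.";

  const accept = document.createElement("button");
  accept.textContent = "OK"; // the only button on offer
  accept.onclick = () => banner.remove();

  banner.append(accept);
  document.body.append(banner);
}

// A GDPR-minded alternative: refusal is as easy and prominent as consent.
function showFairConsentBanner(onDecision: (consented: boolean) => void): void {
  const banner = document.createElement("div");
  banner.textContent = "May we use cookies for analytics?";

  for (const [label, consented] of [["Accept", true], ["Decline", false]] as const) {
    const btn = document.createElement("button");
    btn.textContent = label;
    btn.onclick = () => {
      onDecision(consented);
      banner.remove();
    };
    banner.append(btn);
  }
  document.body.append(banner);
}
```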
The Legal and Ethical Challenge
Last year, U.S. Senators Mark Warner and Deb Fischer introduced the DETOUR Act, a bill to ban such “manipulative user interfaces.” The problem is that dark patterns are very hard to define. “Any design has a certain degree of persuasion,” says Victor Yocco, author of Design for the Mind: Seven Psychological Principles of Persuasive Design.
By definition, design encourages you to use a product in a certain way, which isn’t necessarily bad. It becomes a problem when the design is meant to deceive. Gray, too, has struggled to distinguish dark patterns from plain bad design, which is partly why his team created a framework to define them. Dark patterns strip away user choice and nudge you toward decisions that benefit the company rather than you, using strategies like distorting information, bargaining, and duplicity (an ad blocker that itself contains ads).
Gray gives the example of the smartphone app Trivia Crack, which nags users to play another game every two or three hours. Social networks have used such spammy notifications for years to trigger the kind of FOMO (fear of missing out) that keeps you hooked. “We know that if we give people things like swiping or status updates, they’re more likely to come back,” says Yocco. “This can also lead to compulsive behavior.”
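The mechanic behind those notifications is little more than a timer that keeps re-prompting until you engage. Here is a purely illustrative TypeScript sketch; the interval, message, and function names are invented for the example, not taken from Trivia Crack or any real app:

```typescript
// Roughly "every two or three hours," per the pattern described above.
const NAG_INTERVAL_MS = 2.5 * 60 * 60 * 1000;

// Starts the nag loop and returns a cancel function that only the app
// itself holds: the user is given no "stop asking" option.
function startNagging(sendNotification: (message: string) => void): () => void {
  const timer = setInterval(() => {
    sendNotification("Your friends are waiting! Play another game now.");
  }, NAG_INTERVAL_MS);
  return () => clearInterval(timer);
}
```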
Leaving Social Platforms Isn’t Easy
The darkest scenarios arise when people try to leave these platforms altogether. Try deactivating your Instagram account, and you’ll find it’s extremely difficult. First, you can’t even do it in the app. On the desktop site, the setting is hidden inside “Edit Profile” and comes with a series of questions: “Why are you disabling your account? Too distracting? Try turning off notifications here. Just need a break? Log out instead of leaving entirely.”
“This creates obstacles in your way, making it harder to go through with it,” says Nathalie Nahai, author of Webs of Influence: The Psychology of Online Persuasion. Years ago, when she deleted her Facebook account, she encountered a similar set of manipulative tactics. The social network showed her photos of her closest friends, as if asking whether she really wanted to leave them behind. “They use language that I think is coercive,” Nahai says. “It makes it psychologically painful to leave.”
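Those obstacles can be modeled as a gauntlet of interstitial prompts, each dangling an escape hatch back into the product. The sketch below is a hypothetical TypeScript rendering of such an obstruction flow, loosely modeled on the Instagram questions above rather than on any platform’s actual code:

```typescript
// Each detour the user must wave off before reaching the real setting.
type Interstitial = { prompt: string; escapeHatch: string };

const deactivationHurdles: Interstitial[] = [
  { prompt: "Why are you disabling your account?", escapeHatch: "Cancel" },
  { prompt: "Too distracting?", escapeHatch: "Turn off notifications instead" },
  { prompt: "Just need a break?", escapeHatch: "Log out instead" },
];

// confirmStep returns true if the user pushes past the detour.
// A single wavering click anywhere in the chain aborts the whole flow.
function runDeactivationFlow(confirmStep: (step: Interstitial) => boolean): boolean {
  for (const step of deactivationHurdles) {
    if (!confirmStep(step)) return false;
  }
  return true; // only now is the user allowed to deactivate
}
```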
Worse still, Gray says, research shows that most people don’t even realize they’re being manipulated.
But according to one study, when people were warned in advance about manipulative designs, twice as many users were able to recognize dark patterns. At least there’s hope that greater awareness can help people regain some control.