How Social Media Algorithms Affect Human Behavior

Social media has become so deeply integrated into our lives that it’s hard to imagine a world without it. However, previous research has revealed a range of negative effects of social networks on both physical and psychological health. For example, Instagram (banned in Russia) is considered one of the most harmful social networks, and Twitter (now “X”), acquired by Elon Musk, has often been used in scientific studies to examine users’ mental states. Recently, researchers have turned their attention to social media algorithms and our daily interactions with them. New findings show that being online can negatively impact how we learn and communicate with each other in the real world. This is largely because algorithms partially determine which messages, people, and ideas users see on social networks.

How Do Social Media Algorithms Work?

In the past, people used social networks mainly to communicate and share photos. Today, platforms old and new also attract brands seeking advertising reach and audience growth. The sheer number of users creates a need to organize information, and that is where algorithms come in.

An algorithm is a set of mathematical rules that determines how data is selected and delivered. On social media, algorithms organize search results, advertisements, and feeds. For example, Facebook (banned in Russia) uses an algorithm to rank pages and content in a specific order.

While platforms rarely publish exactly how their algorithms work, advertisers know enough to attract audiences and achieve success. All it takes is posting relevant, high-quality content and actively engaging with followers. Since algorithms sort the information in users' feeds, each social network sets its own priorities for content. Marketers love algorithms, but they are far from perfect. The main goal of an algorithm is to filter out irrelevant or low-quality content, but in practice this means users might never see certain posts if they don't meet the platform's criteria.
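The filter-then-rank logic described above can be sketched as a toy model. The scoring weights and the quality threshold below are invented for illustration; no real platform publishes its actual formula.

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    likes: int
    comments: int
    shares: int
    quality: float  # hypothetical 0-1 score from a content classifier

def rank_feed(posts, min_quality=0.4):
    """Filter out low-quality posts, then order the rest by engagement."""
    # Posts below the quality threshold never reach the feed at all --
    # this is the "users might never see certain posts" effect.
    visible = [p for p in posts if p.quality >= min_quality]
    # Illustrative weights: comments and shares count more than likes.
    score = lambda p: p.likes + 3 * p.comments + 5 * p.shares
    return sorted(visible, key=score, reverse=True)

feed = rank_feed([
    Post("alice", likes=120, comments=4,  shares=1,  quality=0.9),
    Post("bob",   likes=10,  comments=30, shares=12, quality=0.7),
    Post("carol", likes=500, comments=0,  shares=0,  quality=0.2),  # filtered out
])
print([p.author for p in feed])  # → ['bob', 'alice']
```

Note how "carol" is invisible despite having the most likes: the filter runs before the ranking, so the threshold, not popularity, decides what users can see at all.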

A recent analysis of YouTube, for example, showed that 64% of users watched videos containing false or misleading information due to how algorithms work. As a result, online platforms regularly adjust their algorithms.

The Impact of Algorithms on User Behavior

Since algorithms control the flow of information on social networks, they determine what content each user sees. These algorithms also help keep audiences engaged, encouraging them to click on external links and return to the platform. Recently, a team of scientists from Northwestern University found evidence of a side effect of constant algorithmic curation: users draw conclusions about the social world from the skewed sample of it that their feeds show them.

The authors of a study published in Trends in Cognitive Sciences refer to the emotional and in-group information that algorithms surface to users as "primary" information. In our evolutionary past, learning from primary information was extremely useful: learning was more likely to succeed when people followed prestigious, authoritative teachers, and paying attention to people who broke moral or social norms helped sustain cooperation, according to the study.

But what happens when “primary” information is amplified by algorithms, and some people use this to promote themselves on social media? Does prestige become a poor indicator of success, and do news feeds become overloaded with negative information?

The answer to both questions is yes, which, according to the study’s authors, does not promote cooperation between people (either online or offline). “Human psychology and algorithmic amplification create problems because social media algorithms are designed to increase engagement, not cooperation between people,” the experts note.

Why Does This Matter?

The dysfunction identified in the study arises because people start to form inaccurate perceptions of their environment. Recent research has shown that when algorithms selectively amplify more extreme political narratives, people begin to believe that their political opponents dislike each other more than they actually do.

This “false polarization” can become a significant source of even greater political conflict, writes one of the study’s co-authors in an article for The Conversation.

What’s Next?

There are still very few studies on this topic, but new research is increasingly examining the key components of social learning mediated by algorithms. Some studies have shown that social media algorithms clearly amplify primary information. Still, to fully assess the impact of algorithms on our ability to learn and cooperate, more research is needed. At the same time, most of the necessary data is held by the companies themselves, which are not always willing to share it, citing user privacy and ethical concerns.

The key question, according to the new study’s authors, is what can be done to ensure that algorithms promote successful social learning among users, rather than the opposite. Currently, scientists at Northwestern University are working on developing algorithms that increase engagement while also limiting access to primary information. According to the researchers, such algorithms could help maintain user activity and improve how people perceive each other socially.
