Multitasking and Cognitive Traps: How Our Minds Work

Multitasking and Cognitive Traps

Cognitive biases are systematic errors in our perception and thinking, caused by a variety of factors. The field is now well studied; just look at the diagram in the relevant Wikipedia article to see the breadth of the research. Cognitive biases include any phenomenon where the result of a cognitive act is “not consistent with reality.” I put this in quotes intentionally: for example, the Russian psychologist Lev Vygotsky, founder of the cultural-historical approach, emphasized that the psyche is inherently subjective and shaped by the subject’s way of life; its purpose is to filter and modify reality so that the individual can act more effectively in the world. In other words, cognitive biases are an integral part of our psyche, which, for all its evolutionary refinement, sometimes lets us down.

One type of cognitive bias is what everyday psychology calls “thinking traps.” In reality, such traps can occur in perception and memory as well. Children (and not just children) often test each other for susceptibility to the Einstellung effect, a classic thinking trap. Try asking someone three questions in a row: “What do you lay when installing the internet?”, “Where were grades recorded in school in the past?”, “Who killed Cain?” On the third question, your friend may blurt out an answer they never intended to give, simply because you set up a trap using one of many possible sources of cognitive bias. (The trick works in the original Russian because the answers to the first two questions, “kabel,” a cable, and “tabel,” a report book, rhyme with “Avel,” Abel, so “Abel” pops out in reply to the third question, even though it was Cain who killed Abel, not the other way around.)

Cognitive Biases

It’s important to distinguish between perception (building an image) and thinking (making judgments, drawing conclusions, and forming generalizations). Daniel Kahneman, who won the Nobel Prize in 2002, and his late colleague Amos Tversky described the diversity of thinking traps better than anyone else.

Daniel Kahneman is an Israeli-American psychologist who received the Nobel Prize in Economics “for having integrated insights from psychological research into economic science, especially concerning human judgment and decision-making under uncertainty.”

In their research on human bounded rationality, Kahneman and Tversky identified a number of so-called thinking heuristics. In psychology and programming, a heuristic is any rule that reduces the search space, speeding up problem-solving but not guaranteeing the correct answer.
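
To make the definition concrete, here is a minimal programming-style sketch (an illustration added for this point, not an example from Kahneman and Tversky’s work): a greedy rule for making change that always grabs the largest coin. The rule sharply narrows the search and runs fast, but for some coin sets it returns a worse answer than an exhaustive method would.

```python
def greedy_change(amount, coins):
    """Heuristic: always take the largest coin that still fits.
    Fast (a single pass over the sorted coins), but not guaranteed optimal."""
    taken = []
    for coin in sorted(coins, reverse=True):
        while amount >= coin:
            amount -= coin
            taken.append(coin)
    return taken


def optimal_change(amount, coins):
    """Exhaustive search via dynamic programming: considers every amount up
    to the target, so it is slower but always finds the fewest coins."""
    best = {0: []}  # best[a] = shortest list of coins summing to a
    for a in range(1, amount + 1):
        options = [best[a - c] + [c] for c in coins if a - c in best]
        if options:
            best[a] = min(options, key=len)
    return best.get(amount)


coins = [25, 10, 1]
print(greedy_change(30, coins))   # [25, 1, 1, 1, 1, 1] -> 6 coins: fast but wrong
print(optimal_change(30, coins))  # [10, 10, 10]        -> 3 coins: slower but exact
```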

In Kahneman’s account of bounded rationality, heuristics are mental shortcuts that normally help us adapt efficiently but can produce systematic biases and lead us to incorrect conclusions. For example, we may take a minimal context, or “frame,” as our reference point instead of a broader one; we may rely only on the information available at the moment rather than seeking out richer data; or we may select only information that confirms our viewpoint and ignore anything that challenges it.

These are just a few examples of the many cognitive biases. The confirmation bias was first described by Peter Wason, who tried to understand why even scientists make mistakes in their experiments. He explained that scientific experiments are often designed to confirm a theory, while opportunities to disprove it are ignored, which contradicts the formal logic of science. This doesn’t mean it’s done intentionally; researchers, like everyone else, fall victim to cognitive bias and unconsciously seek favorable data. Another example is the Einstellung effect: if we’re used to solving problems a certain way, we may stick to that method even when a faster, more effective one is available.

How Can We Avoid Cognitive Biases?

Interesting research has been done on baggage screening for dangerous items. Experiments showed that objects rarely encountered in a person’s visual experience are highly likely to be missed, even if the observer is warned in advance. Experts are no better than novices in this regard. It’s understandable: not every suitcase contains a bomb, but when one does, the chance of missing it is high simply because such objects are rare in the screener’s professional experience.

How can this error be avoided? During training, screeners should be frequently shown “positive samples”: images of the objects that must not be missed. No matter how experienced a person is, they need this kind of training. Overcoming cognitive biases is not easy; after all, they are the product of thousands of years of evolution. They are an important adaptation that generally serves us well, but in the complexity of modern life they sometimes lead to mistakes. In some cases, if we understand the cause of an error, we can learn to overcome it; sometimes it’s enough just to be aware of its existence. For example, knowing that everyone tends to select information that confirms their viewpoint, we can design experiments so that they could potentially disprove our own hypotheses. But we can’t anticipate every error: as the famous “gorilla suit” experiment shows, some mistakes are simply unavoidable.

Interestingly, research has found that high-functioning autistic individuals are less prone to this particular error. Even when shown rare objects, they don’t experience the cognitive bias that causes others to miss them. This makes such work a good fit for them, though other cognitive factors must be considered.

Overall, these studies offer real ways to correct cognitive biases. But thinking traps are harder to avoid. Of course, you can systematically doubt, distrust, and double-check everything, but that’s not always practical in real life. Still, knowing about possible sources of bias can help us make better decisions, even at the government level.

Multitasking and Task Switching: Can You Be Julius Caesar?

Scientists suggest that teenagers who have used gadgets since childhood are better at switching between tasks but have more trouble focusing on just one thing. However, comparing young people with those around 30 is tricky, since ages 20 and 30 lie on either side of the well-established peak of cognitive development; after 30, cognitive processes gradually decline. Still, the fact that the next generation will focus differently doesn’t mean it will focus worse; it will simply do it differently. The British neuroscientist Susan Greenfield notes that any technology develops some cognitive skills at the expense of others. All adult cognitive processes (attention, memory, thinking) are culturally mediated, shaped by the tools used in a given society at a given time. These tools are already built into the structure of the psyche.

The concept of extended cognition, used by philosopher Andy Clark and others, is relevant here. It suggests that smartphones and other gadgets are now part of the human cognitive system—so much so that you can’t separate them from our thinking. Adults learn to use them deliberately, like learning a second or third language. For the younger generation, digital tools are like a first language, built into their psyche from the start and shaping it in specific ways.

Back in the 1970s, it was shown that multitasking can be trained. Cognitive psychology pioneer Ulric Neisser described an experiment in his book “Cognition and Reality”: his graduate students Elizabeth Spelke and William Hirst trained two biology students for seventeen weeks to read and comprehend text while simultaneously taking dictation. At first the students didn’t even register what they were writing, but by the end they could not only write down the dictated words but also categorize them, performing a mental operation on them while still reading and understanding the text, and they could answer questions about what they had read. So becoming a “Julius Caesar” is possible, though researchers doubt Caesar himself was truly multitasking; he may simply have dealt with affairs of state during boring gladiator fights. Modern kids have even more opportunities to practice, so their multitasking skills are indeed better developed.

That doesn’t necessarily mean their concentration is worse. Concentration is best measured by sustained attention, the ability to stay focused on one thing for a long time, and this stands in some tension with the habit of constant switching. A person may switch easily, moving from one task to another and back, yet still be able to maintain long-term focus when needed. The modern generation simply tends to favor switching: not just the ability to switch, but the habit, even the necessity, of constantly jumping between information sources to take in as many as possible. The more sources we want to cover, the less time we spend on each, grabbing the most striking information without going deep. Sustained attention doesn’t so much deteriorate as go unused.

Consolidating Results

In describing the multitasking experiment, Ulric Neisser wanted to show that the claim that human attention is strictly limited is mistaken: what matters is how well different actions become coordinated through practice.

Children and teens with gadgets in hand can constantly train their attention every day. Even if teachers scold them for “always being on their phones,” it’s not in their interest to stop—otherwise, they lose their multitasking edge. It’s unlikely that someone who grew up with a smartphone will live their adult life without one, just as someone used to working on a computer is unlikely to switch to a typewriter.

Attentional Blink

There’s a phenomenon called “attentional blink,” which is well studied in the lab. It refers to a reliable failure to detect a second target when two targets appear in quick succession at the same location. For example, if you’re shown a rapid sequence of letters or images and asked to identify two targets (say, two gray letters among black ones), you’ll almost always spot the first, but if the second follows within about half a second, you’ll very likely miss it; this is the attentional blink. You’re staring right at it without blinking, yet your attention “blinks” and you miss what’s right in front of you.
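
For readers who want to see what such a trial looks like more concretely, here is a minimal sketch of building one rapid-serial-presentation stream with two embedded targets. The presentation rate, letter set, and “blink window” values are illustrative assumptions, not parameters taken from any particular study.

```python
import random
import string

# Illustrative parameters (assumptions): RSVP studies typically show roughly
# ten items per second, and the second target is most often missed when it
# follows the first by about 200-500 ms.
ITEM_DURATION_MS = 100
BLINK_WINDOW_MS = (200, 500)


def make_rsvp_trial(stream_length=20, t1_position=5, lag_items=3):
    """Build one rapid-serial-visual-presentation trial: a stream of
    distractor letters with two target letters (T1 and T2) embedded in it."""
    letters = random.sample(string.ascii_uppercase, stream_length)
    t2_position = t1_position + lag_items
    stream = [{"letter": ch, "is_target": i in (t1_position, t2_position)}
              for i, ch in enumerate(letters)]
    lag_ms = lag_items * ITEM_DURATION_MS
    inside_blink = BLINK_WINDOW_MS[0] <= lag_ms <= BLINK_WINDOW_MS[1]
    return stream, lag_ms, inside_blink


stream, lag_ms, inside_blink = make_rsvp_trial(lag_items=3)
targets = [item["letter"] for item in stream if item["is_target"]]
print(f"Targets {targets} are separated by {lag_ms} ms; "
      f"inside the typical blink window: {inside_blink}")
```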

In one study, two groups were compared: college students and a small sample of WWII veteran snipers who had been specially trained to shoot at moving targets. Even though older people generally show a more pronounced attentional blink, the veteran snipers performed better than the students: their blink was shorter and affected their results less, despite the fact that they hadn’t practiced shooting in years. Clearly, military training in shooting at moving targets has a long-lasting effect.

There have long been many accessible ways to train working memory, attention, and concentration: collections of exercises, mobile apps that ask you to remember word sequences, find the odd one out, and so on. These usually produce a “transfer effect,” but how long does it last? First, regularity matters; second, so does the specific skill being trained. Cognitive psychologist Daphne Bavelier studies, among other things, the impact of video games on cognition. In her research, participants played arcade games for an hour a day. At the start and end of the experiment, both they and a control group took a battery of standard lab attention tests unrelated to the games: visual search, spatial attention switching, rapid response to signals, tasks with conflicting stimuli, and so on.
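
As an illustration of what a task with conflicting stimuli might look like, here is a minimal sketch in the style of a flanker test, a common task of this kind; the exact tests used in the battery described above are not specified in the text, so treat this only as an assumed example.

```python
import random


def make_flanker_trial(congruent):
    """One trial with conflicting stimuli, flanker-style: respond to the
    direction of the central arrow while ignoring the flanking arrows.
    On incongruent trials the flankers point the other way, creating the
    response conflict such tests are designed to measure."""
    target = random.choice(["<", ">"])
    flanker = target if congruent else ("<" if target == ">" else ">")
    display = flanker * 2 + target + flanker * 2      # e.g. ">><>>"
    correct_response = "left" if target == "<" else "right"
    return display, correct_response


for congruent in (True, False):
    display, answer = make_flanker_trial(congruent)
    label = "congruent" if congruent else "incongruent"
    print(f"{label:12s} {display}   correct response: {answer}")
```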

After regular training, all measured indicators improved. Hour-long daily sessions had a positive effect on the ability to solve all these types of tasks. In general, any kind of attention training is unlikely to hurt. But a new, not yet well-studied area of cognitive training has emerged. Previously, especially for older adults, crosswords were a regular brain workout, and their positive effects are well-documented. Many studies have shown that people who regularly solve crosswords outperform those who don’t on several measures.

There is little research on the long-term effects of mobile “brain trainers,” mainly because the companies that make them are more interested in selling products than in studying their long-term effectiveness. It’s hard to train someone for a while, then take away the trainer and ask them to come back two years later. What if they don’t want to? What if they need ongoing training rather than a limited research period? Usually, participants come in, receive a small payment, and are never seen again. To study long-term effects, you need people willing to come to the lab regularly, perhaps for years. However, in the past decade many lab studies of cognitive training (for example, working memory training using traditional experimental methods) have been published for both young and older samples, and they show that training effects can be long-lasting. The retest intervals vary, but even among older participants cognitive functioning does not decline in the months after training.

For younger groups, transfer effects are also common (for example, working memory training later improves performance in other areas, like solving problems with distractions), while for older groups, transfer is more limited. But overall, the data are optimistic about the delayed effects of “brain trainers.” There haven’t yet been serious studies on the effectiveness of special apps and programs, though their creators claim 100% scientific proof and guaranteed success. Still, they’re unlikely to do harm, and transfer effects will likely appear—the only question is to what extent.

One journalist wrote a whole book (also published in Russian) about trying to raise his IQ by diligently following four different regimens for a year, including attention and working memory trainers, meditation, and more. His IQ didn’t change. Unfortunately, he didn’t measure his working memory, task-switching, sustained attention, or other characteristics that can be quantified. Nevertheless, Chinese researchers are actively studying these questions: a few years ago they published a study in which 1,300 participants completed a large number of attention tasks, and the researchers examined how performance on those tasks correlated with IQ and other measures. With access to large samples, Chinese scientists may soon publish work on how information technology, and especially mobile cognitive trainers, affects various cognitive skills.
