Social Engineering: The #1 Cyber Threat and How to Fight It

Many companies build their cybersecurity primarily around technical attack vectors. Such systems may appear mature and reliable, but they often remain vulnerable to one of the most dangerous threats: social engineering, which is based on manipulating human psychology. Statistics show that social engineering is used in 97% of targeted attacks today, often with little or no use of technical methods. Gartner ranks social engineering methods among the top information security threats, and some researchers claim that if social engineering leverages machine learning and artificial intelligence, it could become a threat on par with global warming or nuclear weapons. In this article, we’ll explore the scale of the problem and share our systematic approach to addressing it.

4 Myths About Social Engineering

  • Myth: Social engineering is a limited set of techniques. It’s commonly believed that social engineering is limited to phishing, dropping infected USB drives, social media scams, and phone fraud. In reality, it involves an almost endless combination of technical and non-technical tactics forming complex strategies.
  • Myth: Social engineering is just a part of cyberattacks. In fact, a cyberattack can be just one part of a broader strategy, with social engineering as the main component.
  • Myth: You can encounter social engineering by accident. Social engineering is always targeted. In simple cases, the focus is on an organization; in more advanced cases, it targets specific individuals.
  • Myth: Social engineering only works due to low security awareness or immature security systems. Actually, social engineering operates based on the attacker’s knowledge of the target’s security awareness and system maturity, which is why attacks always start with reconnaissance. This is often the most time-consuming and labor-intensive stage for a social engineer.

Social Engineering: The Limits of Possibility

Advanced social engineering involves sophisticated strategies carried out by professional teams of scammers and technical specialists. To understand their capabilities, it’s useful to study not only well-known cyber incidents involving human factor exploitation but also the biographies of the most notorious social engineers (including professional con artists) of the past century.

  • Victor Lustig sold the Eiffel Tower multiple times to unsuspecting buyers.
  • Joseph Weil, known as the “Yellow Kid,” made $400,000 by setting up a fake bank in a building previously occupied by a real one.
  • Frank Abagnale forged checks worth $2.5 million in 26 countries over five years; his story inspired the movie “Catch Me If You Can.”
  • The Badir brothers, blind from birth, executed major social engineering scams in Israel in the 1990s, using their heightened hearing and voice mimicry skills.
  • Kevin Mitnick, legendary hacker and cybersecurity expert, authored several books on social engineering and hacking, and was a master of phone-based social engineering techniques.

How does social engineering work? Each strategy involves several stages, and the more skilled the attacker, the less they rely on scripted sequences. The lifecycle and framework of such attacks are typically visualized in two diagrams (not reproduced here):

  • the Lifecycle of Social Engineering;
  • the Social Engineering Framework.

Each stage contains potentially endless combinations of non-technical measures: initiation techniques, initial processing, pretexts, information extraction, influence, deception, manipulation, and NLP (neuro-linguistic programming).

Attacks on the Subconscious

It’s crucial to understand the limits of protection against advanced social engineering attacks, especially those targeting the subconscious. If we consider social engineering one of humanity’s top three threats, the main concern is the “hacking” of the human subconscious: attackers influence not the conscious mind but the subconscious. Some of the social engineers mentioned above may not even have realized how often they were acting on their victims’ subconscious.

Our cognitive processes and computational power are concentrated in the subconscious (by various estimates, 95–99.99%), which makes the problem vast. While the conscious mind can block attempts to influence it, the subconscious does not resist; this limits the effectiveness of security awareness, which operates mainly at the conscious level. The most advanced social engineering techniques are built on activation mechanisms that lead the victim’s subconscious to the decision the attacker wants.

Everyday examples abound: everyone experiences subconscious influence at least once a week, for instance while shopping at a supermarket. Product placement, packaging shapes, and imagery are all techniques marketers use to nudge buyers. A garishly colored package with a sharp-edged logo may itself repel the subconscious, but it draws the eye to the shelf, where the shopper then notices the more appealing products placed next to it. These are subconscious reactions, which I call “brain vulnerabilities,” though they are normal, evolutionarily developed responses. It is virtually impossible to consciously anticipate all of them in modern life.

Interestingly, the number of potential “vulnerabilities” in the human brain is orders of magnitude higher than in any advanced software. This is a rough estimate based on the number of possible brain states and memory size compared to average computer software. It’s safe to say that protecting the mind and subconscious is far behind traditional cybersecurity.

Searching for a Solution

Unable to find a ready-made, systematic answer to combating the #1 information security threat, we conducted our own research to develop a systematic approach against social engineering. We started with best practices: the MITRE ATT&CK matrices, NIST, and SANS standards. We focused on detection and incident response methods, and among preventive measures, we emphasized security awareness.

Benefits and Challenges of MITRE Matrices

MITRE matrices break down attacker tactics into numerous techniques, categorized by attack stages. This allows for optimal detection scenarios tailored to specific threat models.

What are the challenges with social engineering? In the PRE-ATT&CK matrix, about 40% of techniques relate to social engineering, but it focuses on attack preparation stages, where it’s nearly impossible to predict or detect intent. HUMINT (Human Intelligence) techniques exist, but these are more relevant to intelligence agencies. Darknet and hacker resource analysis can help, but these approaches are unreliable and often random.

In the main MITRE matrices, such as Enterprise ATT&CK, we deal with technical attack vectors (social engineering appears in Initial Access, e.g., baiting attacks with removable media). Skilled attackers will use social engineering to sidestep the detection techniques they know about until their risk of being detected is minimal.

Key Takeaways for Detection

The MITRE matrices focus on the compromise of specific asset types (operating systems) in targeted attacks, so detection scenarios beyond MITRE are needed to cover other infrastructure threats. Considering these detection challenges, our approach must:

  1. Prevent or minimize the chance of bypassing technical threat detection measures.
  2. Maximize the implementation of direct and indirect detection scenarios for both technical and non-technical social engineering techniques.
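As a sketch of the second point, one simple way to track scenario coverage is to map implemented detection scenarios onto the social-engineering-related ATT&CK technique IDs. The technique IDs below are real ATT&CK identifiers; the scenario names and coverage data are invented for illustration.

```python
# Hypothetical sketch: tracking how well our detection scenarios cover
# social-engineering-related ATT&CK techniques. The technique IDs are real
# ATT&CK identifiers; the scenario names and coverage data are invented.
SE_TECHNIQUES = {
    "T1566": "Phishing",
    "T1091": "Replication Through Removable Media",
    "T1598": "Phishing for Information",
    "T1656": "Impersonation",
}

# Implemented detection scenarios keyed by technique ID (illustrative).
implemented = {
    "T1566": ["suspicious-attachment", "lookalike-sender-domain"],
    "T1091": ["process-launch-from-removable-media"],
}

def coverage_report(techniques, scenarios):
    """Split technique IDs into covered ones and gaps."""
    covered = sorted(t for t in techniques if scenarios.get(t))
    gaps = sorted(t for t in techniques if not scenarios.get(t))
    return covered, gaps

covered, gaps = coverage_report(SE_TECHNIQUES, implemented)
print("covered:", covered)  # ['T1091', 'T1566']
print("gaps:", gaps)        # ['T1598', 'T1656']
```

The gaps list then feeds directly into the scenario backlog for the next maturity cycle.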

How to Improve Security Awareness

Security awareness remains the main preventive technique against social engineering. The SANS Security Awareness Maturity Model is a good benchmark; at least level three should be targeted. Anything below that means the infrastructure and information systems are fully vulnerable, regardless of the organization’s cybersecurity measures.

Our Approach

We based our approach on a three-level maturity model for monitoring and incident response centers (aligned with the five CMM levels from Carnegie Mellon University): ad-hoc (emerging), maturing, and strategic. In our experience, most security systems in Russia and abroad are at the first level. Therefore, we usually start from there, methodically increasing maturity across all components.

Know Your Attackers

We mapped attacker logic and motives to the SOC/CSIRT maturity matrix. Using non-technical methods carries a high risk of de-anonymization for attackers (e.g., being caught on surveillance cameras during physical intrusions). Therefore, attackers often prefer technical vectors. However, breaking into mature systems purely by technical means is difficult and expensive (zero-day exploits can cost hundreds of thousands or even millions of dollars). Attackers may accept higher risks and use social engineering to simplify and cheapen the attack. Our goal is to “ground” the attacker at the technical level or at least prevent non-technical techniques, increasing the chances of detection. How? By raising security awareness.

Each organization has its own typical attacker profile, so another key component of our approach is risk profiling potential attackers.

A Systematic Approach Against Social Engineering

Our research resulted in a methodical approach: systematically increasing the maturity of security awareness, monitoring, and incident response processes, and implementing specialized detection scenarios for social engineering techniques.

How do we do this? For each client, we study the attacker profile. Based on this, we develop ways to improve detection coverage by increasing data collection points, expanding the scenario set (number of identifiable threats), and enriching scenarios with context. Context is divided into three areas: users, assets, and protected data. All this is done in parallel with improving response capabilities and security awareness.
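The enrichment step above can be sketched as follows. This is a hypothetical illustration, not our production logic: all field names, context tables, and the severity rule are assumptions.

```python
# Illustrative sketch of enriching a raw event with the three context areas
# named above: users, assets, and protected data. All field names, context
# tables, and the severity rule are assumptions.
USER_CTX = {"j.doe": {"dept": "finance", "vip": True}}
ASSET_CTX = {"wks-042": {"criticality": "high", "os": "windows"}}
DATA_CTX = {"wks-042": ["payment-orders"]}  # protected data present on the asset

def enrich(event: dict) -> dict:
    """Attach user, asset, and protected-data context to a raw event."""
    enriched = dict(event)
    enriched["user_ctx"] = USER_CTX.get(event.get("user"), {})
    enriched["asset_ctx"] = ASSET_CTX.get(event.get("host"), {})
    enriched["data_ctx"] = DATA_CTX.get(event.get("host"), [])
    return enriched

e = enrich({"user": "j.doe", "host": "wks-042", "action": "usb_mount"})
# A correlation rule can now raise severity for VIP users on critical assets:
severity = ("high" if e["user_ctx"].get("vip")
            and e["asset_ctx"].get("criticality") == "high" else "normal")
print(severity)  # high
```

The point is that the same raw event produces different alert priorities depending on who acted, on which asset, and near which protected data.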

Example: Event Sources

At the first stage, we mainly collect static data and, to a lesser extent, behavioral data. Some behavioral mechanisms should be added immediately to catch attackers who, in complex targeted attacks, try to bypass every detection technique and reach the final stages of compromise; a DLP system, for example, can serve as a good compensating control. However, many behavioral mechanisms should be deferred to higher maturity levels to avoid excessive false positives, which would reduce overall effectiveness at lower maturity levels. As maturity increases, we expand the set of data sources: system events, real-time security events, and signature-based and behavioral data.
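A toy example (with invented numbers) of why behavioral mechanisms are deferred: a statistical baseline needs enough history before its alerts mean anything, otherwise it either stays silent or floods the analysts with false positives.

```python
# Toy illustration (invented numbers) of why behavioral mechanisms are
# deferred to higher maturity levels: a statistical baseline needs enough
# history before its alerts mean anything.
from statistics import mean, stdev

def is_anomalous(history, value, k=3.0):
    """Flag value if it exceeds mean + k * stdev of the history."""
    if len(history) < 2:
        return False  # too little data to build a baseline
    return value > mean(history) + k * stdev(history)

# e.g., daily count of removable-media events on one host
history = [10, 12, 11, 13, 12, 11]
print(is_anomalous(history, 40))  # True: far above the baseline
print(is_anomalous(history, 14))  # False: within normal variation
```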

Example: Asset Context

Initially, we have a basic set of asset data, then we seek additional information to expand it, aiming for full coverage. Why not include everything at once? Without corresponding process development, this would lead to excessive false positives and resource waste, ultimately reducing overall effectiveness.

Monitoring process maturity increases cyclically: we study the attacker and threat surface, build a scenario base, enrich it with context, write correlation rules, and repeat the cycle.

Focusing on Social Engineering Markers

An important aspect of our approach is an additional focus on social engineering markers when developing scenario logic. These can be direct scenarios for technical social engineering vectors or indirect ones for detecting signs of non-technical social engineering. Direct examples include phishing, watering hole attacks, domain typosquatting, whaling (targeted attacks on executives), baiting (infected USB drives), piggybacking (unauthorized entry with an employee), and SMiShing (malicious SMS links). Indirect markers include various DLP, UBA, and TBA scenarios, which help detect non-technical vectors.
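One of the direct markers above, domain typosquatting, can be approximated with a simple edit-distance check against the organization's own domains. The domain list and threshold here are assumptions; real tooling would also cover homoglyphs and internationalized domain names.

```python
# A minimal sketch of one direct marker, domain typosquatting: flag sender
# domains within a small edit distance of a protected domain. The domain
# list and threshold are assumptions.
def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,               # deletion
                           cur[j - 1] + 1,            # insertion
                           prev[j - 1] + (ca != cb))) # substitution
        prev = cur
    return prev[-1]

PROTECTED = ["jet.example"]  # hypothetical corporate domain

def is_lookalike(domain: str, max_dist: int = 2) -> bool:
    """True for near-misses of a protected domain, but not exact matches."""
    return any(0 < edit_distance(domain, p) <= max_dist for p in PROTECTED)

print(is_lookalike("jel.example"))  # True: one substitution away
print(is_lookalike("jet.example"))  # False: exact match
```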

For example, with dropped USB drives, you can use existing infrastructure capabilities: enable Windows auditing and monitor attempts to launch processes from removable media (Windows event ID 4663, object access, and ID 4688, process creation). The same applies to Linux systems. Whenever possible, this should be done through system auditing to avoid increasing monitoring costs; specialized solutions are a last resort.
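A minimal sketch of that removable-media scenario, assuming audit events have already been collected (e.g., via Windows Event Forwarding into a SIEM) and normalized into dictionaries. `NewProcessName` is the real 4688 field; the overall event structure and the drive letters are simplifying assumptions.

```python
# Hedged sketch of the removable-media scenario, assuming audit events were
# already collected and normalized into dictionaries. "NewProcessName" is the
# real 4688 field; the event structure and drive letters are simplifications.
REMOVABLE_PREFIXES = ("D:\\", "E:\\", "F:\\")  # assumed removable drive letters

def suspicious_usb_launches(events):
    """Return 4688 (process creation) events whose image sits on removable media."""
    return [
        ev for ev in events
        if ev.get("EventID") == 4688
        and ev.get("NewProcessName", "").upper().startswith(REMOVABLE_PREFIXES)
    ]

events = [
    {"EventID": 4688, "NewProcessName": "E:\\payload.exe", "Host": "wks-042"},
    {"EventID": 4688, "NewProcessName": "C:\\Windows\\notepad.exe"},
    {"EventID": 4663, "ObjectName": "E:\\docs\\report.docx"},  # object access, handled separately
]
hits = suspicious_usb_launches(events)
print(hits)  # only the E:\payload.exe launch
```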

Understanding social engineering markers is important not only for monitoring but also for response, post-incident analysis, and forensics. It’s also beneficial to involve a security specialist focused on social engineering. In our experience at Jet CSIRT, collaboration between blue teams (defenders) and red teams (ethical hackers) significantly increases the security team’s chances of countering attackers who exploit the human factor.

Author: Alexey Malnev, Head of the Jet CSIRT Security Monitoring and Incident Response Center
