
Unintentional Insider Threat and Social Engineering

David Mundie

Social engineering involves manipulating individuals into unwittingly performing actions that cause harm or increase the probability of future harm; we refer to the resulting risk as "unintentional insider threat." This blog post highlights recent research that adds to the body of knowledge about the factors that lead to unintentional insider threat (UIT) and about how organizations in industry and government can protect themselves.

This research is part of an ongoing body of work on social engineering and UIT conducted by the CERT Insider Threat Center at the Carnegie Mellon University Software Engineering Institute.

UIT is becoming increasingly common. For example, about a year ago, spear phishers from China infiltrated the New York Times website in hopes of gaining access to names and sources that Times reporters had used in a story. A year earlier, Google pulled more than 22 malicious Android apps from the market after they were found to be infected with malware. This year, security blogger Brian Krebs reported that "The breach at Target Corp. that exposed credit card and personal data on more than 110 million consumers appears to have begun with a malware-laced email phishing attack sent to employees at an HVAC firm that did business with the nationwide retailer, according to sources close to the investigation." The Target breach spear phishing attack is an example of social engineering and illustrates how UIT can cause harm to an organization.

Foundations of Our Work

Insider threat remains a major concern among computer and organizational security professionals, more than 40 percent of whom report that their greatest concern is employees accidentally jeopardizing security through data leaks or similar errors. This finding led to our initial research into UIT and the publication of the report Unintentional Insider Threats: A Foundational Study. In that report, which seeks to understand the causes of and contributing factors in UITs, we developed the following operational definition:

An unintentional insider threat is (1) a current or former employee, contractor, or business partner (2) who has or had authorized access to an organization's network, system, or data and who, (3) through action or inaction without malicious intent, (4) unwittingly causes harm or substantially increases the probability of future serious harm to the confidentiality, integrity, or availability of the organization's information or information systems.

As the examples above illustrate, the impact of UIT can be devastating, even though it is typically the result of actions taken by a non-malicious insider. Our initial work in this field led us to conduct a second phase of research that took a deeper dive into social engineering, specifically the psychological aspects of social engineering exploits.

While technical solutions may be useful at the edges, at its core UIT is a human problem that requires human solutions. Unfortunately, organizations are often loath to report insider incidents out of fear that the news could damage their reputation or value, so only a very limited amount of information is publicly available, largely through lawsuit records. We therefore also examined news articles, journal publications, and other sources, including blogs, to compile information and identify factors that contribute to UIT and social engineering.

Through our analysis, we have compiled information on 28 cases that is now housed in our UIT social engineering database.

Contributing Factors in Social Engineering Vulnerability

In the course of our research, we identified several factors that made individuals more susceptible to attack. Although our sample did not allow us to draw any conclusions on demographic factors, such as gender or age, we were able to identify several organizational and human factors. The organizational factors that we identified in our report are as follows:

  • Security systems, policies, and practices. Many of the cases that we examined provided insight into organizational policies and procedures. Some cases indicated that the victims violated those policies, but most incident summaries did not provide sufficient information to determine whether these factors were involved.
  • Management and management systems. Many of the cases reveal that simple authentication credentials provide attackers with access to internal emails, company data, and entire computer networks. In one case that we examined, an attacker gained direct network access from a username-password combination and did not need to place malware or execute any other indirect attack to cause damage. Organizations must regularly perform extensive security audits to determine how best to improve internal controls; they cannot rely on security established during initial installation of a system.
  • Job pressure. Certain industries, such as news services, place a premium on obtaining and distributing information as quickly as possible. Employees in these types of organizations may be more susceptible to outside influence from social engineering due to this pressure.


The human factors that we identified are as follows:

  • Attention. In at least one of the cases we examined, we identified fatigue as a contributing factor. In that case, a phishing message arrived late at night, and the individual responded before taking the time to analyze it. An attacker with knowledge of employees' work hours could time messages to exploit this kind of fatigue as part of an organized attack.
  • Knowledge and memory. Several cases that we examined indicated that even when employees have been trained, a large percentage will still respond to phishing attacks. It is therefore important that organizations offer regular refreshers or other means of keeping that knowledge current.
  • Reasoning and judgment. In some cases, an employee's safeguards were lowered, perhaps in response to the realistic nature of a phishing message and/or the pretext created through reverse social engineering (e.g., offers to organizations or employees to assist in preventing or addressing outside attacks, in solving bank account problems, or in supporting system operations).
  • Stress and anxiety. In one case, the victim knew that the organization and its customers were receiving phishing emails. This knowledge may have increased his desire to accept an offer of mitigation that appeared to be legitimate, but in actuality was just another phishing attack.

I would like to stress that we are not breaking new ground with this publication. Our intent was to add meaningful input to the ongoing discussion on how social engineering relates to the body of research on insider threat and what organizations, specifically federal agencies, can do to mitigate contributing factors. Social engineering is a key component of UIT in that many non-malicious insiders are susceptible to social engineering, and thus become a threat to their organizations.

An example of the impact of social engineering is the "Robin Sage" case, in which a cybersecurity analyst and "white hat hacker" contacted security specialists, military personnel, and staff at intelligence agencies and defense contractors through bogus accounts established on social networking sites such as Facebook, Twitter, and LinkedIn. The recipients of these communications ended up exposing far more information than their organizations or business partners would have wanted released in the public domain. Similar examples have been made public since the "Robin Sage" study.

Best Practices for Organizations

As we stated in our report, organizations face many challenges in countering UIT social engineering threats, including balancing operational goals with security goals to remain competitive. To stay ahead, or at least keep up with phishers and spear phishers, we suggest the following best practices based on our analysis:

  • Training. Organizations must continue to develop and deploy effective training and awareness programs so that staff members are aware of social engineering scams and can identify deceptive practices and phishing cues. Training plans should also teach effective coping and incident management behaviors to respond to social engineering.
  • Minimize stress. When employees are stressed and working fast, they tend to be more susceptible to social engineering attempts. Organizational leaders need to examine whether they are creating a stressful environment or one that fosters a natural workflow. For example, one aspect of a plan to minimize stress could involve allocating time for employees to fulfill information security compliance requirements.
  • Encourage employees to monitor and limit information posted on networking sites. For example, LinkedIn members often post details about their career history, including past cities where they have lived and worked. Phishers and spear phishers often contact individuals based on the information posted on such sites; they advertise false jobs and ask recipients to send a writing sample to build a sense of trust.

A person seeking a job or a networking opportunity should be trained to avoid posting unnecessary details on social networking sites. Moreover, job seekers should not operate in a vacuum; in particular, they should ask a co-worker or friend to review an email inquiry and assess whether it appears legitimate.

One technique for detecting unintended disclosure of information on social networking sites is to put a piece of false information on each social media site the individual uses. For example, a user could list an alternate city or alternate dates of employment on separate sites, so that a social engineering attempt based on information from that site can be detected easily. If someone contacts the individual referencing the false information, the individual would know that this is a social engineering attempt, rather than a legitimate contact.
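To make the canary technique concrete, the following is a minimal sketch of how an individual or security team might track the false detail planted on each site and flag an inbound message that references one of them. This sketch is ours, not part of the report, and the site names and canary values are hypothetical examples.

```python
# Minimal sketch: trace which social networking site leaked information
# by planting a unique, false "canary" detail on each site.
# Site names and canary values below are hypothetical examples.

CANARIES = {
    "linkedin": "worked in tulsa from 2006 to 2008",    # false city/date pair
    "facebook": "previously employed at acme widgets",  # false employer
    "twitter":  "graduated from northfield college",    # false school
}

def check_message(message: str) -> list[str]:
    """Return the sites whose planted canary details appear in a message."""
    text = message.lower()
    return [site for site, canary in CANARIES.items() if canary in text]

if __name__ == "__main__":
    inbound = ("Hi! I saw you worked in Tulsa from 2006 to 2008 and "
               "have a great job opportunity for you.")
    leaked = check_message(inbound)
    if leaked:
        print(f"Likely social engineering attempt; detail traced to: {leaked}")
    else:
        print("No planted details referenced.")
```

If a recruiter or "colleague" mentions the false Tulsa employment dates, the contact is almost certainly working from the LinkedIn profile rather than from genuine knowledge of the individual.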

Many of the best practices listed above are similar to those that our team recommends for intentional insider threat: training to heighten awareness and reduce human error, management practices to reduce the likelihood of human error, email safeguards such as anti-phishing, anti-malware, and antivirus protection, data encryption on storage devices, password protection, wireless and Bluetooth safeguards, remote memory wipe for lost equipment, and attention to what is posted on social media sites. While not all of the best practices listed above have been validated in our report, they are strategies that we have found to be successful.
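As one illustration of the kind of email safeguard mentioned above, the sketch below flags a common phishing cue: a link whose visible text names one domain while the underlying href points to another. This is our own illustrative example, not a technique from the report, and the sample email HTML is fabricated for demonstration.

```python
# Illustrative sketch: flag links whose visible text names one domain
# but whose href points somewhere else -- a common phishing cue.
# The sample email HTML below is fabricated for demonstration.
import re
from html.parser import HTMLParser
from urllib.parse import urlparse

class LinkExtractor(HTMLParser):
    """Collect (href, visible_text) pairs from anchor tags."""
    def __init__(self):
        super().__init__()
        self.links, self._href, self._text = [], None, []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href", "")
            self._text = []
    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)
    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            self.links.append((self._href, "".join(self._text)))
            self._href = None

def suspicious_links(html: str) -> list[tuple[str, str]]:
    """Return (href, text) pairs where the visible text and href domains disagree."""
    parser = LinkExtractor()
    parser.feed(html)
    flagged = []
    for href, text in parser.links:
        href_domain = urlparse(href).netloc.lower()
        text_domains = re.findall(r"[\w.-]+\.[a-z]{2,}", text.lower())
        # Flag when the visible text names a domain that the href does not match.
        if text_domains and not any(href_domain.endswith(d) for d in text_domains):
            flagged.append((href, text))
    return flagged

if __name__ == "__main__":
    email_html = '<a href="http://login.example-phish.net">www.yourbank.com</a>'
    print(suspicious_links(email_html))  # flags the mismatched link
```

A check like this is only one cue among many, and it complements, rather than replaces, the training and awareness practices described above.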

Looking Ahead

Our research on UIT to date has been sponsored by the Department of Homeland Security (DHS). In the next phase of our work, we plan to examine UIT in the context of the 14 sectors of the economy identified by DHS. For example, we will examine whether phishing attacks differ based on the sector of the economy in which they are executed.

One challenge that we continue to face is the lack of verifiable information regarding social engineering and UIT. It would be ideal if we could set up an information sharing system where organizations could share information about unintentional insider threats without feeling as if their security or reputation were being compromised.

As we stated earlier, socially engineered attacks that result in UIT are very much a human problem. While technical solutions may be useful, further research is needed to identify and mitigate the organizational and human factors of UIT social engineering. We welcome your feedback on our work. Please leave feedback in the comments section below.

If you have experienced a UIT, please let CERT know (also by leaving feedback in the comments section). We are looking to increase the number of cases in our database and greatly appreciate any help we receive. All your information will be kept strictly confidential.

Additional Resources

To read the SEI technical report, Unintentional Insider Threats: Social Engineering, please visit
https://resources.sei.cmu.edu/library/asset-view.cfm?assetID=77455

To read the SEI technical report, Unintentional Insider Threats: A Foundational Study, please visit https://resources.sei.cmu.edu/library/asset-view.cfm?assetid=58744
