
Highlights from the 7th Annual National Insider Threat Center (NITC) Symposium, Day One

Daniel Costa and Sarah Miller
PUBLISHED IN
Insider Threat

This year's seventh annual National Insider Threat Center (NITC) Symposium, "From Mitigating Insider Threats to Managing Insider Risk," focuses on proactive approaches to reducing the impact and likelihood of insider incidents within organizations. The two-day symposium, which held its first session on September 10 and continues on September 24, gathers recognized leaders in insider threat research, as well as leading-edge insider threat program practitioners in both government and industry, to discuss the latest challenges and best practices in insider risk management. In this blog post, we recap the first day's presentations and Q&A sessions, answer some of the questions, and look ahead to the second and final day of the symposium.

Keynote Presentation: From Mitigating Insider Threats to Managing Insider Risk, Q&A

Sarah Miller, an insider threat researcher in the CERT National Insider Threat Center (NITC), and I (Dan Costa, deputy director of the CERT NITC) opened the symposium with a keynote presentation on the challenges and benefits of expanding an insider threat program's mission from mitigating insider threats to actively managing insider risk. Following the presentation, symposium attendees had some questions on particular concepts and practices in insider risk. We've provided some answers below.

Q: What might it mean to get "risk down to zero" for insider threat? Is there a difference between enterprise risk management definitions/assumptions and those of insider threat?

You cannot bring risk down to zero for insider threats because there is a human in the loop. The nature of insider threats is authorized access, so there is always going to be a potential for misuse of assets. The best we can do is bring the risk down to an acceptable level for specific scenarios. Not all insiders are created equal in terms of what they can do (i.e., means), what they have access to (i.e., opportunity), or their motivation to do harm to the organization. For a given set of threat actors, and based on current defenses, what do we have in place that can help us prevent, detect, and respond to insider threats? How likely is it that someone can bypass those defenses, circumvent or evade detection, and cause harm to our organization? That is a real challenge for an organization: making the measurements that allow it to say, with some relative degree of certainty, how likely it is that one of its insider threat actors will exploit a vulnerability in the organization. Insider threat programs can use methodologies such as red teaming, tabletop exercises, and penetration testing to inform threat likelihood measurements based on current defenses.

Within different threat actor classes, means and motive will differ, especially between insider and external actors. The difference comes down to the threat actor's knowledge and understanding of what your critical assets are and how they are protected. When we think about internal and external threat actors, we can use the measurements of likelihood and impact. Because these actors have different levels of access to your critical assets, and different understandings of what those assets are and who might value or benefit from them, the likelihood and impact of access misuse will be calculated differently for each. Insiders are in a position to make the most of a compromised asset because they know it well. Misuse may include degrading that asset's confidentiality, integrity, or availability.
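As a rough illustration of how likelihood and impact might be combined per threat-actor class, the sketch below scores and ranks actor classes with the classic qualitative model risk = likelihood × impact. The class names and all numeric values are hypothetical assumptions for illustration, not figures from the symposium.

```python
from dataclasses import dataclass

@dataclass
class ThreatActor:
    """Illustrative threat-actor class; all values below are assumed."""
    name: str
    likelihood: float  # estimated probability of access misuse (0-1)
    impact: float      # estimated loss if misuse occurs (arbitrary units)

    @property
    def risk(self) -> float:
        # Classic qualitative risk model: risk = likelihood x impact
        return self.likelihood * self.impact

# Fabricated example values -- not data from any real organization.
actors = [
    ThreatActor("malicious insider", likelihood=0.05, impact=200),
    ThreatActor("unintentional insider", likelihood=0.20, impact=30),
    ThreatActor("external actor", likelihood=0.10, impact=50),
]

# Rank actor classes by estimated risk to prioritize defenses.
for actor in sorted(actors, key=lambda a: a.risk, reverse=True):
    print(f"{actor.name}: risk score {actor.risk:.1f}")
```

In practice, the likelihood estimates would be informed by the red-teaming, tabletop, and penetration-testing exercises described above, and the impact estimates by the organization's critical-asset inventory.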

Unintentional insider threats do not carry the same intent to degrade an asset's confidentiality, integrity, or availability. While some evidence suggests that the prevalence of unintentional insider threats may be higher than that of malicious insider threats, the impact may not always be as high because of the lack of strategic misuse of the asset. Additionally, malicious actors often conceal their crimes, which can cause a delay in discovery and response to the incident. With unintentional insider threats, the lack of concealment may put organizations in a better position to contain and correct the misuse of an asset.

There is always some percentage of threats and events that you cannot be prepared for or anticipate. However, administrative controls like policies and training can help to mitigate the impact of those threats.

Q: How are programs considering the threats associated with conspiracy groups?

Organizations are considering this threat on two primary dimensions:

  1. How can any of these groups co-opt my employees into misusing their access to my assets to cause harm to my organization? If my company holds information that one of these groups might try to use to advance specific objectives, what are those objectives?
  2. What is the impact associated with one of my employees being co-opted by any of these groups?

Organizations are thinking about things like this from the perspective of, How can I help my workforce understand that their access to our critical data is a thing that is targeted by people outside of our organization, for a variety of different purposes (e.g., monetary or political)?

We can arm employees with an understanding of their role and responsibility as an employee of the organization: to protect their authorized access. This topic is a really good thing to work into insider threat awareness training. You want to shift away from training that is framed only as, Don't do bad stuff because we're going to catch you and throw you out of the organization. Instead, training can be framed as, Because you have access to our crown jewels, you may be targeted for that access. Here's how you can help us help you keep that access protected.

Q: Do we use a zero-trust model when quantitatively calculating insider risks, or if you trust a person, do we assume the risk is reduced?

Zero-trust models require authentication and authorization to be granted to a user before they can access applications and data within an organization. This can be an effective strategy for implementing the principle of least privilege, a key best practice for reducing insider risk. Zero-trust does not address the potential for insider misuse of authorized access, however. Misuse of authorized access is the nature of many insider incidents, which highlights the enduring challenge of managing insider risk: there will always be a need to grant individuals authorized access to the organization's critical assets, so there will always be a potential for that authorized access to be misused.

Q: Given that "lower incidence rate" could reflect program success or failure to detect insider incidents, there is a need to identify better ways to define and derive measures of effectiveness. What might measurements of effectiveness look like for different organizations?

Measures of effectiveness can, and should, change as an insider threat program matures. Nascent insider threat programs tend to measure their effectiveness in terms of the number of incidents or potential incidents that were prevented, detected, or responded to. But as the question notes, the controls a maturing program puts in place are designed to reduce the number of incidents over time.

If a maturing insider threat program continues to rely on the same measures of effectiveness, drops in these numbers may be interpreted by some senior leaders as a decrease in the insider threat program's effectiveness. To address this, maturing insider threat programs should adopt risk-based metrics that demonstrate the insider threat program's ability to reduce the impact and likelihood of insider incidents to acceptable levels for their organization's risk appetite.
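To make the contrast concrete, the sketch below reports a raw incident count alongside two risk-oriented measures: mean time to detect (MTTD) and estimated impact averted. The incident records are fabricated for illustration; real programs would draw these from case-management data and tune the metrics to their own risk appetite.

```python
from statistics import mean

# Fabricated incident records: (days_to_detect, estimated_impact_averted_usd)
quarterly_incidents = {
    "Q1": [(30, 10_000), (45, 5_000)],
    "Q2": [(12, 20_000)],
}

for quarter, incidents in quarterly_incidents.items():
    count = len(incidents)                  # count-based: drops as controls improve
    mttd = mean(d for d, _ in incidents)    # risk-based: mean time to detect
    averted = sum(a for _, a in incidents)  # risk-based: estimated impact averted
    print(f"{quarter}: incidents={count}, MTTD={mttd:.0f} days, averted=${averted:,}")
```

Here the count falls from Q1 to Q2, which a count-based view might read as declining program activity, while the falling MTTD and rising impact averted tell the opposite story.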

Q: How transparent should a program be on what is done (or not done) within the insider threat program? How do you maintain the privacy of the staff and protect the program's "secret stuff"?

Transparency about an insider threat program's operations will vary from organization to organization, based on a number of factors:

  • compliance requirements, which may be mandated by government or regulated by industry
  • organizational culture
  • communication plans regarding the insider threat program, which would be influenced by the aforementioned factors

At a minimum, programs should keep details of specific tools, techniques, and alerts on a need-to-know basis only. The program can protect its "secret stuff," such as tools and techniques, by requiring insider threat program staff to receive training on their roles and responsibilities regarding confidentiality and to sign non-disclosure agreements (NDAs) about program operations.

To protect the privacy of staff, organizations should incorporate privacy impact assessments into their processes, as detailed in the Common Sense Guide to Mitigating Insider Threats, Sixth Edition.

Panel Discussion: Managing Insider Risk During a Pandemic, Q&A

The CERT Division's Michael Theis moderated a panel discussion by industry experts on the challenges, successes, and lessons learned from operating an insider threat program during a pandemic.

Q: Are you seeing a shift in insider threat incidents for specific industry sectors that could be directly related to COVID-19? If so, how are these incidents being carried out?

While it may be too soon to tell for sure, the pandemic has led to a "tale of two insider threat programs": some are being effectively shuttered as organizations attempt to reduce operating costs, while others are being expanded and empowered to manage an overwhelmingly remote workforce. Anecdotally, it would seem that in the midst of the change to remote work, organizations were seeing a decrease in insider threat incidents. However, as working remotely becomes the new normal, incidents may be rising again. More details on how programs have been affected are available in the full panel video.

Q: When departing employees do not return company-owned devices during the pandemic, how do we ensure that the information on those devices does not leak or end up in the wrong hands?

If organization-owned devices in the possession of departing employees lack technical controls to prevent leakages (e.g., account termination and lockout, full-disk encryption, etc.), then organizations can consider the administrative and legal mitigations at their disposal. Training for departing employees that reminds them of their responsibilities to safeguard the organization's data and their obligations to return devices may help. Additionally, requiring the employees to reaffirm their acceptance of any NDAs and intellectual property (IP) policies is recommended. Of course, these procedures are not limited to pandemic conditions and can be implemented for any departing employees. Ideally, both technical and administrative controls will be in place to prevent misuse of company-owned data and devices after employee departure.

Looking Ahead: NITC Symposium, Day Two

The second half of the symposium will be held on Thursday, September 24, from 1 to 3 p.m. Eastern Time. We encourage anyone in the insider threat community to register for the event, which will feature a keynote by Dawn Cappelli, global security and chief information security officer of Rockwell Automation, called "Strategies for Maturing an Insider Risk Program." We will also have a panel discussion, "Risk and Resilience Management for the Counter-Insider-Threat Mission," moderated by Dan Costa, with government and industry panelists Scott Breor, Charles Margiotta, JT Mendoza, and Brad Millick.

If you have any questions about the topics discussed in the symposium, feel free to contact us at insider-threat-feedback@cert.org. During the symposium, we will collect any unanswered questions from the Q&A session and address them via an upcoming blog post.
