In the first post in this two-part series, we covered five unique challenges that impact insider threat programs and hub analysts. The challenges included lack of adequate training, competing interests, acquiring data, analyzing data, and handling false positives.
As you read the new challenges introduced in this post, ask yourself the same questions: 1) How many of these challenges are ones you are facing today? 2) Are there challenges in this list that lead to an "aha" moment? 3) Are there challenges you are facing that did not make the list? 4) Do you need assistance with combating any of these challenges? Let us know your answers and thoughts via email at firstname.lastname@example.org.
This post was also authored by Andrew Hoover.
In Cybersecurity Architecture, Part 1: Cyber Resilience and Critical Service, we talked about the importance of identifying and prioritizing critical or high-value services and the assets and data that support them. In this post, we'll introduce our approach for reviewing the security of the architecture of information systems that deliver or support these services. We'll also describe our review's first areas of focus: System Boundary and Boundary Protection.
The purpose of this two-part blog series is to discuss five challenges that often plague insider threat programs and, more specifically, the analysts working in insider threat hubs. I am in a unique position to discuss this area because I have many years of experience working directly with operational insider threat programs of varying maturity levels. Thus, I have a front-row vantage point from which to understand the challenges that analysts face on a daily basis. In this blog post, I will discuss some of the key challenges facing many organizations, along with associated recommendations (i.e., quick wins).
The National Institute of Standards and Technology (NIST) recently released version 1.1 of its Cybersecurity Framework (CSF). Organizations around the world--including the federal civilian government, by mandate--use the CSF to guide key cybersecurity activities. However, the framework's 108 subcategories can feel daunting. This blog post describes the Software Engineering Institute's recent efforts to group the 108 subcategories into 15 clusters of related activities, making the CSF more approachable for typical organizations. The post also gives example scenarios of how organizations might use the CSF Activity Clusters to facilitate more effective cybersecurity decision making.
In this blog series, I review topics related to deploying a text analytics capability for insider threat mitigation. In this segment, I continue the conversation by disambiguating terminology related to text analysis, summarizing methodological approaches for developing text analytics tools, and justifying how this capability can supplement an existing capability to monitor insider threat risk. In my next post, Acquiring or Deploying a Text Analytics Solution, I will discuss how organizations can think through the process of procuring or developing a custom in-house text analytics solution.
In this blog series, I cover topics related to deploying a text analytics capability for insider threat mitigation. A text analytics capability is a means of measuring the risk employees may pose to an organization by monitoring the sentiment and emotion expressed in their communications.
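To make the idea concrete, here is a minimal, purely illustrative sketch of lexicon-based sentiment scoring over messages. The word lists, threshold, and function names are assumptions for illustration only; real text analytics tools use far richer models than simple word matching.

```python
# Illustrative lexicon-based sentiment scoring (not a real insider threat tool).
# The word lists and threshold below are hypothetical examples.

NEGATIVE = {"angry", "hate", "unfair", "quit", "revenge", "betrayed"}
POSITIVE = {"thanks", "great", "appreciate", "happy", "glad"}

def sentiment_score(message: str) -> int:
    """Return a crude sentiment score: positive word hits minus negative word hits."""
    words = message.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def flag_messages(messages, threshold=-1):
    """Return messages whose sentiment score falls at or below the threshold."""
    return [m for m in messages if sentiment_score(m) <= threshold]

msgs = [
    "thanks for the great review",
    "I hate this unfair process and want revenge",
]
print(flag_messages(msgs))  # only the strongly negative message is flagged
```

In practice, a deployed capability would replace the toy lexicon with trained sentiment and emotion models and would feed flagged results into analyst review rather than acting on them automatically.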
According to the Verizon 2018 Data Breach Investigations Report, email was an attack vector in 96% of incidents and breaches that involved social actions (manipulation of people as a method of compromise). The report also says an average of 4% of people will fall for any given phish, and the more phishing emails they have clicked, the more likely they are to click again. The mantra of "more user training" may be helping with the phishing problem, but it isn't solving it. In this blog post, I will cover four technical methods for improving an organization's phishing defense. These methods are vendor- and tool-agnostic, don't require a large security team, and are universally applicable for small and large organizations alike.
Insider threat programs can better implement controls and detect malicious insiders when they communicate indicators of insider threat consistently and in a commonly accepted language. The Insider Threat Indicator Ontology is intended to serve as a standardized expression of potential indicators of malicious insider activity.