Unintentional Insider Threats: The Non-Malicious Within
Hello, I'm David Mundie, a CERT cybersecurity researcher. This post is about the research CERT is doing on the unintentional insider threat. Organizations often suffer from individuals who have no ill will or malicious motivation, but whose actions cause harm. The CERT Insider Threat Center conducts work, sponsored by the Department of Homeland Security's Federal Network Resiliency Division, that examines such cases. We call this category of individuals the "unintentional insider threat" (UIT).
This research includes
- creating a definition of UIT
- collecting and reviewing over 60 cases of UIT
- analyzing contributing factors and observables in those cases
- recommending preliminary ways to mitigate unintentional insider threats
For the purposes of our research, the team built a working definition of an unintentional insider threat: a current or former employee, contractor, or business partner who has or had authorized access to an organization's network, system, or data and who, through action or inaction without malicious intent, causes harm or substantially increases the probability of future serious harm to the confidentiality, integrity, or availability of the organization's information or information systems.
Our preliminary study of the UIT problem identified a number of contributing factors and mitigation strategies. The malicious insider threat and the UIT share many contributing factors relating to broad areas such as security practice, organizational processes, management practices, and security culture. However, there are significant differences: human error plays a major role in UIT. Countermeasures and mitigations to decrease UIT incidents should include strategies for
- improving and maintaining productive work environments
- fostering healthy security cultures
- improving usability to decrease the likelihood of the human errors that lead to UIT incidents
Most UIT cases we analyzed featured data release (i.e., information disclosure) by users through simple negligence and without malware or other external actions.
While training and awareness programs can help address the accidental disclosure of sensitive information caused by errors and lapses in judgment, there are limits to what training alone can accomplish. A comprehensive mitigation strategy should therefore also include new and more effective automated safeguards that provide "fail-safe" measures when human-factors and organizational systems fail to completely eliminate the human errors associated with risk perception and other cognitive and decision processes.
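To make the idea of an automated "fail-safe" safeguard concrete, here is a minimal sketch of an outbound-content check that blocks a release when it matches a sensitive-data pattern, so a simple lapse in judgment does not by itself cause disclosure. The pattern set and function names are illustrative assumptions, not part of the CERT research; a real deployment would rely on an organization's own data-classification rules and a dedicated data loss prevention (DLP) tool.

```python
import re

# Hypothetical patterns for illustration only; real rules would come from
# the organization's data-classification policy or a DLP product.
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),  # U.S. Social Security number
    "classification_marking": re.compile(r"\bCONFIDENTIAL\b", re.IGNORECASE),
}

def release_is_safe(text: str) -> tuple[bool, list[str]]:
    """Return (ok, findings): release is blocked if any pattern matches."""
    findings = [name for name, pat in SENSITIVE_PATTERNS.items()
                if pat.search(text)]
    return (not findings, findings)

ok, findings = release_is_safe("Report draft. Employee SSN: 123-45-6789.")
# Release is blocked here because the SSN pattern matched.
```

The point of the design is that the check runs regardless of the user's attention or intent: the human can still make the error, but the automated layer catches the most recognizable cases before information leaves the organization.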
We recommend future research on UITs that focuses on the underlying human and organizational factors, rather than only the technical problems, that contribute to UIT events, as well as on more effective mitigation strategies for these types of cases.
Can you identify additional factors that influence UIT? How can factors that influence UIT be mitigated? What future research into this space would be most useful? Please send your thoughts to us at firstname.lastname@example.org.