by Dan Klinedinst
As the world becomes increasingly interconnected through technology, information security vulnerabilities emerge from this deepening complexity. Unexpected interactions between hardware and software components can magnify the impact of a vulnerability. As technology continues its shift away from the PC-centric environment of the past toward a cloud-based, perpetually connected world, sensitive systems and networks are exposed in ways never before imagined.
The information security community must be prepared to address emerging systemic vulnerabilities. To help identify them, a team of researchers--Joel Land, Kyle O'Meara, and I--identified at-risk, emerging technologies by breaking down major technology trends over the next 10 years. This blog post, which is abstracted from our technical report on this work, highlights the findings of our research, which supports the Department of Homeland Security's United States Computer Emergency Readiness Team (US-CERT) in its vulnerability triage, outreach, and analysis work.
As organizations' critical assets have become digitized and access to information has increased, the nature and severity of threats have changed. Organizations' own personnel--insiders--now have greater ability than ever before to misuse their access to critical organizational assets. Insiders know where critical assets are, what is important, and what is valuable. Their organizations have given them authorized access to these assets and, with it, the means to compromise the confidentiality, availability, or integrity of data. As organizations rely on cyber systems to support critical missions, a malicious insider who is trying to harm an organization can do so by, for example, sabotaging a critical IT system or stealing intellectual property to benefit a new employer or a competitor. Government and industry organizations are responding to this change in the threat landscape and are increasingly aware of the escalating risks. CERT has been a widely acknowledged leader in insider threat research since it began investigating the problem in 2001. The CERT Guide to Insider Threat was inducted into the Palo Alto Networks Cybersecurity Canon in 2016, illustrating its value in helping organizations understand the risks that their own employees pose to critical assets. This blog post describes the challenge of insider threats, approaches to detection, and how machine learning-enabled software helps protect against this risk.
Could software save lives after a natural disaster? Meteorologists use sophisticated software-reliant systems to predict the likely pathways of severe and extreme weather events, such as hurricanes, tornadoes, and cyclones. Their forecasts can trigger evacuations that remove people from danger.
In this blog post, I explore key technology enablers that might pave the way toward an envisioned end-state capability: software that improves decision-making and response for disaster managers and for warfighters on a modern battlefield. I also discuss some technology deficits that we need to address along the way.
This post is also authored by Matt Sisk, the lead author of each of the tools detailed in this post (bulk query, autogeneration, and all regex).
The number of cyber incidents affecting federal agencies has continued to grow, increasing about 1,300 percent from fiscal year 2006 to fiscal year 2015, according to a September 2016 GAO report. For example, in 2015, agencies reported more than 77,000 incidents to US-CERT, up from 67,000 in 2014 and 61,000 in 2013. These incident reports come from a diverse community of federal agencies, and each may contain observations of problematic activity by a particular reporter. As a result, reports vary in content, context, and in the types of data they contain. Reports are stored in the form of 'tickets' that assign and track progress toward closure.
This blog post is the first in a two-part series on our work with US-CERT to discover and make better use of data in cyber incident tickets, which can be notoriously diverse. Specifically, this post focuses on work we have done to improve useful data extraction from cybersecurity incident reports.
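As a rough illustration of the kind of extraction this work involves, the sketch below pulls candidate indicators (IPv4 addresses and MD5 hashes) out of free-form ticket text. This is not the actual US-CERT tooling; the patterns and function name are simplified assumptions for illustration only, and production regexes must handle far more formats and evasions.

```python
import re

# Illustrative patterns only -- real extraction tools use far more robust regexes.
IPV4_RE = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")
MD5_RE = re.compile(r"\b[a-fA-F0-9]{32}\b")

def extract_indicators(ticket_text):
    """Pull candidate IP addresses and MD5 hashes from free-form ticket text."""
    return {
        "ips": sorted(set(IPV4_RE.findall(ticket_text))),
        "md5s": sorted(set(MD5_RE.findall(ticket_text))),
    }

ticket = ("Observed beaconing to 203.0.113.7; "
          "dropper hash d41d8cd98f00b204e9800998ecf8427e.")
print(extract_indicators(ticket))
# → {'ips': ['203.0.113.7'], 'md5s': ['d41d8cd98f00b204e9800998ecf8427e']}
```

Because ticket content varies so widely, a real pipeline would normalize matches (e.g., reject octets over 255, handle defanged indicators like "203[.]0[.]113[.]7") before storing them.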
The first blog entry in this series introduced the basic concepts of multicore processing and virtualization, highlighted their benefits, and outlined the challenges these technologies present. The second post addressed multicore processing, whereas the third post concentrated on virtualization via virtual machines. In this fourth post in the series, I define virtualization via containers, list its current trends, and examine its pros and cons, including its safety and security ramifications.
This post is the third in a series that focuses on multicore processing and virtualization, which are becoming ubiquitous in software development. The first blog entry in this series introduced the basic concepts of multicore processing and virtualization, highlighted their benefits, and outlined the challenges these technologies present. The second post addressed multicore processing. This third post concentrates on virtualization via virtual machines (VMs). Below I define the relevant concepts underlying virtualization via VMs, list its current trends, and examine its pros and cons.
As computers become more powerful and ubiquitous, software and software-based systems are increasingly relied on for business, governmental, and even personal tasks. While many of these devices and apps simply make our lives more convenient, some--known as critical systems--perform business- or life-preserving functions. As critical systems become more prevalent, securing them against accidental and malicious threats has become both more important and more difficult. In addition to classic safety problems, such as ensuring hardware reliability and protecting against natural phenomena, modern critical systems are so interconnected that security threats from malicious adversaries must also be considered. This blog post is adapted from a new paper that two colleagues (Eugene Vasserman and John Hatcliff, both at Kansas State University) and I wrote, which proposes a theoretical basis for simultaneously analyzing both the safety and security of a critical system.
As part of an ongoing effort to keep you informed about our latest work, this blog post summarizes some recently published SEI reports, podcasts, and webinars highlighting our work in coordinated vulnerability disclosure, scaling Agile methods, automated testing in Agile environments, ransomware, and Android app analysis. These publications highlight the latest work of SEI technologists in these areas. One SEI Special Report presents data related to DoD software projects and translates it into information that is frequently sought after across the DoD. This post includes a listing of each publication, its author(s), and links where the publications can be accessed on the SEI website.