Archive: 2018-01

IPv6 deployment is on the rise. Google reported that as of July 14, 2018, 23.94 percent of users accessed its site via IPv6, up 6.16 percentage points from the same date in 2017. Drafted in 1998 and ratified as an Internet Standard in July 2017, Internet Protocol version 6 (IPv6) is intended to replace IPv4 in assigning a unique identity to every device on the internet. Plans for IPv6 got underway after it was realized that IPv4's cap of 4.3 billion addresses would not be sufficient to cover the number of devices accessing the internet. This blog post is the first in a series aimed at encouraging IPv6 adoption, whether at the enterprise-wide level, the organizational level, or the individual, home-user level.

It's common for large-scale cyber-physical systems (CPS) projects to burn huge amounts of time and money with little to show for it. Because the minimum viable product (MVP) strategy, with its emphasis on speed and focus, stands in sharp contrast to the inflexible and ponderous product planning that has contributed to those fiascos, MVP has been touted as a useful corrective, and it has become fixed in the constellation of Agile jargon and practices. However, when I tried to work out how to scale MVP for large and critical CPS, I found more gaps than fit. This is the first of three blog posts highlighting an alternative strategy that I created, the Minimum Viable Capability (MVC), which scales the essential logic of MVP for CPS. MVC adapts the intent of the MVP strategy--to focus on rapidly developing and validating only essential features--to systems that are too large, too complex, or too critical for MVP.

In recent days, the VPNFilter malware has attracted attention, much of it in the wake of a May 25 public service announcement from the FBI, as well as a number of announcements from vendors and security companies. In this blog post, I examine the VPNFilter malware attack by analyzing the vulnerabilities at play, how they were exploited, and the impact on the Internet. I also outline recommendations for the next generation of manufacturers of small Internet of Things (IoT) devices, including the home routers that were the target of the VPNFilter malware. Because this post also emphasizes the prioritization of vulnerabilities that have significant or large-scale impact, I will recap recommendations made in the March 2017 blog post on the Mirai botnet.

DoD programs continue to experience cost overruns, and the Government Accountability Office (GAO) has cited inadequate cost estimation as one of the top problem areas. A recent SEI blog post by my fellow researcher Robert Stoddard, Why Does Software Cost So Much?, explored SEI work aimed at improving estimation and management of the costs of software-intensive systems. In this post, I provide an example of how causal learning might be used to identify the specific causal factors that are most responsible for escalating costs.

Runtime assurance (RA) has emerged as a promising technique for ensuring the safe behavior of autonomous systems (such as drones or self-driving vehicles) whose behavior cannot be fully determined at design time. As the Department of Defense (DoD) increasingly relies on complex, non-deterministic systems and machine learning techniques, assuring software correctness has become a major challenge, especially in uncertain and contested environments. This post highlights work by a team of SEI researchers to create tools and techniques that will ensure the safety of distributed cyber-physical systems.

Bugs and weaknesses in software are common: 84 percent of software breaches exploit vulnerabilities at the application layer. The prevalence of software-related problems is a key motivation for using application security testing (AST) tools. With a growing number of AST tools available, it can be confusing for information technology (IT) leaders, developers, and engineers to know which tools address which issues. This blog post, the first in a series on application security testing, will help you navigate the sea of offerings by categorizing the different types of AST tools available and providing guidance on how and when to use each class of tool.

As part of an ongoing effort to keep you informed about our latest work, this blog post summarizes some recently published SEI reports, podcasts, and presentations highlighting our work in deep learning, cyber intelligence, interruption costs, digital footprints on social networks, managing privacy and security, and network traffic analysis. These publications highlight the latest work of SEI technologists in these areas. This post includes a listing of each publication, author(s), and links where they can be accessed on the SEI website.

When the rate of change inside an institution becomes slower than the rate of change outside, the end is in sight. - Jack Welch

In a world of agile everything, agile concepts are being applied in areas well beyond software development. At the NDIA Agile in Government Summit held in Washington, D.C., in June, Dr. George Duchak, the Deputy Assistant Secretary of Defense for Cyber, Command & Control, Communications & Networks, and Business Systems, spoke about the importance of agility to organizational success in a volatile, uncertain, complex, and ambiguous world. Dr. Duchak told the crowd that agile software development can't stand alone, but must be woven into the fabric of an organization and become part of the way an organization's people, processes, systems, and data interact to deliver value. The business architecture must be constructed for agility.

I first wrote about agile strategic planning in my March 2012 blog post, Toward Agile Strategic Planning. In this post, I want to expand that discussion to look more closely at agile strategy, or short-cycle strategy development and execution, describe what it looks like when implemented, and examine how it supports organizational performance.

Part one of this series of blog posts on the collection and analysis of malware and the storage of malware-related data in enterprise systems reviewed practices for collecting malware, storing it, and storing data about it. This second post in the series discusses practices for preparing malware data for analysis and issues related to messaging between big-data framework components.

Citing the need to provide a technical advantage to the warfighter, the Department of Defense (DoD) has recently made the adoption of cloud computing technologies a priority. Infrastructure as code (IaC), the process and technology of managing and provisioning computers and networks (physical and/or virtual) through scripts, is a key enabler for efficient migration of legacy systems to the cloud. This blog post details research aimed at developing technology that automatically recovers a deployment model from a running system and creates the needed IaC artifacts from that model, helping software sustainment organizations recover the deployment baseline with minimal manual intervention and no specialized knowledge about the design of the deployed system.
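To make the idea of an IaC artifact concrete, the sketch below is a simplified illustration (not the project's actual tooling) of how a hypothetical recovered deployment model might be translated into an Ansible-style playbook; all host, package, and service names are placeholders.

```python
# Simplified illustration (not the project's tooling): translate a hypothetical
# recovered deployment model into an Ansible-style playbook. All host, package,
# and service names below are placeholders.
import yaml  # third-party: PyYAML

# Hypothetical model recovered from a running system: hosts, the packages
# installed on them, and the services they run.
recovered_model = {
    "app-server-01": {"packages": ["openjdk-11-jre", "tomcat9"], "services": ["tomcat9"]},
    "db-server-01": {"packages": ["postgresql-12"], "services": ["postgresql"]},
}

def model_to_playbook(model):
    """Build a list of Ansible-like plays from the recovered model."""
    plays = []
    for host, cfg in model.items():
        tasks = [{"name": f"Install packages on {host}",
                  "package": {"name": cfg["packages"], "state": "present"}}]
        tasks += [{"name": f"Enable service {svc}",
                   "service": {"name": svc, "state": "started", "enabled": True}}
                  for svc in cfg["services"]]
        plays.append({"hosts": host, "become": True, "tasks": tasks})
    return plays

if __name__ == "__main__":
    # Write the generated IaC artifact so it can be reviewed and version-controlled.
    with open("recovered-deployment.yml", "w") as f:
        yaml.safe_dump(model_to_playbook(recovered_model), f, sort_keys=False)
```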

The growth of big data has affected many fields, including malware analysis. Increased computational power and storage capacities have made it possible for big-data processing systems to handle the increased volume of data being collected. In addition to collecting more malware, analysts have developed new ways of analyzing and visualizing it. In this blog post--the first in a series on using a big-data framework for malware collection and analysis--I will review various options and tradeoffs for dealing with malware collection and storage at scale.

As part of an ongoing effort to keep you informed about our latest work, this blog post summarizes some recently published SEI reports, podcasts, and presentations highlighting our work in virtual integration, blockchain programming, Agile DevOps, software innovations, cybersecurity engineering and software assurance, threat modeling, and blacklist ecosystem analysis. These publications highlight the latest work of SEI technologists in these areas. This post includes a listing of each publication, author(s), and links where they can be accessed on the SEI website.

Well-known asymmetries pit cyber criminals with access to cheap, easy-to-use tools against government and industry organizations that must spend more and more to keep information and assets safe. To help reverse this imbalance, the SEI is conducting a study, sponsored by the U.S. Office of the Director of National Intelligence, of cyber intelligence best practices, common challenges, and future technologies. Through interviews with U.S.-based organizations from a variety of sectors, we are identifying tools, practices, and resources that help those organizations make informed decisions that protect their information and assets. This blog post describes preliminary findings from the interviews we have conducted so far. Our final report, which will include an anonymized look at the cyber intelligence practices of all the organizations we interviewed, will be released after the conclusion of the study in 2019.

For many DoD missions, our ability to collect information has outpaced our ability to analyze it. Graph algorithms and large-scale machine learning algorithms are key to analyzing the information agencies collect. They are also an increasingly important component of intelligence analysis, autonomous systems, cyber intelligence and security, logistics optimization, and more. In this blog post, we describe research to develop automated code generation for future-compatible graph libraries: building blocks of high-performance code that can be automatically generated for any future platform.

The size of aerospace software, as measured in source lines of code (SLOC), has grown rapidly. Airbus and Boeing data show that SLOC have doubled every four years. The current generation of aircraft software exceeds 25 million SLOC (MSLOC). These systems must satisfy safety-critical, embedded, real-time, and security requirements. Consequently, they cost significantly more than general-purpose systems. Their design is more complex, due to quality attribute requirements, high connectivity among subsystems, and sensor dependencies--each of which affects all system development phases but especially design, integration, and verification and validation.

Numerous tools exist to help detect flaws in code. Some of these are called flaw-finding static analysis (FFSA) tools because they identify flaws by analyzing code without running it. Typical output of an FFSA tool includes a list of alerts for specific lines of code with suspected flaws. This blog post presents our initial work on applying static analysis test suites in a novel way: automatically generating a large amount of labeled data for a wide variety of code flaws to jump-start static analysis alert classifiers (SAACs). SAACs are designed to automatically estimate the likelihood that any given alert indicates a genuine flaw.
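As a rough illustration of what a SAAC does (not the SEI classifiers themselves), the sketch below trains a scikit-learn classifier on synthetic, pre-labeled alert features and uses predict_proba to estimate the likelihood that each alert is a genuine flaw; the features and data are entirely hypothetical.

```python
# Minimal sketch of a static analysis alert classifier (SAAC), assuming alerts
# have already been converted to numeric features and labeled. The features
# and data here are synthetic stand-ins, not real FFSA output.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Hypothetical per-alert features (e.g., encoded checker ID, function size,
# cyclomatic complexity, nesting depth); labels: 1 = genuine flaw, 0 = false alarm.
rng = np.random.default_rng(0)
X = rng.random((500, 4))
y = (X[:, 1] + 0.5 * X[:, 2] + 0.2 * rng.random(500) > 1.0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

# predict_proba yields the estimated likelihood that each alert is a genuine
# flaw, which is the quantity a SAAC provides for alert triage.
likelihoods = clf.predict_proba(X_test)[:, 1]
print(np.round(likelihoods[:5], 3))
```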

This blog post is also authored by Forrest Shull.

Modern software systems are constantly exposed to attacks from adversaries that, if successful, could prevent a system from functioning as intended or could result in exposure of confidential information. Accounts of credit card theft and other types of security breaches concerning a broad range of cyber-physical systems, transportation systems, self-driving cars, and so on, appear almost daily in the news. Building any public-facing system clearly demands a systematic approach for analyzing security needs and documenting mitigating requirements. In this blog post, which was excerpted from a recently published technical report, we present the Hybrid Threat Modeling Method that our team of researchers developed after examining popular methods.

Cost estimation was cited by the Government Accountability Office (GAO) as one of the top two reasons why DoD programs continue to have cost overruns. How can we better estimate and manage the cost of systems that are increasingly software intensive? To contain costs, it is essential to understand the factors that drive costs and which ones can be controlled. Although we understand the relationships between certain factors, we do not yet separate the causal influences from non-causal statistical correlations. In this blog post, we explore how the use of an approach known as causal learning can help the DoD identify factors that actually cause software costs to soar and therefore provide more reliable guidance as to how to intervene to better control costs.
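To illustrate why separating causal influence from mere correlation matters, the following toy simulation (made-up data, not DoD cost data, with purely hypothetical variable names) shows how a confounder can make a non-causal factor look like a cost driver until the analysis adjusts for it.

```python
# Toy illustration of correlation versus causation using simulated data only.
import numpy as np

rng = np.random.default_rng(42)
n = 5000

# Hypothetical confounder: overall system scale drives both the number of
# contractors on a program and its final cost; contractor count has no direct
# causal effect on cost in this simulation.
scale = rng.normal(size=n)
contractors = 2.0 * scale + rng.normal(size=n)
cost = 3.0 * scale + rng.normal(size=n)

# Naive view: contractors and cost are strongly correlated.
print("corr(contractors, cost) =", round(np.corrcoef(contractors, cost)[0, 1], 3))

# Regressing cost on contractors alone suggests a sizable "effect".
naive = np.linalg.lstsq(np.column_stack([contractors, np.ones(n)]), cost, rcond=None)[0]
print("naive slope:", round(naive[0], 3))

# Adjusting for the confounder (as a causal analysis would) drives the
# estimated effect of contractor count toward its true value of zero.
adjusted = np.linalg.lstsq(
    np.column_stack([contractors, scale, np.ones(n)]), cost, rcond=None)[0]
print("adjusted slope for contractors:", round(adjusted[0], 3))
```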

When considering best practices in egress filtering, it is important to remember that egress filtering is not focused on protecting your network, but rather on protecting other organizations' networks. For example, the May 2017 WannaCry ransomware attack is believed to have exploited an exposed vulnerability in the Server Message Block (SMB) protocol and spread rapidly via communications over port 445. Egress and ingress filtering of port 445 would have helped limit the spread of WannaCry. In this post--a companion piece to Best Practices for Network Border Protection, which highlighted best practices for filtering inbound traffic--I explore best practices and considerations for egress filtering.
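As a minimal sketch of the decision an egress filter encodes (real deployments implement this on border firewalls or router access control lists, not in application code), the snippet below denies outbound connections on SMB/NetBIOS ports such as TCP 445; the port list is only a hypothetical starting point.

```python
# Toy sketch of an egress-filtering decision: block outbound connections on
# ports that have no business leaving the network border. The port list is a
# hypothetical example, not a complete policy.
BLOCKED_EGRESS_TCP_PORTS = {135, 137, 138, 139, 445}  # SMB/NetBIOS family

def allow_egress(dst_port: int, protocol: str = "tcp") -> bool:
    """Return True if an outbound connection should be permitted."""
    if protocol == "tcp" and dst_port in BLOCKED_EGRESS_TCP_PORTS:
        return False
    return True

print(allow_egress(443))  # True: ordinary outbound HTTPS
print(allow_egress(445))  # False: SMB should not cross the network border
```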

Almost 30 years ago, the SEI's CERT Coordination Center established a program that enabled security researchers in the field to report vulnerabilities they found in an organization's software or systems. But this capability did not always include vulnerabilities found on Department of Defense (DoD) sites. In 2017, the SEI helped expand vulnerability reporting to the DoD by establishing the DoD Vulnerability Disclosure program. This blog post, which was adapted from an article in the recently published 2017 Year in Review, highlights our work on this program.

This post is co-authored by Thomas Scanlon.

When a private key in a public-key infrastructure (PKI) environment is lost or stolen, the compromised end-entity certificate can be used to impersonate the principal (a singular and identifiable logical or physical entity, such as a person, machine, server, or device) associated with it. An end-entity certificate is one that does not have certification authority to authorize other certificates. Consequently, the scope of a compromise or loss of an end-entity private key is limited to only those certificates whose keys were lost.

Because the certificate provides the identity used for authorization, authentication with a compromised certificate can lead to critical consequences, such as loss of proprietary data or exposure of sensitive information. Compromised certificates can be used as client-authentication certificates in SSL to authenticate principals associated with the certificate (e.g., a principal mapped in Active Directory, LDAP, or another database), or they may be accepted as is, depending on the service. This blog post describes strategies for recovering from, and minimizing the consequences of, the loss or compromise of an end-entity private key.
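As a small illustration of what distinguishes an end-entity certificate, the sketch below (using the third-party cryptography package; the file path is a hypothetical example) checks the BasicConstraints extension: CA=False means the certificate cannot authorize other certificates, which is why the scope of a key compromise stays limited.

```python
# Minimal sketch: determine whether a certificate is an end-entity certificate
# (i.e., not authorized to sign other certificates). Uses the third-party
# "cryptography" package; the PEM file path is a hypothetical placeholder.
from cryptography import x509
from cryptography.x509.oid import ExtensionOID

def is_end_entity(pem_path: str) -> bool:
    with open(pem_path, "rb") as f:
        cert = x509.load_pem_x509_certificate(f.read())
    try:
        bc = cert.extensions.get_extension_for_oid(
            ExtensionOID.BASIC_CONSTRAINTS).value
    except x509.ExtensionNotFound:
        # No BasicConstraints extension: treat as end-entity for this sketch.
        return True
    # CA=False means the certificate cannot authorize other certificates, so a
    # compromise is limited to the identities bound to this key.
    return not bc.ca

if __name__ == "__main__":
    print("end-entity:", is_end_entity("server-cert.pem"))
```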

As detailed in last week's post, SEI researchers recently identified a collection of vulnerabilities and risks faced by organizations moving data and applications to the cloud. In this blog post, we outline best practices that organizations should use to address the vulnerabilities and risks in moving applications and data to cloud services.

These practices are geared toward small and medium-sized organizations; however, all organizations, regardless of size, can use them to improve the security of their cloud usage. It is important to note that these best practices are not exhaustive and should be complemented with practices provided by cloud service providers, general cybersecurity best practices, regulatory compliance requirements, and practices defined by cloud trade associations, such as the Cloud Security Alliance.

As we stressed in our previous post, organizations must perform due diligence before moving data or applications to the cloud. Cloud service providers (CSPs) use a shared responsibility model for security. The CSP accepts responsibility for some aspects of security. Other aspects of security are shared between the CSP and the consumer or remain the sole responsibility of the consumer.

This post details four important practices, and specific actions, that organizations can use to feel secure in the cloud.

Organizations continue to develop new applications in, or migrate existing applications to, cloud-based services. The federal government recently made cloud adoption a central tenet of its IT modernization strategy. An organization that adopts cloud technologies or chooses cloud service providers (CSPs) and services or applications without becoming fully informed of the risks involved exposes itself to a myriad of commercial, financial, technical, legal, and compliance risks. In this blog post, we outline 12 risks, threats, and vulnerabilities that organizations face when moving applications or data to the cloud.

As part of an ongoing effort to keep you informed about our latest work, this blog post summarizes some recently published SEI reports, podcasts, and presentations highlighting our work in cyber risk and resilience management, Agile/DevOps and risk management, best practices in insider threat, and dynamic design analysis. This post also includes a link to our recently published 2017 SEI Year in Review. These publications highlight the latest work of SEI technologists in these areas. This post includes a listing of each publication, author(s), and links where they can be accessed on the SEI website.

As U.S. Department of Defense (DoD) mission-critical and safety-critical systems become increasingly connected, exposure from security infractions is likewise increasing. In the past, system developers worked on the assumption that, because their systems were not connected and did not interact with other systems, they did not have to worry about security. "Closed" system assumptions, however, are no longer valid, and security threats affect the safe operation of systems.

To address exponential growth in the cost of system development due to the increased complexity of interactions and mismatched assumptions in embedded software systems, the safety-critical system community has embraced virtual system integration and analysis of embedded systems. In this blog post, I describe our efforts to demonstrate how virtual system integration can be extended to address security concerns at the architecture level and complement code-level security analysis.

In a previous blog post, we addressed how machine learning is becoming ever more useful in cybersecurity and introduced some basic terms, techniques, and workflows that are essential for those who work in machine learning. Although traditional machine learning methods are already successful for many problems, their success often depends on choosing and extracting the right features from a dataset, which can be hard for complex data. For instance, what kinds of features might be useful, or even possible to extract, from all the photographs on Google Images, all the tweets on Twitter, all the sounds of a spoken language, or all the positions in the board game Go? This post introduces deep learning, a popular and quickly growing subfield of machine learning that has had great success on these kinds of datasets and on many other problems where picking the right features for the job is hard or impossible.

DevOps is a set of development practices that emphasizes collaboration, communication, and automation throughout the application lifecycle. In DevOps, all stakeholders--including IT operations staff, testers, developers, customers, and security personnel--are embedded from the inception of the project to its end. This blog post describes SEI research and customer engagements aimed at applying DevOps practices that are typically used at the end of the lifecycle to automate governance at the beginning of the development timeline.

In the SEI's examination of the software sustainment phase of the Department of Defense (DoD) acquisition lifecycle, we have noted that the best descriptor for software sustainment efforts is "continuous engineering." Typically, during this phase, the hardware elements are repaired or undergo structural modifications to carry new weapons or sensors. Software, on the other hand, continues to evolve in response to new security threats, new safety approaches, or new functionality provided within the system of systems. In this blog post, I will examine the intersection of three themes--product line practices, software sustainment, and public-private partnerships--that emerged during our work with one government program. I will also highlight some issues we have uncovered that deserve further discussion and research.

As the use of unmanned aircraft systems (UASs) increases, the volume of potentially useful video data that UASs capture on their missions is straining the U.S. military resources needed to process and use that data. This publicly released video is an example of footage captured by a UAS in Iraq. The video shows ISIS fighters herding civilians into a building. U.S. forces did not fire on the building because of the presence of civilians. Note that this video footage was likely processed by U.S. Central Command (CENTCOM) prior to its public release to highlight important activities within the video, such as ISIS fighters carrying weapons, civilians being herded into the building to serve as human shields, and muzzle flashes emanating from the building.

Micro-expressions--involuntary, fleeting facial movements that reveal true emotions--hold valuable information for scenarios ranging from security interviews and interrogations to media analysis. They occur on various regions of the face, last only a fraction of a second, and are universal across cultures. In contrast to macro-expressions like big smiles and frowns, micro-expressions are extremely subtle and nearly impossible to suppress or fake. Because micro-expressions can reveal emotions people may be trying to hide, recognizing micro-expressions can aid DoD forensics and intelligence mission capabilities by providing clues to predict and intercept dangerous situations. This blog post, the latest highlighting research from the SEI Emerging Technology Center in machine emotional intelligence, describes our work on developing a prototype software tool to recognize micro-expressions in near real-time.