
SEI Blog

The Latest Research in Software Engineering and Cybersecurity

Dynamic Network Defense (or Moving Target Defense) is based on a simple premise: a moving target is harder to attack than a stationary one. In recent years, the government has invested substantially in moving target and adaptive cyber defense. This rapidly growing field has produced many new technologies, including defenses that range from shuffling client-to-server assignments to protect against distributed denial-of-service (DDoS) attacks, to packet header rewriting, to rebooting servers. As researchers develop new technologies, they need a centralized reference platform where those technologies can be vetted to see where they complement each other and where they do not, and that can serve as a standard against which future technologies are evaluated. This blog post describes work led by researchers at the SEI's Emerging Technology Center (ETC) to develop a secure, easy-to-use, and consistent development and deployment path for organizing dynamic defenses.
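
The shuffling defense mentioned above can be illustrated with a minimal sketch (the client names and server addresses below are hypothetical, and this is not part of the ETC platform): if client-to-server assignments are re-randomized on every epoch, any mapping an attacker has learned quickly goes stale.

```python
import random
from typing import Dict, List

def shuffle_assignments(clients: List[str], servers: List[str],
                        rng: random.Random) -> Dict[str, str]:
    """Randomly reassign each client to a server in the replica pool."""
    return {client: rng.choice(servers) for client in clients}

# Hypothetical example: re-shuffle the mapping each epoch so that a
# client-to-server assignment learned by an attacker becomes stale.
clients = [f"client-{i}" for i in range(5)]
servers = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]
for epoch in range(3):
    assignment = shuffle_assignments(clients, servers, random.Random(epoch))
    print(f"epoch {epoch}: {assignment}")
```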

Much of the malware that we analyze includes some type of remote access capability. Malware analysts broadly refer to this type of malware as a remote access tool (RAT). Many well-known malware families, such as DarkComet, include RAT capabilities. As described in this series of posts, CERT researchers are exploring ways to automate common malware analysis activities. In a previous post, I discussed the Pharos Binary Analysis Framework and tools to support reverse engineering object-oriented code. In this post, I will explain how to statically characterize program behavior using application programming interface (API) calls and then discuss how we automated this reasoning with a malware analysis tool that we call ApiAnalyzer.
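
As a rough illustration of the idea (this is not the ApiAnalyzer implementation; the function and signature below are hypothetical), a behavioral signature can be expressed as an ordered list of API calls and matched against the calls recovered along a program path, such as the classic process-injection sequence often seen in RATs:

```python
from typing import Sequence

# A behavioral signature is an ordered list of API calls; here, a
# well-known process-injection pattern.
INJECTION_SIGNATURE = ["OpenProcess", "VirtualAllocEx",
                       "WriteProcessMemory", "CreateRemoteThread"]

def matches_signature(api_trace: Sequence[str],
                      signature: Sequence[str]) -> bool:
    """Return True if the signature appears as an ordered (not necessarily
    contiguous) subsequence of the recovered API-call trace."""
    it = iter(api_trace)
    return all(any(call == wanted for call in it) for wanted in signature)

# Example: API calls statically recovered along one program path
trace = ["GetModuleHandleA", "OpenProcess", "VirtualAllocEx",
         "WriteProcessMemory", "Sleep", "CreateRemoteThread"]
print(matches_signature(trace, INJECTION_SIGNATURE))  # True
```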

In cyber systems, the identities of devices can easily be spoofed and are frequent targets of cyberattacks. Once an identity is fabricated, stolen, or spoofed, it may be used as a nexus for attacking systems, forming the basis of a Sybil attack, in which a single adversary presents many counterfeit identities. To address these and other problems associated with identity deception, researchers at the Carnegie Mellon University Software Engineering Institute, New York University's Tandon School of Engineering and Courant Institute of Mathematical Sciences, and the University of Göttingen (Germany) collaborated to develop a deception-resistant identity management system inspired by biological systems, namely ant colonies. This blog post highlights our research contributions.
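
To see why fabricated identities are dangerous, consider a minimal sketch (illustrative only, not the system described in the post): an adversary who can mint identities cheaply can outvote every honest device in any decision that counts one vote per identity.

```python
from typing import Dict

def majority_vote(votes: Dict[str, bool]) -> bool:
    """Decide by simple majority, counting one vote per identity."""
    return sum(votes.values()) > len(votes) / 2

# Five honest devices reject a malicious proposal...
votes = {f"device-{i}": False for i in range(5)}

# ...but one adversary fabricates ten Sybil identities that all accept it.
votes.update({f"sybil-{i}": True for i in range(10)})

print(majority_vote(votes))  # True: the fabricated identities carry the vote
```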

This is the third installment in a series of three blog posts highlighting seven recommended practices for acquiring intellectual property. This content was originally published on the Cyber Security & Information Analysis Center's online environment known as SPRUCE (Systems and Software Producibility Collaboration Environment). The first post in the series explored the challenges of acquiring intellectual property. The second post presented the first four of the seven practices. This post presents the final three practices, as well as the conditions under which organizations will derive the most benefit from them.


When I was a chief architect working in industry, I was repeatedly asked the same questions: What makes an architect successful? What skills does a developer need to become a successful architect? There are no easy answers to these questions. For example, in my experience, architects are most successful when their skills and capabilities match a project's specific needs. Too often, in answering the question of what skills make a successful architect, the focus is on skills such as communication and leadership. While these are important, an architect must have strong technical skills to design, model, and analyze the architecture. As this post will explain, as a software system moves through its lifecycle, each phase calls for the architect to use a different mix of skills. This post also identifies three failure patterns that I have observed working with industry and government software projects.

This is the second installment in a series of three blog posts highlighting seven recommended practices for acquiring intellectual property. This content was originally published on the Cyber Security & Information Analysis Center's online environment known as SPRUCE (Systems and Software Producibility Collaboration Environment). The first post in the series explored the challenges of acquiring intellectual property. This post, which can be read in its entirety on the SPRUCE website, presents the first four of the seven recommended practices for acquiring intellectual property.

Software and acquisition professionals often have questions about recommended practices related to modern software development methods, techniques, and tools, such as how to apply Agile methods within government acquisition frameworks, how to systematically verify and validate safety-critical systems, and how to manage operational risk. In the Department of Defense (DoD), these techniques are just a few of the options available to face the myriad challenges in producing large, secure software-reliant systems on schedule and within budget.

This post was also co-authored by Carol Woody.

Increasingly, software development organizations are finding that a large share of their vulnerabilities stem from design weaknesses rather than coding errors. Recent statistics indicate that research should focus on identifying design weaknesses to reduce the volume of software bugs. In 2011, for example, when MITRE released its list of the 25 most dangerous software errors, approximately 75 percent of those errors represented design weaknesses. Viewed through another lens, more than one third of the 940 currently known common weakness enumerations (CWEs) are design weaknesses. Static analysis tools cannot find these weaknesses, so they are not addressed prior to implementation and are typically detected only after the software has been executed, when vulnerabilities are much more costly and time-consuming to address. In this blog post, the first in a series, we present a tool that supports a new architecture model that can identify structures in the design and code base that have a high likelihood of containing bugs, hidden dependencies in code bases, and structural design flaws.
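
The tool itself is introduced in the post, but as a hedged sketch of one way hidden dependencies can be surfaced (not the tool's actual algorithm; the file names and threshold below are hypothetical), file pairs that repeatedly change together in revision history yet share no declared structural dependency are candidates for hidden coupling:

```python
from itertools import combinations
from typing import Dict, List, Set, Tuple

def hidden_dependencies(commits: List[Set[str]],
                        structural_deps: Set[Tuple[str, str]],
                        min_cochanges: int = 3) -> Dict[Tuple[str, str], int]:
    """Flag file pairs that co-change often but share no declared dependency.

    commits: the set of files touched by each commit
    structural_deps: known (caller, callee) pairs from imports/includes
    """
    counts: Dict[Tuple[str, str], int] = {}
    for files in commits:
        for a, b in combinations(sorted(files), 2):
            counts[(a, b)] = counts.get((a, b), 0) + 1
    return {pair: n for pair, n in counts.items()
            if n >= min_cochanges
            and pair not in structural_deps
            and (pair[1], pair[0]) not in structural_deps}

# Example: ui.py and cache.py change together but never reference each other
commits = [{"ui.py", "cache.py"}, {"ui.py", "cache.py", "db.py"},
           {"ui.py", "cache.py"}, {"db.py"}]
deps = {("ui.py", "db.py")}
print(hidden_dependencies(commits, deps))  # {('cache.py', 'ui.py'): 3}
```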