SEI Blog

The Latest Research in Software Engineering and Cybersecurity

To deliver enhanced integrated warfighting capability at lower cost across the enterprise and over the lifecycle, the Department of Defense (DoD) must move away from stove-piped solutions and toward a limited number of technical reference frameworks based on reusable hardware and software components and services. There have been previous efforts in this direction, but in an era of sequestration and austerity, the DoD has reinvigorated its efforts to identify effective methods of creating more affordable acquisition choices and reducing the cycle time for initial acquisition and new technology insertion. This blog posting is part of an ongoing series on how acquisition professionals and system integrators can apply Open Systems Architecture (OSA) practices to decompose large monolithic business and technical designs into manageable, capability-oriented frameworks that can integrate innovation more rapidly and lower total ownership costs. The focus of this posting is on the evolution of DoD combat systems from ad hoc stovepipes to more modular and layered architectures.
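To make the stovepipe-versus-modular contrast concrete, here is a minimal Python sketch (all names are hypothetical, not drawn from any DoD framework): the capability is declared once as a stable interface, and competing implementations can be swapped in without changing the systems that consume it.

```python
from abc import ABC, abstractmethod

class TrackCorrelator(ABC):
    """A hypothetical 'capability' declared once as a stable interface."""
    @abstractmethod
    def correlate(self, tracks: list) -> list: ...

class LegacyCorrelator(TrackCorrelator):
    """One vendor's implementation behind the shared interface."""
    def correlate(self, tracks):
        return sorted(tracks, key=lambda t: t["id"])

class ImprovedCorrelator(TrackCorrelator):
    """A drop-in replacement; consumers of the interface are unchanged."""
    def correlate(self, tracks):
        return sorted(tracks, key=lambda t: (t["priority"], t["id"])),

def combat_system(correlator: TrackCorrelator, tracks: list) -> list:
    # The system depends only on the interface, never on one stovepipe.
    return correlator.correlate(tracks)

tracks = [{"id": 2, "priority": 1}, {"id": 1, "priority": 2}]
print(combat_system(LegacyCorrelator(), tracks))
```

In an OSA setting, the interface plays the role of the technical reference framework: new technology insertion becomes a matter of supplying another implementation rather than reworking every consumer.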

As part of an ongoing effort to keep you informed about our latest work, I would like to let you know about some recently published SEI technical reports and notes. These reports highlight the latest work of SEI technologists in Secure Java and Android Coding, Cybersecurity Capability Measurement, Managing Insider Threat, the CERT Resilience Management Model, Network Situational Awareness, and Security and Survivability. This post lists each report and its authors, with links to the published reports on the SEI website.

The verification and validation of requirements are a critical part of systems and software engineering. The importance of verification and validation (especially testing) is a major reason that the traditional waterfall development cycle underwent a minor modification to create the V model that links early development activities to their corresponding later testing activities. This blog post introduces three variants on the V model of system or software development that make it more useful to testers, quality engineers, and other stakeholders interested in the use of testing as a verification and validation method.
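As a quick illustration of the pairing at the heart of the V model (my sketch, using representative phase names rather than anything from the post), each left-side development activity maps directly to the right-side test activity that verifies it:

```python
# Hypothetical pairing of V-model development phases (left side)
# with the test phases that verify them (right side).
V_MODEL = {
    "requirements analysis": "acceptance testing",
    "system design": "system testing",
    "architecture design": "integration testing",
    "detailed design": "unit testing",
}

def verifying_activity(phase: str) -> str:
    """Return the test activity paired with a development phase."""
    return V_MODEL[phase]

print(verifying_activity("architecture design"))  # integration testing
```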

In early 2012, a backdoor Trojan malware named Flame was discovered in the wild. When fully deployed, Flame proved very hard for malware researchers to analyze. In December of that year, Wired magazine reported that before Flame had been unleashed, samples of the malware had been lurking, undiscovered, in repositories for at least two years. As Wired also reported, this was not an isolated event. Every day, major anti-virus companies and research organizations are inundated with new malware samples.
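As a loose sketch of how such a flood of incoming samples is commonly triaged (my illustration, not anything reported by Wired or described in the post), exact duplicates can be filtered out by cryptographic hash before any deeper analysis begins:

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Hash a sample so exact duplicates can be skipped."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def triage(sample_dir: str) -> dict:
    """Keep one representative file per unique hash; the rest are duplicates."""
    unique = {}
    for path in Path(sample_dir).glob("*"):
        if path.is_file():
            unique.setdefault(sha256_of(path), path)
    return unique
```

Deduplication only removes byte-identical samples, of course; the harder problem the post alludes to is recognizing the many near-duplicate variants that remain.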

The size and complexity of aerospace software systems have increased significantly in recent years. When measured in source lines of code (SLOC), the size of these systems has doubled every four years since the mid-1990s, according to a recent SEI technical report. Producing the 27 million SLOC projected for 2010 to 2020 is expected to cost more than $10 billion. These increases in size and cost have also been accompanied by significant increases in errors and rework after a system has been deployed. Mismatched assumptions between hardware, software, and their interactions often result in system problems that are detected only after the system has been deployed, when rework is much more expensive to complete.
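The growth claim is easy to make concrete. A minimal sketch, assuming a four-year doubling period and a hypothetical mid-1990s baseline (the report's actual baseline figure is not quoted here), projects system size over time:

```python
def projected_sloc(baseline_sloc: float, baseline_year: int, year: int,
                   doubling_period: float = 4.0) -> float:
    """Size grows as baseline * 2^((year - baseline_year) / doubling_period)."""
    return baseline_sloc * 2 ** ((year - baseline_year) / doubling_period)

# Assumed baseline of 1.7M SLOC in 1996; four doublings (16 years)
# multiply it by 16, reaching roughly 27M SLOC by 2012.
print(projected_sloc(1.7e6, 1996, 2012))  # ~27,200,000
```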

Analyzing Routing Tables

Occasionally this blog will highlight different posts from the SEI blogosphere. Today we are highlighting a post from the CERT/CC Blog by Timur Snoke, a member of the technical staff in the SEI's CERT Division. This post describes maps that Timur has developed using Border Gateway Protocol (BGP) routing tables to show the evolution of public-facing autonomous system numbers (ASNs). These maps help analysts inspect the BGP routing tables to reveal disruptions to an organization's infrastructure. They also help analysts glean geopolitical information about an organization, country, or city-state, which helps them identify how and when network traffic is subverted to travel nefarious alternative paths, deliberately placing communications at risk.
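As a rough sketch of the underlying idea (not Timur's actual tooling or data formats), one can extract the origin ASN from each AS path in a routing-table dump and watch for unexpected origins announcing a prefix:

```python
from collections import defaultdict

# Hypothetical (prefix, AS-path) records as they might appear in a
# textual BGP routing-table dump; real dumps are usually MRT files.
RIB_ENTRIES = [
    ("192.0.2.0/24", "64500 64501 64510"),
    ("192.0.2.0/24", "64500 64502 64510"),
    ("198.51.100.0/24", "64500 64501 64666"),  # unexpected origin
]

def origins_by_prefix(entries):
    """The last ASN in an AS path is the origin that announced the prefix."""
    origins = defaultdict(set)
    for prefix, as_path in entries:
        origins[prefix].add(int(as_path.split()[-1]))
    return origins

for prefix, asns in origins_by_prefix(RIB_ENTRIES).items():
    print(prefix, sorted(asns))
```

A prefix that suddenly acquires a new origin ASN, or whose paths begin transiting an unfamiliar network, is exactly the kind of signal that suggests traffic may be taking a subverted route.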

New data sources, ranging from diverse business transactions to social media, high-resolution sensors, and the Internet of Things, are creating a digital tidal wave of big data that must be captured, processed, integrated, analyzed, and archived. Big data systems storing and analyzing petabytes of data are becoming increasingly common in many application areas. These systems represent major, long-term investments requiring considerable financial commitments and massive-scale software and system deployments.
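The capture-to-archive flow reads naturally as a staged pipeline. Here is a minimal sketch (my framing, not the post's) that composes those stages over a toy stream of records:

```python
from functools import reduce

def capture(source):    # ingest raw records from a source
    return list(source)

def process(records):   # normalize each record
    return [r.strip().lower() for r in records]

def integrate(records): # merge and deduplicate across sources
    return sorted(set(records))

def analyze(records):   # derive a simple summary
    return {"count": len(records), "sample": records[:3]}

def pipeline(source, stages=(capture, process, integrate, analyze)):
    """Apply each stage to the previous stage's output, in order."""
    return reduce(lambda data, stage: stage(data), stages, source)

print(pipeline(["Alpha ", "beta", "ALPHA"]))
# {'count': 2, 'sample': ['alpha', 'beta']}
```

At petabyte scale each of these stages becomes a distributed subsystem in its own right, which is what makes the architecture of such systems a long-term investment decision.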

When life- and safety-critical systems fail (and this happens in many domains), the results can be dire, including loss of property and life. These types of systems are increasingly prevalent and can be found in the attitude control systems of a satellite, the software-reliant systems of a car (such as its cruise control and anti-lock braking system), and medical devices that emit radiation. When developing such systems, software and systems architects must balance the need for stability and safety with stakeholder demands and time-to-market constraints. The Architecture Analysis & Design Language (AADL) helps software and system architects address the challenges of designing life- and safety-critical systems by providing a modeling notation, with well-defined real-time and architectural semantics, that employs textual and graphic representations. This blog posting, part of an ongoing series on AADL, focuses on the initial foundations of AADL.
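AADL is a modeling notation rather than a programming language, but the kind of timing analysis its real-time semantics enable can be loosely illustrated in Python. The sketch below applies the classic rate-monotonic utilization bound (a standard schedulability test, not an AADL-specific algorithm) to hypothetical periodic threads with declared periods and worst-case execution times:

```python
from dataclasses import dataclass

@dataclass
class PeriodicThread:
    name: str
    period_ms: float     # declared dispatch period
    exec_time_ms: float  # worst-case execution time

def rm_schedulable(threads) -> bool:
    """Liu & Layland bound: sum(C/T) <= n * (2^(1/n) - 1)."""
    n = len(threads)
    utilization = sum(t.exec_time_ms / t.period_ms for t in threads)
    return utilization <= n * (2 ** (1 / n) - 1)

# Hypothetical model: a control thread and a telemetry thread.
threads = [PeriodicThread("control", 10, 2),
           PeriodicThread("telemetry", 50, 10)]
print(rm_schedulable(threads))  # True: utilization 0.4 <= bound ~0.828
```

An AADL model declares exactly these kinds of properties (periods, execution times, processor bindings) on components, so analysis tools can run such checks before the system is ever built.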