SEI Blog

The Latest Research in Software Engineering and Cybersecurity

As the pace of software delivery increases, organizations need guidance on how to deliver software rapidly while still meeting demands for time-to-market, cost, productivity, and quality. In practice, demands for adding new features or fixing defects often take priority. However, when software developers are guided solely by project management measures, such as progress on requirements and defect counts, they can overlook the impact of architectural dependencies, which can impede the progress of a project if not properly managed. In previous posts on this blog, my colleague Ipek Ozkaya and I have focused on architectural technical debt, which refers to the rework and degraded quality that result from overly hasty delivery of software capabilities to users. This blog post describes a first step towards an approach we developed that aims to use qualitative architectural measures to better inform quantitative code quality metrics.
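As a rough illustration of why dependency structure matters alongside defect counts, consider the sketch below. It is not the approach this post describes; the module names, dependency edges, and thresholds are invented, and the point is only that a module with few open defects can still carry high change-impact risk when many other modules depend on it.

```python
# Hypothetical illustration: a defect-count view alone can hide architectural risk.
# Module names, dependency edges, and thresholds are invented for this example.

from collections import Counter

# (from_module, to_module) compile-time dependencies
dependencies = [
    ("ui", "billing"), ("reports", "billing"), ("api", "billing"),
    ("batch", "billing"), ("billing", "db"), ("ui", "api"),
]

open_defects = {"ui": 7, "api": 3, "billing": 1, "reports": 2, "batch": 0, "db": 0}

# How many modules depend on each module (fan-in)
fan_in = Counter(to for _, to in dependencies)

for module, defects in sorted(open_defects.items()):
    # A module with few defects but many dependents is still costly to change:
    # rework in it ripples into every dependent module.
    risk = "HIGH" if fan_in[module] >= 3 else "low"
    print(f"{module:8s} defects={defects}  fan-in={fan_in[module]}  change-impact risk: {risk}")
```

In this toy example, the billing module has the fewest open defects but the highest fan-in, so a purely defect-driven view would miss the module where rework is most expensive.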

Safety-critical avionics, aerospace, medical, and automotive systems are becoming increasingly reliant on software. Malfunctions in these systems can have significant consequences, including mission failure and loss of life. They must therefore be designed, verified, and validated carefully to ensure that they comply with system specifications and requirements and are error-free. In the automotive domain, for example, cars contain many electronic control units (ECUs) that communicate to control systems such as airbag deployment, anti-lock brakes, and power steering; today's standard vehicle can contain up to 30 ECUs.

Government agencies, including the Departments of Defense, Veterans Affairs, and the Treasury, are being asked by their government program offices to adopt Agile methods. These are organizations that have traditionally used a "waterfall" life cycle model (as epitomized by the engineering "V" charts). Programming teams in these organizations are accustomed to being managed through a series of document-centric technical reviews that focus on the evolution of the artifacts describing the requirements and design of the system, rather than on its evolving implementation, as is more common with Agile methods. Because of these differences, many of these organizations struggle to adopt Agile practices. For example, acquisition staff often wonder how to fit Agile measurement practices into their progress-tracking systems.

To deliver enhanced integrated warfighting capability at lower cost across the enterprise and over the lifecycle, the Department of Defense (DoD) must move away from stove-piped solutions and towards a limited number of technical reference frameworks based on reusable hardware and software components and services. There have been previous efforts in this direction, but in an era of sequestration and austerity, the DoD has reinvigorated its efforts to identify effective methods of creating more affordable acquisition choices and reducing the cycle time for initial acquisition and new technology insertion. This blog posting is part of an ongoing series on how acquisition professionals and system integrators can apply Open Systems Architecture (OSA) practices to decompose large monolithic business and technical designs into manageable, capability-oriented frameworks that can integrate innovation more rapidly and lower total ownership costs. The focus of this posting is on the evolution of DoD combat systems from ad hoc stovepipes to more modular and layered architectures.
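To see in the small what "modular and layered" buys over a stovepipe, the sketch below defines a capability against an abstract interface so that a new component can be inserted without touching its consumers. It is an illustration only, not a DoD technical reference framework; the TrackSource interface and the sensor classes are invented for the example.

```python
# Illustration only: a capability defined against an abstract interface (layered/modular)
# versus being hard-wired to one implementation (stovepiped).
# The TrackSource interface and concrete classes are invented for this sketch.

from abc import ABC, abstractmethod


class TrackSource(ABC):
    """Capability-level interface: any compliant sensor component can plug in."""

    @abstractmethod
    def current_tracks(self) -> list:
        ...


class LegacyRadar(TrackSource):
    def current_tracks(self) -> list:
        return ["track-001", "track-002"]


class NewElectroOptical(TrackSource):
    def current_tracks(self) -> list:
        return ["track-003"]


def display_tracks(source: TrackSource) -> None:
    # The display layer depends only on the interface, so inserting a new
    # sensor technology does not require changes here.
    for track in source.current_tracks():
        print("displaying", track)


display_tracks(LegacyRadar())
display_tracks(NewElectroOptical())  # swapped in without modifying display_tracks
```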

As part of an ongoing effort to keep you informed about our latest work, I would like to let you know about some recently published SEI technical reports and notes. These reports highlight the latest work of SEI technologists in Secure Java and Android Coding, Cybersecurity Capability Measurement, Managing Insider Threat, the CERT Resilience Management Model, Network Situational Awareness, and Security and Survivability. This post includes a listing of each report, author(s), and links where the published reports can be accessed on the SEI website.

The verification and validation of requirements are a critical part of systems and software engineering. The importance of verification and validation (especially testing) is a major reason that the traditional waterfall development cycle underwent a minor modification to create the V model that links early development activities to their corresponding later testing activities. This blog post introduces three variants on the V model of system or software development that make it more useful to testers, quality engineers, and other stakeholders interested in the use of testing as a verification and validation method.
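The linkage at the heart of the V model can be summarized as a simple pairing of left-leg development activities with the right-leg test activities that verify them. The mapping below is a common textbook rendering offered only for illustration; it does not reproduce the three variants introduced in the post.

```python
# A common textbook pairing of V-model development activities with the
# test activities that verify them (illustration only).

v_model_links = {
    "system requirements": "acceptance testing",
    "system architecture": "system / integration testing",
    "detailed design":     "unit testing",
}

for dev_activity, test_activity in v_model_links.items():
    # Each left-leg artifact is linked to the right-leg activity that verifies it,
    # so test planning can begin while the corresponding artifact is being written.
    print(f"{dev_activity:22s} <-> {test_activity}")
```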

In early 2012, backdoor Trojan malware named Flame was discovered in the wild. When fully deployed, Flame proved very hard for malware researchers to analyze. In December of that year, Wired magazine reported that before Flame had been unleashed, samples of the malware had been lurking, undiscovered, in repositories for at least two years. As Wired also reported, this was not an isolated event. Every day, major anti-virus companies and research organizations are inundated with new malware samples.

The size and complexity of aerospace software systems have increased significantly in recent years. When measured in source lines of code (SLOC), the size of these systems has doubled every four years since the mid-1990s, according to a recent SEI technical report. Producing the 27 million SLOC projected for 2010 through 2020 is expected to cost more than $10 billion. These increases in size and cost have also been accompanied by significant increases in errors and rework after a system has been deployed. Mismatched assumptions among hardware, software, and their interactions often result in system problems that are detected only after the system has been deployed, when rework is much more expensive to complete.
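That growth rate is easy to make concrete: doubling every four years from the mid-1990s through 2020 is six doublings, or roughly a 64-fold increase. The short projection below just works out that arithmetic; the 1996 baseline value is an assumed figure for illustration and does not come from the SEI report.

```python
# Worked arithmetic for "size doubles every four years since the mid-1990s".
# The 1996 baseline of 0.4 million SLOC is an assumed value for illustration only.

baseline_year = 1996
baseline_msloc = 0.4  # assumed starting size, in millions of SLOC

for year in range(baseline_year, 2021, 4):
    doublings = (year - baseline_year) / 4
    projected = baseline_msloc * 2 ** doublings
    print(f"{year}: ~{projected:5.1f} MSLOC  ({int(2 ** doublings)}x the {baseline_year} size)")
```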