Shift left is a familiar exhortation to teams and organizations engaged in Agile and Lean software development. It most commonly refers to incorporating test practices and an overall test sensibility early in the software development process (although it may also be applied in a DevOps context to the need to pull forward operations practices). Shift left sounds reasonably straightforward: just take the tasks that are on the right-hand side of your timeline and pull them forward (i.e., shift them to the left). As this post describes, however, there are some subtleties and qualifications you should consider in order to realize the benefits of a shift-left strategy. These include targeting test goals, shifting the test mindset left, and adopting shift-left enabling practices both at the small-team level and at scale.
Software-enabled capabilities are essential for our nation's defense systems. I recently served on a Defense Science Board (DSB) Task Force whose purpose was to determine whether iterative development practices such as Agile are applicable to the development and sustainment of software for the Department of Defense (DoD).
The resulting report, Design and Acquisition of Software for Defense Systems, made seven recommendations on how to improve software acquisition in defense systems:
- A key evaluation criterion in the source selection process should be the efficacy of the offeror's software factory.
- The DoD and its defense-industrial-base partners should adopt continuous iterative development best practices for software, including through sustainment.
- Implement risk reduction and metrics best practices in new programs and formal program acquisition strategies.
- For ongoing development programs, task program managers and program executive officers with creating plans to transition to a software factory and continuous iterative development.
- Over the next two years, acquisition commands in the various branches of the military need to develop workforce competency and a deep familiarity with current software development techniques.
- Starting immediately, the USD(R&E) should direct that requests for proposals (RFPs) for acquisition programs entering risk reduction and full development should specify the basic elements of the software framework supporting the software factory.
- Develop best practices and methodologies for the independent verification and validation (V&V) for machine learning (ML).
All seven recommendations are important for the DoD. The U.S. Congress therefore mandated, in section 868 of the 2019 National Defense Authorization Act (NDAA), that the DoD implement them. Specifically, the 2019 NDAA states, "Not later than 18 months after the date of the enactment of this Act, the Secretary of Defense shall . . . commence implementation of each recommendation submitted as part of the final report of the Defense Science Board Task Force on the Design and Acquisition of Software for Defense Systems."
As this post details, while some of the recommendations are particular to DoD's practices, two of them (the modern software factory and V&V for ML) stand out in their importance for software engineering research.
Today, organizations build applications on top of existing platforms, frameworks, components, and tools; no one constructs software from scratch. Hence today's software development paradigm challenges developers to build trusted systems that include increasing numbers of largely untrusted components. Bad decisions are easy to make and have significant long-term consequences. For example, decisions based on outdated knowledge or documentation, or skewed toward one criterion (such as performance), may lead to substantial quality problems, security risks, and technical debt over the life of the project. But there is typically a tradeoff between decision-making speed and confidence. Confidence increases with more experience, testing, analysis, and prototyping, but these are all costly and time consuming. In this blog post, I describe research that aims to increase both speed and confidence by applying existing automated-analysis techniques and tools (e.g., code and software-project repository analyses) and mapping the extracted information to common quality indicators from DoD projects.
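To make the idea concrete, here is a minimal sketch of mapping repository-extracted facts to quality indicators. It is illustrative only: the function, field names, indicator labels, and the staleness threshold are assumptions for this example, not the project's actual tooling.

```python
from datetime import datetime, timezone

# Hypothetical threshold: a component is "stale" if its last release
# is more than a year old. Real analyses would calibrate this.
STALE_DAYS = 365

def quality_indicators(components):
    """Map facts extracted from a repository scan (last release date,
    open CVE count) to simple, named quality indicators."""
    indicators = []
    now = datetime.now(timezone.utc)
    for c in components:
        age_days = (now - c["last_release"]).days
        if age_days > STALE_DAYS:
            indicators.append((c["name"], "outdated-dependency"))
        if c.get("open_cves", 0) > 0:
            indicators.append((c["name"], "known-vulnerabilities"))
    return indicators
```

The point of such a mapping is speed: the extraction is automated, so a developer gets indicator-level feedback without manually prototyping or auditing each component.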
Runtime assurance (RA) has emerged as a promising technique for ensuring the safe behavior of autonomous systems (such as drones or self-driving vehicles) whose behavior cannot be fully determined at design time. The Department of Defense (DoD) is increasingly turning to complex, non-deterministic systems that incorporate machine learning techniques, and with rising software complexity, assuring software correctness has become a major challenge, especially in uncertain and contested environments. This post highlights work by a team of SEI researchers to create tools and techniques that will ensure the safety of distributed cyber-physical systems.
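As a rough illustration of the RA idea (a Simplex-style sketch, not the SEI team's actual tools; the state fields and safety envelope below are hypothetical), a runtime monitor can check each command from an untrusted controller against a safety envelope and substitute a verified fallback when the command would violate it:

```python
# Hypothetical safety envelope: never let predicted speed exceed this.
MAX_SPEED = 10.0

def safe_controller(state):
    """Simple, verifiable fallback: command zero acceleration."""
    return 0.0

def monitor_step(state, complex_controller):
    """One RA cycle: ask the untrusted (e.g., learned) controller for a
    command, predict its effect, and fall back if the prediction would
    leave the safety envelope."""
    cmd = complex_controller(state)
    predicted_speed = state["speed"] + cmd  # trivial one-step model
    if abs(predicted_speed) > MAX_SPEED:
        return safe_controller(state)
    return cmd
```

The key design point is that only the monitor and the fallback need full verification; the complex controller can remain non-deterministic or learned.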
Citing the need to provide a technical advantage to the warfighter, the Department of Defense (DoD) has recently made the adoption of cloud computing technologies a priority. Infrastructure as code (IaC), the process and technology of managing and provisioning computers and networks (physical and/or virtual) through scripts, is a key enabler for efficient migration of legacy systems to the cloud. This blog post details research aimed at developing technology that helps software sustainment organizations automatically recover a deployment model from a running system and create the needed IaC artifacts from that model, with minimal manual intervention and no specialized knowledge about the design of the deployed system.
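To give a flavor of the generate-from-model half of that idea (a hypothetical sketch; the model schema and the choice of a shell provisioning script are assumptions for illustration, not the project's design), one can emit an IaC artifact from a deployment model observed on a running system:

```python
def emit_iac(model):
    """Turn an observed deployment model (installed packages, enabled
    services) into a shell provisioning script, one kind of IaC artifact."""
    lines = ["#!/bin/sh", "set -e"]
    for pkg in model["packages"]:
        lines.append(f"apt-get install -y {pkg}")
    for svc in model["services"]:
        lines.append(f"systemctl enable --now {svc}")
    return "\n".join(lines)
```

The research challenge is the other half: recovering an accurate model like this from a running system in the first place.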
Each year since the blog's inception, we have presented the 10 most-visited posts of the year in descending order, ending with the most popular post. In this blog post, we present the 10 most popular posts published between January 1, 2017, and December 31, 2017.
As part of an ongoing effort to keep you informed about our latest work, this blog post summarizes some recently published SEI reports, podcasts, and presentations highlighting our work in cyber warfare, emerging technologies and their risks, domain name system blocking to disrupt malware, best practices in network border protection, robotics, technical debt, and insider threat and workplace violence. These publications highlight the latest work of SEI technologists in these areas. This post includes a listing of each publication, author(s), and links where they can be accessed on the SEI website.
As computers become more powerful and ubiquitous, software and software-based systems are increasingly relied on for business, governmental, and even personal tasks. While many of these devices and apps simply increase the convenience of our lives, some, known as critical systems, perform business- or life-preserving functionality. As they become more prevalent, securing critical systems from accidental and malicious threats has become both more important and more difficult. In addition to classic safety problems, such as ensuring hardware reliability, protection from natural phenomena, etc., modern critical systems are so interconnected that security threats from malicious adversaries must also be considered. This blog post is adapted from a new paper two colleagues (Eugene Vasserman and John Hatcliff, both at Kansas State University) and I wrote that proposes a theoretical basis for simultaneously analyzing both the safety and security of a critical system.
As part of an ongoing effort to keep you informed about our latest work, this blog post summarizes some recently published SEI reports, podcasts, and webinars highlighting our work in coordinated vulnerability disclosure, scaling Agile methods, automated testing in Agile environments, ransomware, and Android app analysis. These publications highlight the latest work of SEI technologists in these areas. One SEI Special Report presents data related to DoD software projects and translates it into information that is frequently sought-after across the DoD. This post includes a listing of each publication, author(s), and links where they can be accessed on the SEI website.
As part of an ongoing effort to keep you informed about our latest work, this blog post summarizes some recently published SEI technical reports, white papers, podcasts and webinars on supply chain risk management, process improvement, network situational awareness, software architecture, network time protocol as well as a podcast interview with SEI Fellow Peter Feiler. These publications highlight the latest work of SEI technologists in these areas. This post includes a listing of each publication, author(s), and links where they can be accessed on the SEI website.
Since its debut on Jeopardy in 2011, IBM's Watson has generated a lot of interest in potential applications across many industries. I recently led a research team investigating whether the Department of Defense (DoD) could use Watson to improve software assurance and help acquisition professionals assemble and review relevant evidence from documents. As this blog post describes, our work examined whether typical developers could build an IBM Watson application to support an assurance review.
Over the past six months, we have developed new security-focused modeling tools that capture vulnerabilities and their propagation paths in an architecture. Recent reports (such as the remote attack surface analysis of automotive systems) show that security is no longer only a matter of code but is tightly related to the software architecture. These new tools are our contribution toward improving system and software analysis. We hope they will advance other work on security modeling and analysis and be useful to security researchers and analysts. This post explains the motivation for our work, the available tools, and how to use them.
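To give a flavor of propagation-path analysis (an illustrative sketch under assumed encodings, not the released tools), an architecture can be modeled as a directed graph of component connections, and the set of components a vulnerability could reach from a compromised entry point computed with a breadth-first search:

```python
from collections import deque

def propagation_reach(connections, entry):
    """connections: dict mapping a component to the downstream
    components it can influence. Returns every component reachable
    from a compromised entry point."""
    reached = {entry}
    frontier = deque([entry])
    while frontier:
        comp = frontier.popleft()
        for nxt in connections.get(comp, []):
            if nxt not in reached:
                reached.add(nxt)
                frontier.append(nxt)
    return reached
```

For example, in an automotive-style model where a radio unit talks to an ECU that talks to the brakes, compromising the radio reaches all three components, which is exactly the kind of architectural insight code-level analysis alone misses.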