Software Engineering Institute | Carnegie Mellon University

SEI Blog

The Latest Research in Software Engineering and Cybersecurity

Shift left is a familiar exhortation to teams and organizations engaged in Agile and Lean software development. It most commonly refers to incorporating test practices and an overall test sensibility early in the software development process (although it may also be applied in a DevOps context to the need to pull forward operations practices). Shift left sounds reasonably straightforward: just take the tasks that are on the right-hand side of your timeline and pull them forward (i.e., shift them to the left). As this post describes, however, there are some subtleties and qualifications you should consider to realize the benefits of a shift-left strategy. These include targeting test goals, shifting the test mindset left, and adopting shift-left enabling practices both at the small-team level and at scale.

It is widely recognized that the Department of Defense (DoD) needs to have a nimble response to nimble adversaries. However, the inflexibility of many DoD development and acquisition practices begets inflexible architectures that often slow progress and increase risk to operational forces. This resistance to modern development methods actually increases program risk and extends development timelines, effectively reducing the value of the DoD's acquisition portfolio. As a result, the current lack of capacity for breadth and pace of change impedes our ability to evolve capability quickly and robustly enough to meet new requirements in emerging technical and warfighting environments. The push to deliver artificial intelligence (AI)-based capabilities (and respond effectively to AI-based threats) increases both the urgency and importance of addressing these challenges.

In December, a grand jury indicted members of the APT10 group for a tactical campaign known as Operation Cloud Hopper, a global series of sustained attacks against managed service providers and, subsequently, their clients. These attacks aimed to gain access to sensitive intellectual property and customer data. US-CERT noted that a defining characteristic of Operation Cloud Hopper was that, upon gaining access to a cloud service provider (CSP), the attackers used the cloud infrastructure to hop from one target to another, gaining access to sensitive data in a wide range of government and industrial entities in healthcare, manufacturing, finance, and biotech in at least a dozen countries. In this blog post, part of an ongoing series of posts on cloud security, I explore the tactics used in Operation Cloud Hopper and whether the attacks could have been prevented by applying the best practices that we outlined for helping organizations keep their data and applications secure in the cloud.

Software-enabled capabilities are essential for our nation's defense systems. I recently served on a Defense Science Board (DSB) Task Force whose purpose was to determine whether iterative development practices such as Agile are applicable to the development and sustainment of software for the Department of Defense (DoD).

The resulting report, Design and Acquisition of Software for Defense Systems, made seven recommendations on how to improve software acquisition in defense systems:

  1. A key evaluation criterion in the source selection process should be the efficacy of the offeror's software factory.
  2. The DoD and its defense-industrial-base partners should adopt continuous iterative development best practices for software, including through sustainment.
  3. Implement risk reduction and metrics best practices in new program and formal program acquisition strategies.
  4. For ongoing development programs, task program managers and program executive officers with creating plans to transition to a software factory and continuous iterative development.
  5. Over the next two years, acquisition commands in the various branches of the military need to develop workforce competency and a deep familiarity with current software development techniques.
  6. Starting immediately, the USD(R&E) should direct that requests for proposals (RFPs) for acquisition programs entering risk reduction and full development should specify the basic elements of the software framework supporting the software factory.
  7. Develop best practices and methodologies for the independent verification and validation (V&V) for machine learning (ML).

All seven recommendations are important for the DoD. The U.S. Congress therefore mandated, in section 868 of the 2019 National Defense Authorization Act (NDAA), that the DoD implement them. Specifically, the 2019 NDAA states, "Not later than 18 months after the date of the enactment of this Act, the Secretary of Defense shall . . . commence implementation of each recommendation submitted as part of the final report of the Defense Science Board Task Force on the Design and Acquisition of Software for Defense Systems."

As this post details, while some of the recommendations are particular to the DoD's practices, two of them, the modern software factory and V&V for ML, stand out in their importance for software engineering research.

Systems engineers working today face many challenges, both in building the complex systems of systems of the future and in building the complex systems of which they are composed. Systems engineers need to be able to design around stable requirements when long-lead manufactured items are required, and they also need to evolve the design along with changing requirements for larger systems. Software plays an integral role in helping systems engineers accomplish these goals.

The importance of software engineering to systems engineering, and vice versa, cannot be overstated. As I stated in an earlier blog post:

Systems engineers are responsible for ensuring that every part of the system works with every other, and because software represents the intelligence of human-made systems, it is especially important to the integration of the system. Yet many chief systems engineers and program managers have had more experience in mechanical or electrical engineering than in software or software engineering.

In this post, I examine whether the ways in which systems engineers talk about software have changed over the past two decades by examining INCOSE's Systems Engineering journal. The reason for this inquiry is to understand whether systems engineers have come to accept that software is a huge and growing fraction of the systems they engineer, and to understand how to partner with them on programs and in acquisition efforts.

The Internet of Things (IoT) is insecure. The Jeep hack received a lot of publicity, and there are various ways to hack ATMs, with incidents occurring with increasing regularity. Printers in secure facilities have been used to exfiltrate data from the systems to which they were connected, and even a thermometer in a casino's fish tank was used to gain access to the casino's infrastructure and extract data about its customers. In this blog post, I describe how the SEI CERT Coding Standards work and how they can reduce risk in Internet-connected systems. This is the first installment in a two-part series; in Part 2, I will describe how to use static analysis to enforce the CERT coding rules.

Addressing cybersecurity for a complex system, especially for a cyber-physical system of systems (CPSoS), requires a strategic approach during the entire lifecycle of the system. Examples of CPSoS include rail transport systems, power plants, and integrated air-defense capability. All these systems consist of large physical, cyber-physical, and cyber-only subsystems with complex dynamics. In the first blog post in this series, I summarized 12 available threat-modeling methods (TMMs). In this post, I will identify criteria for choosing and evaluating a threat-modeling method (TMM) for a CPSoS.

In 2017 and 2018, the United States witnessed milestone years of climate and weather-related disasters, from droughts and wildfires to cyclones and hurricanes. Increasingly, satellites are playing an important role in helping emergency responders assess the damage of a weather event and find victims in its aftermath. Most recently, satellites have tracked the devastation wrought by the California wildfires from space. The United States military, which is often the first on the scene of a natural disaster, is increasingly interested in the use of deep learning to automate the identification of victims and structures in satellite imagery to assist with humanitarian assistance and disaster relief (HADR) efforts.