
The Latest Work from the SEI: DevSecOps, Artificial Intelligence, and Cybersecurity Maturity Model Certification

As part of an ongoing effort to keep you informed about our latest work, this blog post summarizes some recently published SEI reports, podcasts, conference papers, and webcasts highlighting our work in DevSecOps, cybercrime and secure elections, software architecture, trustworthy artificial intelligence, and Cybersecurity Maturity Model Certification (CMMC). We have also included a webcast of a recent discussion on Department of Defense (DoD) software advances and future SEI work.

These publications highlight the latest work of SEI technologists in these areas. For each publication, this post includes the title, the author(s), and a link to where it can be accessed on the SEI website.

A Discussion on DoD Software Advances and What's Next from SEI
by Tom Longstaff and Jeff Boleng

In this webcast, SEI Chief Technology Officer Tom Longstaff interviewed Jeff Boleng, a senior advisor to the U.S. Department of Defense, on recent DoD software advances and accomplishments. They discussed how the DoD is implementing recommendations from the Defense Science Board and the Defense Innovation Board on continuous development of best practices for software, source selection for evaluating software factories, risk reduction and metrics for new programs, developing workforce competency, and other advancements. Boleng and Longstaff also discussed how the SEI, the DoD's research and development center for software engineering, will adapt and build on this work to accomplish major changes at the DoD.
View the webcast.

Guide to Implementing DevSecOps for a System of Systems in Highly Regulated Environments
by Jose A. Morales, Richard Turner, Suzanne Miller, Peter Capell, Patrick R. Place, David James Shepard

DevSecOps (DSO) is an approach that integrates development (Dev), security (Sec), and delivery/operations (Ops) of software systems to reduce the time from need to capability and provide continuous integration and continuous delivery (CI/CD) with high software quality. The rapid acceptance and demonstrated effectiveness of DSO in software system development have led to proposals for its adoption in more complex projects. This document provides guidance to projects interested in implementing DSO in defense or other highly regulated environments, including those involving systems of systems.

The report provides rationale for adopting DSO and the dimensions of change required for that adoption. It introduces DSO, its principles, operations, and expected benefits. It describes objectives and activities needed to implement the DSO ecosystem, including preparation, establishment, and management. Preparation is necessary to create achievable goals and expectations and to establish feasible increments for building the ecosystem. Establishing the ecosystem includes evolving the culture, automation, processes, and system architecture from their initial state toward an initial capability. Managing the ecosystem includes measuring and monitoring both the health of the ecosystem and the performance of the organization. Additional information on the conceptual foundations of the DSO approach is also provided.
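To make the CI/CD-with-security-gates idea concrete, here is a minimal sketch, not taken from the report, of a pipeline in which every change must pass build, test, and security stages before it is delivered. The stage names and checks are purely illustrative.

```python
# Hypothetical sketch of a DevSecOps-style pipeline: each change flows through
# build, test, and security gates in order, and delivery is blocked at the
# first failing gate. Stage names and checks are illustrative only.

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Stage:
    name: str
    run: Callable[[], bool]  # returns True if the gate passes


def run_pipeline(stages: List[Stage]) -> bool:
    """Run stages in order; stop at the first failing gate."""
    for stage in stages:
        passed = stage.run()
        print(f"{stage.name}: {'pass' if passed else 'FAIL'}")
        if not passed:
            return False
    return True


if __name__ == "__main__":
    pipeline = [
        Stage("build", lambda: True),                 # compile/package the change
        Stage("unit-tests", lambda: True),            # automated functional checks
        Stage("static-security-scan", lambda: True),  # Sec built into the flow
        Stage("deploy-to-staging", lambda: True),     # Ops side of the pipeline
    ]
    print("delivered" if run_pipeline(pipeline) else "blocked")
```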
Download the report.

The Future of Cyber: Cybercrime
by David Hickton and Bobbie Stempfley

The culture of computers and information technology evolves quickly. In this environment, how can we build a culture of security through regulations and best practices when technology can move so much faster than legislative bodies? The Future of Cyber Podcast Series explores whether we can use the innovations of the past to address the problems of the future. In this podcast, David Hickton, founding director of the University of Pittsburgh Institute for Cyber Law, Policy, and Security, sits down with Bobbie Stempfley, director of the SEI's CERT Division, to talk about the future of cybercrime and secure elections.
View or listen to the podcast.

Hitting the Ground Running: Reviewing the 17 CMMC Level 1 Practices
by Matthew Trevors and Gavin Jurecko

In this webcast, CMMC architects Gavin Jurecko and Matt Trevors provide insight into how to assess your organization's readiness to meet the practice requirements of CMMC Level 1.
View the webcast.

CMMC: Securing the DIB Supply Chain with the Cybersecurity Maturity Model Certification Process
by the Software Engineering Institute

This document explains the concept of process maturity, how it applies to cybersecurity, and the steps an organization can take to navigate the five CMMC levels of process maturity.

Process maturity represents an organization's ability to institutionalize, or embed, its processes. Measuring cybersecurity process maturity indicates how well a company has ingrained practices and processes in the way it defines, executes, and manages work. This improves an organization's ability to both prevent and respond to a cyberattack.
Download the fact sheet.

Designing Trustworthy AI
by Carol Smith

As a senior research scientist in human-machine interaction at the SEI's Emerging Technology Center, Carol Smith works to understand how humans and machines can collaborate more effectively to solve important problems, what our responsibilities are in that work, and how those responsibilities continue once AI systems are operational. In this podcast, Smith discusses a framework that builds on the importance of diverse teams and ethical standards to ensure that AI systems are trustworthy and can effectively augment warfighters in the Department of Defense.
View or listen to the podcast.

Integrability
by Rick Kazman, Philip Bianco, James Ivers, and John Klein

This report summarizes how to systematically analyze a software architecture with respect to a quality attribute requirement for integrability. The report introduces integrability and common forms of integrability requirements for software architecture. It provides a set of definitions, core concepts, and a framework for reasoning about integrability and satisfaction (or not) of integrability requirements by an architecture and, eventually, a system. It describes a set of mechanisms, such as patterns and tactics, that are commonly used to satisfy integrability requirements. It also provides a method by which an analyst can determine whether an architecture documentation package provides enough information to support analysis and, if so, to determine whether the architectural decisions made contain serious risks relative to integrability requirements. An analyst can use this method to determine whether those requirements, represented as a set of scenarios, have been sufficiently well specified to support the needs of analysis. The reasoning around this quality attribute should allow an analyst, armed with appropriate architectural documentation, to assess the risks inherent in today's architectural decisions, in light of tomorrow's anticipated needs.
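As a rough illustration of the kind of integration mechanism the report surveys, the sketch below shows an adapter, a common tactic for satisfying integrability requirements, that wraps a component with a mismatched interface behind the interface the rest of the system expects. The classes and names are hypothetical and are not drawn from the report.

```python
# Hypothetical adapter example: the consuming system is written against one
# interface, a vendor component exposes another, and an adapter isolates the
# mismatch so swapping the component has a limited ripple effect.

from abc import ABC, abstractmethod


class TemperatureSource(ABC):
    """Interface the consuming system is written against (degrees Celsius)."""

    @abstractmethod
    def read_celsius(self) -> float: ...


class VendorSensor:
    """Imagined third-party component with a mismatched interface (Fahrenheit)."""

    def sample_f(self) -> float:
        return 72.5


class VendorSensorAdapter(TemperatureSource):
    """Adapter: converts the vendor's interface to the one the system expects."""

    def __init__(self, sensor: VendorSensor) -> None:
        self._sensor = sensor

    def read_celsius(self) -> float:
        return (self._sensor.sample_f() - 32.0) * 5.0 / 9.0


if __name__ == "__main__":
    source: TemperatureSource = VendorSensorAdapter(VendorSensor())
    print(f"{source.read_celsius():.1f} C")
```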
Download the report.

Component Mismatches Are a Critical Bottleneck to Fielding AI-Enabled Systems in the Public Sector
by Grace Lewis, Stephany Bellomo, and April Galyardt

The use of machine learning or artificial intelligence (ML/AI) holds substantial potential for improving many functions and needs of the public sector. In practice, however, integrating ML/AI components into public sector applications is severely limited not only by the fragility of these components and their algorithms, but also by mismatches between components of ML-enabled systems. For example, if an ML model is trained on data that differs from the data in the operational environment, field performance of the ML component will be dramatically reduced. Separate from software engineering considerations, the expertise needed to field an ML/AI component within a system frequently comes from outside software engineering. As a result, the assumptions and even the descriptive language used by practitioners from these different disciplines can exacerbate other challenges to integrating ML/AI components into larger systems. We are investigating classes of mismatches in ML/AI systems integration to identify the implicit assumptions made by practitioners in different fields (data scientists, software engineers, operations staff) and to find ways to communicate the appropriate information explicitly. We discuss several categories of mismatch and provide examples from each class. To enable ML/AI components to be fielded in a meaningful way, we will need to understand the mismatches that exist and develop practices to mitigate their impacts.
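The training-versus-operational data example above can be illustrated with a small sketch: a drift check that compares a feature's distribution in the training data with what the fielded system is actually seeing and raises a flag before model performance quietly degrades. The check, threshold, and data below are assumptions made for illustration and are not drawn from the paper.

```python
# Hypothetical drift check: flag a mismatch if the operational (live) mean of a
# feature has shifted by more than N training standard deviations from the
# training mean. Data and threshold are made up for illustration.

import random
import statistics


def drift_alert(train_values, live_values, max_shift_in_stdevs=1.0):
    """Return (drifted?, shift) where shift is measured in training stdevs."""
    train_mean = statistics.mean(train_values)
    train_stdev = statistics.stdev(train_values)
    live_mean = statistics.mean(live_values)
    shift = abs(live_mean - train_mean) / train_stdev
    return shift > max_shift_in_stdevs, shift


if __name__ == "__main__":
    random.seed(0)
    train = [random.gauss(50.0, 5.0) for _ in range(1000)]  # data the model saw
    live = [random.gauss(62.0, 5.0) for _ in range(1000)]   # data in operation
    drifted, shift = drift_alert(train, live)
    print(f"shift = {shift:.2f} training stdevs -> {'MISMATCH' if drifted else 'ok'}")
```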
Download the conference paper.

Additional Resources

View the latest SEI research in the SEI Digital Library.
View the latest installments in the SEI Podcast Series.
View the latest installments in the SEI Webinar Series.

Get updates on our latest work.

Each week, our researchers write about the latest in software engineering, cybersecurity, and artificial intelligence. Sign up to get the latest post sent to your inbox the day it's published.
