
The Top 10 Blog Posts of 2022

Every January on the SEI Blog, we present the 10 most-visited posts of the previous year. This year’s list of top 10 posts highlights our work in deepfakes, artificial intelligence, machine learning, DevSecOps, and zero trust. The posts, all published between January 1, 2022, and December 31, 2022, are presented below in reverse order of popularity, counting down to the year’s most-visited post.

#10 Probably Don’t Rely on EPSS Yet
by Jonathan Spring

Vulnerability management involves discovering, analyzing, and handling new or reported security vulnerabilities in information systems. The services provided by vulnerability management systems are essential to both computer and network security. This blog post evaluates the pros and cons of the Exploit Prediction Scoring System (EPSS), which is a data-driven model designed to estimate the probability that software vulnerabilities will be exploited in practice.

The EPSS model was initiated in 2019, following our criticisms of the Common Vulnerability Scoring System (CVSS) in 2018. EPSS was developed in parallel with our own attempt at improving CVSS, the Stakeholder-Specific Vulnerability Categorization (SSVC), whose version 1 also appeared in 2019. This post will focus on EPSS version 2, released in February 2022, and on when it is and is not appropriate to use the model. This latest release has created a lot of excitement around EPSS, especially since improvements to CVSS (version 4) are still being developed. Unfortunately, the applicability of EPSS is much narrower than people might expect. This post provides my advice on how practitioners should and should not use EPSS in its current form.
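For readers who want a hands-on look, FIRST publishes EPSS scores through a public REST API. The following is a minimal sketch of querying it for one CVE, assuming the Python requests package and the API’s documented response shape:

```python
# A minimal sketch of looking up an EPSS score via the FIRST API.
# Assumes `pip install requests`; the CVE ID is just an illustration.
import requests

def epss_score(cve_id: str) -> dict:
    resp = requests.get(
        "https://api.first.org/data/v1/epss",
        params={"cve": cve_id},
        timeout=10,
    )
    resp.raise_for_status()
    records = resp.json()["data"]
    if not records:
        raise ValueError(f"no EPSS record found for {cve_id}")
    record = records[0]
    # `epss` estimates the probability of exploitation in the next 30
    # days; `percentile` ranks that score against all scored CVEs.
    return {"epss": float(record["epss"]),
            "percentile": float(record["percentile"])}

print(epss_score("CVE-2021-44228"))  # Log4Shell, as an example
```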
Read the post in its entirety.

#9 Containerization at the Edge
by Kevin Pitstick and Jacob Ratzlaff

Containerization is a technology that addresses many of the challenges of operating software systems at the edge. Containerization is a virtualization method where an application’s software files (including code, dependencies, and configuration files) are bundled into a package and executed on a host by a container runtime engine. The package is called a container image, which then becomes a container when it is executed. While similar to virtual machines (VMs), containers do not virtualize the operating system kernel (usually Linux) and instead use the host’s kernel. This approach removes some of the resource overhead associated with virtualization, though it makes containers less isolated and portable than virtual machines.
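As a concrete illustration (not drawn from the original post), the sketch below uses Docker’s Python SDK, assuming a local Docker daemon and `pip install docker`, to run a throwaway container and show that it shares the host’s kernel:

```python
# A minimal sketch of running a container with Docker's Python SDK.
# Assumes a local Docker daemon and `pip install docker`.
import docker

client = docker.from_env()  # connect to the local Docker daemon

# Pull the image if needed, run one command in a new container, and
# remove the container when it exits. Because the container uses the
# host's kernel rather than booting a guest OS, startup is fast.
output = client.containers.run("alpine:3.18", "uname -r", remove=True)
print(output.decode().strip())  # prints the host's kernel version
```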

While the concept of containerization has existed since Unix’s chroot system was introduced in 1979, its popularity has surged since Docker was introduced in 2013. Containers are now widely used across all areas of software and are instrumental in many projects’ continuous integration/continuous delivery (CI/CD) pipelines. In this blog post, we discuss the benefits and challenges of using containerization at the edge. This discussion can help software architects analyze tradeoffs while designing software systems for the edge.
Read the post in its entirety.

#8 Tactics and Patterns for Software Robustness
by Rick Kazman

Robustness has traditionally been thought of as the ability of a software-reliant system to keep working, consistent with its specifications, despite the presence of internal failures, faulty inputs, or external stresses, over a long period of time. Robustness, along with other quality attributes, such as security and safety, is a key contributor to our trust that a system will perform in a reliable manner. In addition, the notion of robustness has more recently come to encompass a system’s ability to withstand changes in its stimuli and environment without compromising its essential structure and characteristics. In this latter notion of robustness, systems should be malleable, not brittle, with respect to changes in their stimuli or environments. Robustness, consequently, is a highly important quality attribute to design into a system from its inception because it is unlikely that any nontrivial system could achieve this quality without conscientious and deliberate engineering. In this blog post, which is excerpted and adapted from a recently published technical report, we will explore robustness and introduce tactics and patterns for understanding and achieving robustness.
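To make one such tactic concrete, here is a minimal sketch of a common robustness tactic, retry with exponential backoff; it illustrates the general idea rather than reproducing code from the report, and `fetch_quote` is a hypothetical, deliberately flaky operation:

```python
# An illustrative retry-with-exponential-backoff tactic: one common way
# to keep a system working despite transient faults in a dependency.
import random
import time

def with_retries(operation, max_attempts=4, base_delay=0.5):
    for attempt in range(1, max_attempts + 1):
        try:
            return operation()
        except ConnectionError:
            if attempt == max_attempts:
                raise  # retry budget exhausted; surface the fault
            # Backoff with jitter spreads retries out so a recovering
            # service is not hammered by synchronized callers.
            delay = base_delay * (2 ** (attempt - 1)) * random.uniform(0.5, 1.5)
            time.sleep(delay)

def fetch_quote():
    # Hypothetical operation that fails transiently ~60% of the time.
    if random.random() < 0.6:
        raise ConnectionError("transient failure")
    return "ok"

print(with_retries(fetch_quote))
```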
Read the post in its entirety.
View a podcast on this work.

#7 The Zero Trust Journey: 4 Phases of Implementation
by Timothy Morrow and Matthew Nicolai

Over the past several years, zero trust architecture has emerged as an important topic within the field of cybersecurity. Heightened federal requirements and pandemic-related challenges have accelerated the timeline for zero trust adoption within the federal sector. Private sector organizations are also looking to adopt zero trust to bring their technical infrastructure and processes in line with cybersecurity best practices. Real-world preparation for zero trust, however, has not caught up with existing cybersecurity frameworks and literature. NIST standards have defined the desired outcomes for zero trust transformation, but the implementation process is still relatively undefined. Zero trust cannot be simply implemented through off-the-shelf solutions since it requires a comprehensive shift towards proactive security and continuous monitoring. In this post, we outline the zero trust journey, discussing four phases that organizations should address as they develop and assess their roadmap and associated artifacts against a zero trust maturity model.

Overview of the Zero Trust Journey

As the nation’s first federally funded research and development center with a clear emphasis on cybersecurity, the SEI is uniquely positioned to bridge the gap between NIST standards and real-world implementation. As organizations move away from the perimeter security model, many are experiencing uncertainty in their search for a clear path towards adopting zero trust. Zero trust is an evolving set of cybersecurity paradigms that move defenses from static, network-based perimeters to focus on users, assets, and resources. The CERT Division at the Software Engineering Institute has outlined several steps that organizations can take to implement and maintain zero trust architecture, which uses zero trust principles to plan industrial and enterprise infrastructure and workflows. These steps collectively form the basis of the zero trust journey.
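As a toy illustration of the underlying principle (it is not one of the SEI’s outlined steps, and its attribute names and rules are hypothetical), the sketch below authorizes each request on its own verified attributes rather than on network location:

```python
# A toy policy decision point in the zero trust spirit: every request is
# evaluated on verified attributes; network location alone grants nothing.
from dataclasses import dataclass

@dataclass
class Request:
    user_authenticated: bool   # identity verified for this request (e.g., MFA)
    device_compliant: bool     # device posture evaluated at request time
    recent_mfa: bool           # step-up authentication within a short window
    resource_sensitivity: str  # "low" or "high"
    on_corporate_network: bool # logged for context, never sufficient alone

def authorize(req: Request) -> bool:
    # Identity and device posture are re-verified on every request;
    # being inside the perimeter is deliberately not a factor.
    if not (req.user_authenticated and req.device_compliant):
        return False
    if req.resource_sensitivity == "high":
        return req.recent_mfa  # step-up check for high-value resources
    return True

# On the corporate network but unauthenticated: denied.
print(authorize(Request(False, True, False, "low", True)))   # False
# Fully verified from outside the perimeter: allowed.
print(authorize(Request(True, True, True, "high", False)))   # True
```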
Read the post in its entirety.
View a podcast on this work.

#6 Two Categories of Architecture Patterns for Deployability
by Rick Kazman

Competitive pressures in many domains, as well as development paradigms such as Agile and DevSecOps, have led to the increasingly common practice of continuous delivery or continuous deployment—rapid and frequent changes and updates to software systems. In today’s systems, releases can occur at any time—possibly hundreds of releases per day—and each can be instigated by a different team within an organization. Being able to release frequently means that bug fixes and security patches do not have to wait until the next scheduled release but can be made and released as soon as a bug is discovered and fixed. It also means that new features need not be bundled into a release but can be put into production at any time. In this blog post, excerpted from the fourth edition of Software Architecture in Practice, which I coauthored with Len Bass and Paul Clements, I discuss the quality attribute of deployability and describe two associated categories of architecture patterns: patterns for structuring services and patterns for deploying services.

Continuous deployment is not desirable, or even possible, in all domains. If your software exists in a complex ecosystem with many dependencies, it may not be possible to release just one part of it without coordinating that release with the other parts. In addition, many embedded systems, systems residing in hard-to-access locations, and systems that are not networked would be poor candidates for a continuous deployment mindset.

This post focuses on the large and growing number of systems for which just-in-time feature releases are a significant competitive advantage and just-in-time bug fixes are essential to safety, security, or continuous operation. Often these systems are microservice- and cloud-based, although the techniques described here are not limited to those technologies.
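To give one of these patterns some flavor, below is an illustrative sketch of a canary release router (not an excerpt from the book; the version handlers are hypothetical stand-ins for real services), which shifts a small, adjustable fraction of traffic to a new version so a faulty release is caught before it reaches all users:

```python
# An illustrative canary-release router: a small fraction of requests is
# routed to the new service version while monitoring builds confidence.
import random

def handle_v1(request: str) -> str:
    return f"v1 handled {request}"  # the stable version

def handle_v2(request: str) -> str:
    return f"v2 handled {request}"  # the new, just-released version

class CanaryRouter:
    def __init__(self, canary_fraction: float = 0.05):
        self.canary_fraction = canary_fraction  # start small, e.g., 5%

    def route(self, request: str) -> str:
        if random.random() < self.canary_fraction:
            return handle_v2(request)
        return handle_v1(request)

    def set_fraction(self, fraction: float) -> None:
        # Raise the canary's share as metrics look healthy; set it back
        # to 0.0 for an instant rollback of the release.
        self.canary_fraction = fraction

router = CanaryRouter()
print(router.route("GET /orders/42"))
router.set_fraction(0.5)  # halfway through the rollout
```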
Read the post in its entirety.
View an SEI podcast on this topic.

#5 A Case Study in Applying Digital Engineering
by Nataliya Shevchenko and Peter Capell

A longstanding challenge in large software-reliant systems has been to provide system stakeholders with visibility into the status of systems as they are being developed. Such information is not always easy for senior executives and others in the engineering path to acquire when needed. In this blog post, we present a case study of an SEI project in which digital engineering is being used successfully to provide visibility of products under development, from their inception as a requirement to their delivery on a platform.

One of the standard conventions for communicating about the state of an acquisition program is the program management review (PMR). Due to the accumulation of detail presented in a typical PMR, it can be hard to identify the tasks most urgently in need of intervention. The promise of modern technology, however, is that a computer can augment human capacity to identify counterintuitive aspects of a program, effectively increasing the accuracy and quality of program oversight. Digital engineering is a technology that can

  • increase the visibility of what is most urgent and important
  • identify how changes that are introduced affect a whole system, as well as parts of it
  • enable stakeholders of a system to retrieve timely information about the status of a product moving through the development lifecycle at any point in time

Read the post in its entirety.

#4 A Hitchhiker’s Guide to ML Training Infrastructure
by Jay Palat

Hardware has made a huge impact on the field of machine learning (ML). Many of the ideas we use today were published decades ago, but the cost of running them and the difficulty of acquiring the necessary data made them impractical. Recent advances, including the introduction of graphics processing units (GPUs), are making some of those ideas a reality. In this post we’ll look at some of the hardware factors that impact training artificial intelligence (AI) systems, and we’ll walk through an example ML workflow.

Why is Hardware Important for Machine Learning?

Hardware is a key enabler for machine learning. Sara Hooker, in her 2020 paper “The Hardware Lottery,” details the emergence of deep learning following the introduction of GPUs. Hooker’s paper tells the story of the historical separation of the hardware and software communities and the costs of advancing each field in isolation: many software ideas (especially in ML) were abandoned because of hardware limitations. GPUs enable researchers to overcome many of those limitations because of their effectiveness for ML model training.
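As a small hands-on illustration of the point (assuming PyTorch is installed; this is not code from the original post), the sketch below times the same matrix multiplication, the operation that dominates neural-network training, on the CPU and, when one is present, on a GPU:

```python
# A minimal sketch comparing CPU and GPU on an ML-style workload.
# Assumes PyTorch (`pip install torch`).
import time
import torch

def time_matmul(device: torch.device, n: int = 4096) -> float:
    a = torch.randn(n, n, device=device)
    b = torch.randn(n, n, device=device)
    start = time.perf_counter()
    _ = a @ b
    if device.type == "cuda":
        torch.cuda.synchronize()  # GPU kernels run asynchronously; wait
    return time.perf_counter() - start

print(f"CPU: {time_matmul(torch.device('cpu')):.3f}s")
if torch.cuda.is_available():
    print(f"GPU: {time_matmul(torch.device('cuda')):.3f}s")
else:
    print("no CUDA-capable GPU detected")
```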
Read the post in its entirety.

#3 A Technical DevSecOps Adoption Framework
by Vanessa Jackson and Lyndsi Hughes

DevSecOps practices, including continuous-integration/continuous-delivery (CI/CD) pipelines, enable organizations to respond to security and reliability events quickly and efficiently and to produce resilient and secure software on a predictable schedule and budget. Despite growing evidence and recognition of the efficacy and value of these practices, the initial implementation and ongoing improvement of the methodology can be challenging. This blog post describes our new DevSecOps adoption framework that guides you and your organization in the planning and implementation of a roadmap to functional CI/CD pipeline capabilities. We also provide insight into the nuanced differences between an infrastructure team focused on implementing a DevSecOps paradigm and a software-development team.

A previous post presented our case for the value of CI/CD pipeline capabilities and introduced our framework at a high level, outlining how it helps set priorities during the initial deployment of a development environment capable of executing CI/CD pipelines and leveraging DevSecOps practices.
Read the post in its entirety.

#2 What is Explainable AI?
by Violet Turri

Consider a production line in which workers run heavy, potentially dangerous equipment to manufacture steel tubing. Company executives hire a team of machine learning (ML) practitioners to develop an artificial intelligence (AI) model that can assist the frontline workers in making safe decisions, with the hope that this model will revolutionize their business by improving worker efficiency and safety. After an expensive development process, the company unveils its complex, high-accuracy model to the production line, expecting to see its investment pay off. Instead, it sees extremely limited adoption by the workers. What went wrong?

This hypothetical example, adapted from a real-world case study in McKinsey’s The State of AI in 2020, demonstrates the crucial role that explainability plays in the world of AI. While the model in the example may have been safe and accurate, the target users did not trust the AI system because they didn’t know how it made decisions. End-users deserve to understand the underlying decision-making processes of the systems they are expected to employ, especially in high-stakes situations. Perhaps unsurprisingly, McKinsey found that improving the explainability of systems led to increased technology adoption.

Explainable artificial intelligence (XAI) is a powerful tool for answering critical How? and Why? questions about AI systems, and it can be used to address rising ethical and legal concerns. As a result, AI researchers have identified XAI as a necessary feature of trustworthy AI, and explainability has experienced a recent surge in attention. However, despite the growing interest in XAI research and the demand for explainability across disparate domains, XAI still suffers from a number of limitations. This blog post presents an introduction to the current state of XAI, including the strengths and weaknesses of this practice.
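As one concrete, model-agnostic example of an explainability technique (a sketch on synthetic data using scikit-learn, not drawn from the post itself), permutation importance asks how much a model’s accuracy drops when each input feature is shuffled:

```python
# A sketch of one simple XAI technique: permutation feature importance.
# Assumes scikit-learn; the dataset is synthetic, for illustration only.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=6,
                           n_informative=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure the drop in test accuracy:
# a large drop means the model leans heavily on that feature.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature {i}: importance {importance:.3f}")
```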
Read the post in its entirety.
View an SEI podcast on this topic.

#1 How Easy is it to Make and Detect a Deepfake?
by Catherine A. Bernaciak and Dominic Ross

A deepfake is a media file—image, video, or speech, typically representing a human subject—that has been modified deceptively using deep neural networks (DNNs) to alter a person’s identity. This alteration typically takes the form of a “faceswap,” where the identity of a source subject is transferred onto a destination subject. The destination’s facial expressions and head movements remain the same, but the appearance in the video is that of the source. A recently published report estimated that more than 85,000 harmful deepfake videos had been detected as of December 2020, with the number doubling every six months since observations began in December 2018.

Determining the authenticity of video content can be an urgent priority when a video pertains to national-security concerns. Evolutionary improvements in video-generation methods are enabling relatively low-budget adversaries to use off-the-shelf machine-learning software to generate fake content with increasing scale and realism. The House Intelligence Committee discussed at length the rising risks presented by deepfakes in a public hearing on June 13, 2019. In this blog post, we describe the technology underlying the creation and detection of deepfakes and assess current and future threat levels.
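To make the detection side concrete, here is a scaffold of the common frame-sampling approach; it assumes OpenCV for frame extraction, and `looks_fake` is a hypothetical stand-in for a trained detector model, which this sketch does not include:

```python
# A scaffold for frame-level deepfake screening. OpenCV
# (`pip install opencv-python`) extracts frames; the detector is a
# placeholder, not a real model.
import cv2

def looks_fake(frame) -> float:
    """Placeholder: a real system would run a trained DNN classifier."""
    return 0.0  # hypothetical score in [0, 1]

def screen_video(path: str, every_nth: int = 30) -> float:
    capture = cv2.VideoCapture(path)
    scores, index = [], 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        if index % every_nth == 0:  # sample frames to bound the cost
            scores.append(looks_fake(frame))
        index += 1
    capture.release()
    # Take the max so even a briefly manipulated span raises the score.
    return max(scores) if scores else 0.0

print(screen_video("suspect_clip.mp4"))  # hypothetical file name
```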

The large volume of online video presents an opportunity for the United States government to enhance its situational awareness on a global scale. As of February 2020, Internet users were uploading an average of 500 hours of new video content per minute on YouTube alone. However, the existence of a wide range of video-manipulation tools means that video discovered online can’t always be trusted. What’s more, as the idea of deepfakes has gained visibility in popular media, the press, and social media, a parallel threat has emerged from the so-called liar’s dividend—challenging the authenticity or veracity of legitimate information through a false claim that something is a deepfake even if it isn’t.
Read the post in its entirety.
View the webcast on this work.

Looking Ahead in 2023

We publish a new post on the SEI Blog every Monday morning. In the coming months, look for posts highlighting the SEI’s work in artificial intelligence, digital engineering, and edge computing.

Additional Resources

Download the latest publications from SEI researchers at our digital library.

Get updates on our latest work.

Each week, our researchers write about the latest in software engineering, cybersecurity, and artificial intelligence. Sign up to get the latest post sent to your inbox the day it's published.
