The Top 10 Blog Posts of 2023
PUBLISHED IN
Software Engineering Research and Development

Every January on the SEI Blog, we present the 10 most-visited posts of the previous year. This year’s top 10 highlights our work in quantum computing, software modeling, large language models, DevSecOps, and artificial intelligence. The posts, which were published between January 1, 2023, and December 31, 2023, are presented below in reverse order based on the number of visits.
#10 Contextualizing End-User Needs: How to Measure the Trustworthiness of an AI System
by Carrie Gardner, Katherine-Marie Robinson, Carol J. Smith, and Alexandrea Steiner
As potential applications of artificial intelligence (AI) continue to expand, the question remains: will users want the technology and trust it? How can innovators design AI-enabled products, services, and capabilities that are successfully adopted, rather than discarded because the system fails to meet operational requirements, such as end-user confidence? AI’s promise is bound to perceptions of its trustworthiness.
To spotlight a few real-world scenarios, consider:
- How does a software engineer gauge the trustworthiness of automated code generation tools to co-write functional, quality code?
- How does a doctor gauge the trustworthiness of predictive healthcare applications to co-diagnose patient conditions?
- How does a warfighter gauge the trustworthiness of computer-vision enabled threat intelligence to co-detect adversaries?
What happens when users don’t trust these systems? AI’s ability to successfully partner with the software engineer, doctor, or warfighter in these circumstances depends on whether these end users trust the AI system to partner effectively with them and deliver the outcome promised. To build appropriate levels of trust, expectations must be managed for what AI can realistically deliver.
This blog post explores leading research and lessons learned to advance discussion of how to measure the trustworthiness of AI so warfighters and end users in general can realize the promised outcomes.
Read the post in its entirety.
#9 5 Best Practices from Industry for Implementing a Zero Trust Architecture
by Matthew Nicolai, Nathaniel Richmond, and Timothy Morrow
Zero trust (ZT) architecture (ZTA) has the potential to improve an enterprise’s security posture. There is still considerable uncertainty about the ZT transformation process, however, as well as how ZTA will ultimately appear in practice. Recent federal mandates, including Office of Management and Budget (OMB) memoranda M-22-09 and M-21-31, have accelerated the timeline for zero trust adoption in the federal sector, and many private-sector organizations are following suit. In response to these mandates, researchers at the SEI’s CERT Division hosted Zero Trust Industry Days in August 2022 to enable industry stakeholders to share information about implementing ZT.
In this blog post, which we adapted from a white paper, we detail five ZT best practices identified during the two-day event, discuss why they are significant, and provide SEI commentary and analysis on ways to empower your organization’s ZT transformation.
Read the post in its entirety.
#8 The Challenge of Adversarial Machine Learning
by Matt Churilla, Nathan M. VanHoudnos, and Robert W. Beveridge
Imagine riding to work in your self-driving car. As you approach a stop sign, instead of stopping, the car speeds up and goes through the stop sign because it interprets the stop sign as a speed limit sign. How did this happen? Even though the car’s machine learning (ML) system was trained to recognize stop signs, someone added stickers to the stop sign, which fooled the car into thinking it was a 45-mph speed limit sign. This simple act of putting stickers on a stop sign is one example of an adversarial attack on ML systems.
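Attacks like the stop sign example are typically built by computing small, targeted perturbations to a model’s input. As a rough illustration (not the specific method discussed in the post), the sketch below shows the classic fast gradient sign method (FGSM) in PyTorch; the model, image, and label are assumed placeholders:

```python
# A minimal FGSM sketch: nudge each input pixel in the direction that
# increases the model's loss, producing an adversarial example.
# `model`, `image`, and `label` are hypothetical placeholders.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=0.03):
    """Return an adversarially perturbed copy of `image`."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step each pixel by +/- epsilon along the sign of its gradient.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()  # keep a valid pixel range
```

Even a small epsilon can flip a classifier’s prediction while the perturbed image looks unchanged to a human, which is what makes such attacks hard to spot in deployed systems.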
In this SEI Blog post, we examine how ML systems can be subverted and, in this context, explain the concept of adversarial machine learning. We also examine the motivations of adversaries and what researchers are doing to mitigate their attacks. Finally, we introduce a basic taxonomy delineating the ways in which an ML model can be influenced and show how this taxonomy can be used to inform models that are robust against adversarial actions.
Read the post in its entirety.
#7 Play it Again Sam! or How I Learned to Love Large Language Models
by Jay Palat
“AI will not replace you. A person using AI will.”
-Santiago @svpino
In our work as advisors in software and AI engineering, we are often asked about the efficacy of AI code assistant tools, such as Copilot, GhostWriter, or Tabnine, that are based on large language models (LLMs). Recent innovation in the building and curation of LLMs has produced powerful tools for the manipulation of text. By finding patterns in large bodies of text, these models can predict the next word and thereby write sentences and paragraphs of coherent content. The concern surrounding these tools is strong: New York schools have banned the use of ChatGPT, and Stack Overflow and Reddit have banned answers and art generated from LLMs. While many applications are strictly limited to writing text, a few explore the same patterns to work on code as well. The hype surrounding these applications ranges from adoration (“I’ve rebuilt my workflow around these tools”) to fear, uncertainty, and doubt (“LLMs are going to take my job”). In the Communications of the ACM, Matt Welsh goes so far as to declare that we’ve reached “The End of Programming.” While integrated development environments have had code generation and automation tools for years, in this post I will explore what new advancements in AI and LLMs mean for software development.
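To make the “predict the next word” idea concrete, here is a minimal sketch using the open GPT-2 model through the Hugging Face transformers library; GPT-2 is an illustrative stand-in, since the assistants named above rely on far larger proprietary models:

```python
# Minimal next-token-prediction demo with an open model (GPT-2 is a
# small stand-in for the larger models behind commercial assistants).
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# Greedy decoding: repeatedly pick the single most likely next token.
prompt = "def fibonacci(n):"
result = generator(prompt, max_new_tokens=40, do_sample=False)
print(result[0]["generated_text"])
```

The same next-token mechanism underlies both prose and code generation; code assistants differ mainly in training data, scale, and editor integration.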
Read the post in its entirety.
#6 How to Use Docker and NS-3 to Create Realistic Network Simulations
by Alejandro Gomez
Sometimes, researchers and developers need to simulate various types of networks with software because experimenting with real devices would be difficult. For example, some hardware can be hard to get, expensive to set up, or beyond the skills of the team to implement. When the underlying hardware is not a concern, but the essential functions it performs are, software can be a viable alternative.
NS-3 is a mature, open-source networking simulation library with contributions from the Lawrence Livermore National Laboratory, Google Summer of Code, and others. It can simulate many kinds of networks and user-end devices, and its Python-to-C++ bindings make it accessible to many developers.
In some cases, however, it’s not sufficient to simulate a network in isolation. A simulator might need to test how live data behaves in a simulated network (e.g., testing the integrity of User Datagram Protocol (UDP) traffic in a Wi-Fi network, or how 5G data propagates across cell towers and user devices). NS-3 enables such simulations by piping data from tap interfaces (a feature of virtual network devices, provided by the Linux kernel, that passes Ethernet frames to and from user space) into the running simulation.
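As a rough sketch of that mechanism, the classic ns-3 Python bindings expose a TapBridge that attaches kernel tap devices to simulated nodes. Module and attribute names below are assumptions that vary across ns-3 releases, and the tap devices themselves (here tap-left and tap-right) must be created and wired to container network namespaces outside the script:

```python
# Sketch: bridge pre-created Linux tap devices into an ns-3 simulation.
# Module names follow the classic ns-3 Python bindings (an assumption;
# newer releases use a different import style).
import ns.core
import ns.network
import ns.csma
import ns.tap_bridge

# Live traffic requires the real-time scheduler and real checksums.
ns.core.GlobalValue.Bind("SimulatorImplementationType",
                         ns.core.StringValue("ns3::RealtimeSimulatorImpl"))
ns.core.GlobalValue.Bind("ChecksumEnabled", ns.core.BooleanValue(True))

nodes = ns.network.NodeContainer()
nodes.Create(2)

csma = ns.csma.CsmaHelper()
devices = csma.Install(nodes)

# Attach the kernel tap devices to the simulated nodes so Ethernet
# frames flow between user space and the simulation.
tap = ns.tap_bridge.TapBridgeHelper()
tap.SetAttribute("Mode", ns.core.StringValue("UseBridge"))
tap.SetAttribute("DeviceName", ns.core.StringValue("tap-left"))
tap.Install(nodes.Get(0), devices.Get(0))
tap.SetAttribute("DeviceName", ns.core.StringValue("tap-right"))
tap.Install(nodes.Get(1), devices.Get(1))

ns.core.Simulator.Stop(ns.core.Seconds(60.0))
ns.core.Simulator.Run()
ns.core.Simulator.Destroy()
```

In a setup like the one the post describes, each tap device would be moved into a Docker container’s network namespace, so traffic between the two containers traverses the simulated link.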
This blog post presents a tutorial on how you can transmit live data through an NS-3-simulated network with the added advantage of having the data-producing/data-receiving nodes be Docker containers. Finally, we use Docker Compose to automate complex setups and make repeatable simulations in seconds.
Read the post in its entirety.
#5 5 Challenges to Implementing DevSecOps and How to Overcome Them
by Joe Yankel and Hasan Yasar
Historically, software security has been addressed at the project level, emphasizing code scanning, penetration testing, and reactive approaches for incident response. Recently, however, the discussion has shifted to the program level to align security with business objectives. The ideal outcome of such a shift is one in which software development teams act in alignment with business goals, organizational risk, and solution architectures, and these teams understand that security practices are integral to business success. DevSecOps, which builds on DevOps principles and places additional focus on security activities throughout all phases of the software development lifecycle (SDLC), can help organizations realize this ideal state. However, the shift from project- to program-level thinking raises numerous challenges. In our experience, we’ve observed five common challenges to implementing DevSecOps. This SEI Blog post articulates these challenges and provides actions organizations can take to overcome them.
Read the post in its entirety.
#4 Application of Large Language Models (LLMs) in Software Engineering: Overblown Hype or Disruptive Change?
by Ipek Ozkaya, Anita Carleton, John E. Robert, and Douglas Schmidt (Vanderbilt University)
Has the day finally arrived when large language models (LLMs) turn us all into better software engineers? Or are LLMs creating more hype than functionality for software development, and, at the same time, plunging everyone into a world where it is hard to distinguish the perfectly formed, yet sometimes fake and incorrect, code generated by artificial intelligence (AI) programs from verified and well-tested systems?
This blog post, which builds on ideas introduced in the IEEE paper Application of Large Language Models to Software Engineering Tasks: Opportunities, Risks, and Implications by Ipek Ozkaya, focuses on opportunities and cautions for LLMs in software development, the implications of incorporating LLMs into software-reliant systems, and the areas where more research and innovations are needed to advance their use in software engineering.
Read the post in its entirety.
#3 Rust Vulnerability Analysis and Maturity Challenges
by Garret Wassermann and David Svoboda
While the memory safety and security features of the Rust programming language can be effective in many situations, Rust’s compiler is very particular about what constitutes good software design practices. Whenever design assumptions disagree with real-world data and assumptions, there is the possibility of security vulnerabilities, and of malicious software that can take advantage of those vulnerabilities. In this post, we focus on users of Rust programs, rather than Rust developers. We explore some tools for understanding vulnerabilities, whether or not the original source code is available. These tools are important for understanding malicious software, where source code is often unavailable, and we comment on possible directions in which tools and automated code analysis can improve. We also comment on the maturity of the Rust software ecosystem as a whole and how that might impact future security responses, including via the coordinated vulnerability disclosure methods advocated by the SEI’s CERT Coordination Center (CERT/CC). This post is the second in a series exploring the Rust programming language. The first post explored security issues with Rust.
Read the post in its entirety.
#2 Software Modeling: What to Model and Why
by John McGregor and Sholom G. Cohen
Model-based systems engineering (MBSE) environments are intended to support engineering activities of all stakeholders across the envisioning, developing, and sustaining phases of software-intensive products. Models, the machine-manipulable representations and the products of an MBSE environment, support efforts such as the automation of standardized analysis techniques by all stakeholders and the maintenance of a single authoritative source of truth about product information. The model faithfully represents the final product in those attributes of interest to various stakeholders. The result is an overall reduction of development risks.
When initially envisioned, the requirements for a product may seem to represent the right product for the stakeholders. During development, however, the as-designed product comes to reflect an understanding of what is really needed, an understanding superior to the original set of requirements. When it is time to integrate components, whether during an early incremental integration activity or a full product integration, the original set of requirements no longer represents the product and is no longer a valid source of test cases. Many questions arise, such as:
- How do I evaluate the failure of a test?
- How can I evaluate the completeness of a test set?
- How do I track failures and the fixes applied to them?
- How do I know that fixes applied do not break something else?
Such is the case with requirements, and much the same questions apply to a set of models created during development: are they still representative of the implemented product undergoing integration?
One of the goals for robust design is to have an up-to-date single authoritative source of truth in which discipline-specific views of the system are created using the same model elements at each development step. The single authoritative source will often be a collection of requirement, specification, and design submodels within the product model. The resulting model can be used as a valid source of complete and correct verification and validation (V&V) activities. In this post, we examine the questions above and other questions that arise during development and use the answers to describe modeling and analysis activities.
Read the post in its entirety.
#1 Cybersecurity of Quantum Computing: A New Frontier
by Tom Scanlon
Research and development of quantum computers continues to grow at a rapid pace. The U.S. government alone spent more than $800 million on quantum information science (QIS) research in 2022. The promise of quantum computers is substantial: they will be able to solve certain problems that are classically intractable, meaning a conventional computer cannot complete the calculations within human-usable timescales. Given this computational power, there is growing discussion of the cyber threats quantum computers may pose in the future. For instance, Alejandro Mayorkas, Secretary of Homeland Security, has identified the transition to post-quantum encryption as a priority to ensure cyber resilience. There is very little discussion, however, of how we will protect quantum computers in the future. If quantum computers are to become such valuable assets, it is reasonable to project that they will eventually be the target of malicious activity.
I was recently invited to be a participant in the Workshop on Cybersecurity of Quantum Computing, co-sponsored by the National Science Foundation (NSF) and the White House Office of Science and Technology Policy, where we examined the emerging field of cybersecurity for quantum computing. While quantum computers are still nascent in many ways, it is never too early to address looming cybersecurity concerns. This post will explore issues related to creating the discipline of cyber protection of quantum computing and outline six areas of future research in the field of quantum cybersecurity.
Read the post in its entirety.
Looking Ahead in 2024
We publish a new post on the SEI Blog every Monday morning. In the coming months, look for posts highlighting the SEI’s work in artificial intelligence, cybersecurity, and edge computing.
Additional Resources
Download the latest publications from SEI researchers at our digital library.