
The Latest Work from the SEI: Counter AI, Coordinated Vulnerability Disclosure, and Artificial Intelligence Engineering


As part of an ongoing effort to keep you informed about our latest work, this blog post summarizes some recent publications from the SEI in the areas of counter artificial intelligence (AI), coordinated vulnerability disclosure for machine learning (ML) and AI, secure development, cybersecurity, and artificial intelligence engineering.

These publications highlight the latest work from SEI technologists in these areas. This post includes a listing of each publication, authors, and links where they can be accessed on the SEI website.

Counter AI: What Is It and What Can You Do About It?
by Nathan M. VanHoudnos, Carol J. Smith, Matt Churilla, Shing-hon Lau, Lauren McIlvenny, and Greg Touhill

As the strategic importance of AI increases, so too does the importance of defending those AI systems. To understand AI defense, it is necessary to understand AI offense—that is, counter AI. This paper describes counter AI. First, we describe the technologies that compose AI systems (the AI Stack) and how those systems are built in a machine learning operations (MLOps) lifecycle. Second, we describe three kinds of counter-AI attacks across the AI Stack and five threat models detailing when those attacks occur within the MLOps lifecycle.

Finally, based on Software Engineering Institute research and practice in counter AI, we give two recommendations. In the long term, the field should invest in AI engineering research that fosters processes, procedures, and mechanisms that reduce the vulnerabilities and weaknesses being introduced into AI systems. In the near term, the field should develop the processes necessary to efficiently respond to and mitigate counter-AI attacks, such as building an AI Security Incident Response Team and extending existing cybersecurity processes like the Computer Security Incident Response Team Services Framework.
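To make the idea of a counter-AI attack concrete, the sketch below shows one well-known evasion technique, the fast gradient sign method (FGSM), in which small input perturbations flip a model's predictions. This is an illustrative example of the attack class the paper surveys, not code or a technique drawn from the paper itself; the model and data are placeholders.

```python
# Illustrative sketch only: an FGSM-style evasion attack, one common example of
# the broader class of counter-AI attacks. Model and data are placeholders.
import torch
import torch.nn as nn

def fgsm_perturb(model, x, y, epsilon=0.03):
    """Return an adversarially perturbed copy of the input batch x."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step each input feature in the direction that increases the loss,
    # bounded by epsilon, and keep values in the valid [0, 1] range.
    return (x_adv + epsilon * x_adv.grad.sign()).clamp(0.0, 1.0).detach()

# Hypothetical usage with a toy classifier and random "image" data.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
x = torch.rand(4, 1, 28, 28)          # placeholder inputs in [0, 1]
y = torch.randint(0, 10, (4,))        # placeholder labels
x_adv = fgsm_perturb(model, x, y)
print((x_adv - x).abs().max())        # perturbation stays within epsilon
```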
Read the SEI white paper.

Lessons Learned in Coordinated Disclosure for Artificial Intelligence and Machine Learning Systems
by Allen D. Householder, Vijay S. Sarvepalli, Jeff Havrilla, Matt Churilla, Lena Pons, Shing-hon Lau, Nathan M. VanHoudnos, Andrew Kompanek, and Lauren McIlvenny

In this paper, SEI researchers incorporate several lessons learned from the coordination of artificial intelligence (AI) and machine learning (ML) vulnerabilities at the SEI’s CERT Coordination Center (CERT/CC). They also include their observations of public discussions of AI vulnerability coordination cases.

Risk management within the context of AI systems is a rapidly evolving and substantial space. Even when restricted to cybersecurity risk management, AI systems require comprehensive security, such as what the National Institute of Standards and Technology (NIST) describes in The NIST Cybersecurity Framework (CSF).

In this paper, the authors focus on one part of cybersecurity risk management for AI systems: the CERT/CC’s lessons learned from applying the Coordinated Vulnerability Disclosure (CVD) process to reported “vulnerabilities” in AI and ML systems.
Read the SEI white paper.

On the Design, Development, and Testing of Modern APIs
by Alejandro Gomez and Alex Vesey

Application programming interfaces (APIs) are a fundamental component of modern software applications; thus, nearly all software engineers are designers or consumers of APIs. From assembly instruction labels that provide reusable code to the web-based APIs of today, APIs enable powerful abstractions by making a system's operations available to users while hiding the details of how those operations are implemented, which enhances flexibility of implementation and facilitates updates.

APIs provide access to complicated functionality within large codebases worked on by dozens if not hundreds of people, often rotating in and out of projects while simultaneously dealing with changing requirements in an increasingly adversarial environment. Under these conditions, an API must continue to behave as expected; otherwise, calling applications inherit the unintended behavior the API system provides. As systems grow in complexity and size, the need for clear, concise, and usable APIs will remain.
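As a simple illustration of the abstraction idea described above (not an example taken from the paper), the sketch below shows a small API surface whose public method callers depend on, while the implementation behind it remains free to change. All names here are hypothetical.

```python
# Illustrative sketch: a minimal API that exposes an operation while hiding
# its implementation, so the implementation can evolve without breaking callers.
class TemperatureService:
    """Public API: callers depend only on this method's signature and contract."""

    def current_celsius(self, city: str) -> float:
        # Delegates to a private helper; the contract to callers stays the same.
        return self._read_sensor(city)

    # Implementation detail: swapping this for a cache or a remote call does
    # not affect code written against current_celsius().
    def _read_sensor(self, city: str) -> float:
        return {"Pittsburgh": 21.5}.get(city, 20.0)

print(TemperatureService().current_celsius("Pittsburgh"))
```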

In this context, this white paper addresses the following questions concerning APIs:

  • What is an API?
  • What factors drive API design?
  • What qualities do good APIs exhibit?
  • What specific socio-technical aspects of DevSecOps apply to the development, security, and operational support of APIs?
  • How are APIs tested from the perspective of systems and software security patterns?
  • What cybersecurity and other best practices apply to APIs?

Read the white paper.

Embracing AI: Unlocking Scalability and Transformation Through Generative Text, Imagery, and Synthetic Audio
by Tyler Brooks, Shannon Gallagher, and Dominic A. Ross

The potential of generative artificial intelligence (AI) extends well beyond automation of existing processes, making “digital transformation” a possibility for a rapidly growing set of applications. In this webcast, Tyler Brooks, Shannon Gallagher, and Dominic Ross aim to demystify AI and illustrate its transformative power in achieving scalability, adapting to changing landscapes, and driving digital innovation. The speakers explore practical applications of generative text, imagery, and synthetic audio, with an emphasis on showcasing how these technologies can revolutionize many kinds of workflows.

What attendees will learn:

  • Practical applications of generative text, imagery, and synthetic audio
  • Impact on the scalability of educational content delivery
  • How synthetic audio is transforming AI education

View the webcast.

Evaluating Large Language Models for Cybersecurity Tasks: Challenges and Best Practices
by Jeff Gennari and Samuel J. Perl

How can we effectively use large language models (LLMs) for cybersecurity tasks? In this podcast, Jeff Gennari and Sam Perl discuss applications for LLMs in cybersecurity, potential challenges, and recommendations for evaluating LLMs.
Listen to/view the podcast.

Using Quality Attribute Scenarios for ML Model Test Case Generation
by Rachel Brower-Sinning, Grace Lewis, Sebastián Echeverría, and Ipek Ozkaya

Testing of machine learning (ML) models is a growing challenge for researchers and practitioners alike. Unfortunately, current practice for testing ML models prioritizes testing for model function and performance, while often neglecting the requirements and constraints of the ML-enabled system that integrates the model. This limited view of testing can lead to failures during integration, deployment, and operations, contributing to the difficulties of moving models from development to production. This paper presents an approach based on quality attribute (QA) scenarios to elicit and define system- and model-relevant test cases for ML models. The QA-based approach described in this paper has been integrated into MLTE, a process and tool to support ML model test and evaluation. Feedback from users of MLTE highlights its effectiveness in testing beyond model performance and identifying failures early in the development process.
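To illustrate the flavor of a quality attribute scenario turned into a concrete test, the sketch below encodes a hypothetical latency scenario ("under normal load, a single prediction completes within 200 ms on the target hardware") as an executable check. It is a minimal illustration only; it does not use the MLTE API, and the model, threshold, and scenario are assumptions rather than details from the paper.

```python
# Illustrative sketch only: a test case derived from a hypothetical quality
# attribute scenario about inference latency. Not an MLTE example.
import time

def predict(features):
    # Stand-in for the deployed model's inference call.
    time.sleep(0.01)
    return sum(features) > 1.0

def test_single_prediction_latency():
    start = time.perf_counter()
    predict([0.2, 0.4, 0.6])
    elapsed_ms = (time.perf_counter() - start) * 1000
    # The 200 ms bound comes from the (hypothetical) scenario, i.e., a system
    # requirement, not from the model's own performance metrics.
    assert elapsed_ms < 200, f"latency {elapsed_ms:.1f} ms exceeds the 200 ms threshold"

test_single_prediction_latency()
print("latency scenario satisfied")
```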
Read the conference paper.

Additional Resources

View the latest SEI research in the SEI Digital Library.
View the latest podcasts in the SEI Podcast Series.
View the latest installments in the SEI Webcast Series.
