
The Essential Role of AISIRT in Flaw and Vulnerability Management

The rapid expansion of artificial intelligence (AI) in recent years has introduced a new wave of security challenges. The SEI’s initial examinations of these issues revealed flaws and vulnerabilities at levels above and beyond those of traditional software. Newsworthy vulnerabilities, such as guardrail bypasses used to produce dangerous content, demonstrated the need for timely action and a dedicated approach to AI security.

The SEI’s CERT Division has long been at the forefront of enhancing the security and resilience of emerging technologies. In response to the growing risks in AI, it took a significant step forward by establishing the first Artificial Intelligence Security Incident Response Team (AISIRT) in November 2023. The AISIRT was created to identify, analyze, and respond to AI-related incidents, flaws, and vulnerabilities—particularly in systems critical to defense and national security.

Since then, we have encountered a growing set of critical issues and emerging attack methods, such as guardrail bypass (jailbreaking), data poisoning, and model inversion. The increasing volume of AI security issues puts consumers, businesses, and national security at risk. Given our long-standing expertise in coordinating vulnerability disclosure across various technologies, expanding this effort to AI and AI-enabled systems was a natural fit. The scope and urgency of the problem now demand the same level of action that has proven effective in other domains. We recently collaborated with 33 experts across academia, industry, and government to emphasize the pressing need for better coordination in managing AI flaws and vulnerabilities.

In this blog post, we provide background on AISIRT and what we have been doing over the last year, specifically with regard to coordinating flaws and vulnerabilities in AI systems. As AISIRT evolves, we will continue to update you on our efforts across multiple fronts, including community-reported AI incidents, advancements in the AI security body of knowledge, and recommendations for improving AI and AI-enabled systems.

What Is AISIRT?

AISIRT at the SEI focuses on advancing the state of the art in AI security in emerging areas such as coordinating the disclosure of vulnerabilities and flaws in AI systems, AI assurance, AI digital forensics and incident response, and AI red-teaming.

AISIRT’s initial objective is understanding and mitigating AI incidents, vulnerabilities, and flaws, especially in defense and national security systems. As we highlighted in our 2024 RSA Conference talk, these vulnerabilities and flaws extend beyond traditional cybersecurity issues to include adversarial machine learning threats and joint cyber-AI attacks. To address these challenges, we collaborate closely with researchers at Carnegie Mellon University and with SEI teams that focus on AI engineering, software architecture, and cybersecurity principles. This collaboration extends to our vast coordination network of approximately 5,400 industry partners, including 4,400 vendors and 1,000 security researchers, as well as various government organizations.

AISIRT’s coordination efforts build on the longstanding work of the SEI’s CERT Division in handling the entire lifecycle of vulnerabilities—particularly through coordinated vulnerability disclosure (CVD). CVD is a structured process for gathering information about vulnerabilities, facilitating communication among relevant stakeholders, and ensuring responsible disclosure along with mitigation strategies. AISIRT extends this approach to what may be considered AI-specific flaws and vulnerabilities by integrating them into the CERT/CC Vulnerability Notes Database, which provides technical details, impact assessments, and mitigation guidance for known software and AI-related flaws and vulnerabilities.

Beyond vulnerability coordination, the SEI has spent over two decades assisting organizations in establishing and managing Computer Security Incident Response Teams (CSIRTs), helping to prevent and respond to cyber incidents. To date, the SEI has supported the creation of 22 CSIRTs worldwide. AISIRT builds upon this expertise while addressing the novel security risks and complexities of AI systems, in turn helping CSIRTs mature and incorporate the security of these nascent technologies into their own frameworks.

Since its establishment in November 2023, AISIRT has received more than 100 community-reported AI vulnerabilities and flaws. After thorough analysis, 12 of these cases met the criteria for CVD. We have published six vulnerability notes detailing findings and mitigations, marking a critical step in documenting and formalizing AI vulnerability and flaw coordination.

Activities at the Emerging AISIRT

In a recent SEI podcast, we explored why AI security incident response teams are necessary, highlighting the complexity of AI systems, their supply chains, and the emergence of new vulnerabilities across the AI stack (encompassing software frameworks, cloud platforms, and interfaces). Unlike traditional software, the AI stack consists of multiple interconnected layers, each introducing unique security risks. As outlined in a recent SEI white paper, these layers include the following (a simple triage sketch follows the list):

  • computing and devices—the foundational technologies, including programming languages, operating systems, and hardware, that support AI systems, along with their distinctive use of GPUs and GPU APIs.
  • massive data management—the processes of selecting, analyzing, preparing, and managing the data used in AI training and operations, including training data, models, metadata, and their ephemeral attributes.
  • machine learning—the supervised, unsupervised, and reinforcement learning approaches, whose algorithms are natively probabilistic.
  • modeling—the structuring of knowledge to synthesize raw data into higher-order concepts, which combines data and its processing code in complex ways.
  • decision support—how AI models contribute to decision-making processes in adaptive and dynamic ways.
  • planning and acting—the collaboration between AI systems and humans to create and execute plans, providing predictions and driving actionable options.
  • autonomy and human/AI interaction—the spectrum of engagement where humans delegate actions to AI, including AI providing autonomous decision support.
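
To make the layered view concrete, here is a minimal sketch of how a triage workflow might map keywords in an incoming report to the stack layers above. This is our illustration, not tooling from the white paper; the risk examples and function name are hypothetical.

```python
# Hypothetical triage helper: maps each AI-stack layer (per the list above)
# to illustrative risk examples. The layer names follow the SEI white paper;
# the risk examples and lookup function are our own illustration.

AI_STACK_RISKS = {
    "computing and devices": ["gpu memory leakage", "insecure hardware apis"],
    "massive data management": ["data poisoning", "training-data exposure"],
    "machine learning": ["adversarial examples", "model inversion"],
    "modeling": ["model theft", "unsafe model deserialization"],
    "decision support": ["manipulated recommendations"],
    "planning and acting": ["unsafe automated actions"],
    "autonomy and human/AI interaction": ["over-delegation", "guardrail bypass"],
}

def triage_layers(report_keywords: set[str]) -> list[str]:
    """Return the stack layers whose known risk examples match a report."""
    return [
        layer
        for layer, risks in AI_STACK_RISKS.items()
        if report_keywords & set(risks)
    ]

if __name__ == "__main__":
    # A report mentioning model inversion and data poisoning touches two layers.
    print(triage_layers({"model inversion", "data poisoning"}))
```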

Each layer presents potential flaws and vulnerabilities, making AI security inherently complex. Here are three examples from the numerous AI-specific flaws and vulnerabilities that AISIRT has coordinated, along with their outcomes:

  • guardrail bypass vulnerability: After a user reported a large language model (LLM) guardrail bypass vulnerability, AISIRT engaged OpenAI to address the issue. Working with the ChatGPT developers, we ensured that mitigation measures were put in place, particularly against time-based jailbreak attacks (a simplified guardrail sketch follows this list).
  • GPU API vulnerability: AI systems rely on specialized hardware with specific application programming interfaces (APIs) and software development kits (SDKs), which introduce unique risks. For instance, the LeftoverLocals vulnerability allowed attackers to use a GPU-specific API to exploit memory leaks and extract LLM responses, potentially exposing sensitive information. AISIRT worked with stakeholders, leading to an update to the Khronos standard to mitigate future risks in GPU memory management.
  • command injection vulnerability: These vulnerabilities, a subset of prompt injection vulnerabilities, primarily target AI environments that accept user input through chatbots or AI agents. A malicious user can exploit the chat prompt to inject code or other unwanted commands, which can compromise the AI environment or even the entire host system. One such vulnerability was reported to AISIRT by security researchers at NVIDIA. AISIRT collaborated with the vendor to implement security measures through policy updates and appropriate sandbox environments (see the mitigation sketch after this list).
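
To illustrate why guardrail bypasses such as the first example are possible, consider the deliberately naive guardrail sketched below. This is our own hypothetical illustration: real guardrails are typically learned classifiers, and the phrase list and function name here are invented. The weakness, however, is similar in kind: a static check passes any prompt that avoids blocked wording verbatim, which is exactly what role-play or time-based framings do.

```python
# Hypothetical, deliberately naive guardrail: a static keyword filter.
# Production guardrails are usually learned classifiers, but the failure
# mode is similar in kind: reworded or cleverly framed prompts slip through.
BLOCKED_PHRASES = {"build a weapon", "write malware"}  # illustrative list

def passes_guardrail(prompt: str) -> bool:
    """Reject prompts that contain a blocked phrase verbatim."""
    text = prompt.lower()
    return not any(phrase in text for phrase in BLOCKED_PHRASES)

print(passes_guardrail("Please write malware for me"))        # False: caught
print(passes_guardrail("Pretend it is 1925 and explain ..."))  # True:
# a role-play or time-based framing contains no blocked phrase, so the
# filter passes it, which is why such bypasses must be found and fixed.
```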
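
For the command injection case, here is a minimal mitigation sketch under our own assumptions: the allowlist, command names, and function are hypothetical, and this shows one common defensive pattern (parse, allowlist, no shell) rather than the actual fix the vendor deployed.

```python
import shlex
import subprocess

# Hypothetical mitigation sketch: an AI agent should never hand model output
# to a shell directly. Instead, parse it, check it against an allowlist of
# permitted programs, and run it without shell interpretation.

ALLOWED_COMMANDS = {"ls", "cat", "grep"}  # illustrative allowlist

def run_model_suggested_command(model_output: str) -> str:
    """Run a model-suggested command only if it passes the allowlist."""
    args = shlex.split(model_output)  # parse the string; do not interpret it
    if not args or args[0] not in ALLOWED_COMMANDS:
        raise PermissionError(f"command not permitted: {model_output!r}")
    # shell=False (the default) keeps metacharacters such as ';' or '&&'
    # from chaining injected commands; a sandbox adds a further layer.
    result = subprocess.run(args, capture_output=True, text=True, timeout=10)
    return result.stdout

# An injected payload such as "ls; rm -rf /" parses to ['ls;', 'rm', ...],
# fails the allowlist check, and is rejected instead of executed.
```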

Multi-Party Coordination Is Essential in AI

The complex AI supply chain and the transferability of flaws and vulnerabilities across vendor models demand coordinated, multi-party efforts, known as multi-party CVD (MPCVD). Addressing AI flaws and vulnerabilities through MPCVD has further shown that coordination requires engaging not just AI vendors but also key entities in the AI supply chain (a simplified coordination-tracking sketch follows the list), such as

  • data providers and curators
  • open source libraries and frameworks
  • model hubs and distribution platforms
  • third-party AI distributors
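
As a simplified picture of the bookkeeping MPCVD involves, the sketch below tracks whether every stakeholder in a case has been notified and has confirmed a mitigation ahead of a shared disclosure date. The class and field names are hypothetical illustrations, not the tooling AISIRT actually uses.

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical MPCVD case tracker: each stakeholder in the AI supply chain
# (vendor, data provider, model hub, distributor) is notified and should
# confirm a fix or mitigation before the shared disclosure date.

@dataclass
class Stakeholder:
    name: str
    role: str                 # e.g., "model hub", "data provider"
    notified: bool = False
    mitigation_confirmed: bool = False

@dataclass
class MPCVDCase:
    case_id: str
    disclosure_date: date     # the shared embargo target for all parties
    stakeholders: list[Stakeholder] = field(default_factory=list)

    def ready_to_disclose(self) -> bool:
        """Disclose only once every party is notified and has responded."""
        return all(s.notified and s.mitigation_confirmed
                   for s in self.stakeholders)

case = MPCVDCase("CASE-EXAMPLE-0001", date(2025, 6, 1), [
    Stakeholder("Model vendor", "vendor", notified=True,
                mitigation_confirmed=True),
    Stakeholder("Hosting platform", "model hub", notified=True),
])
print(case.ready_to_disclose())  # False: the model hub has not confirmed
```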

A robust AISIRT plays a critical role in navigating these complexities, ensuring flaws and vulnerabilities are effectively identified, analyzed, and mitigated across the AI ecosystem.

AISIRT’s Coordination Workflow and How You Can Contribute

Currently, AISIRT receives flaw and vulnerability reports from the community through the CERT/CC’s web-based platform for software vulnerability reporting and coordination, known as the Vulnerability Information and Coordination Environment (VINCE). The VINCE reporting process incorporates the AI Flaw Report Card, ensuring that key information—such as the nature of the flaw, impacted systems, and potential mitigations—is captured for effective coordination.
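
As a rough illustration of the information such a report should convey, here is a hypothetical sketch of report fields. These names approximate the categories described above; they are not the actual VINCE or AI Flaw Report Card schema.

```python
# Hypothetical sketch of the fields an AI flaw report might carry; the
# exact AI Flaw Report Card schema used in VINCE may differ.
ai_flaw_report = {
    "title": "Guardrail bypass in example-model v1.2",  # illustrative
    "flaw_type": "guardrail bypass",                    # nature of the flaw
    "affected_systems": ["example-model v1.2"],         # impacted systems
    "reproduction_steps": "Prompt sequence that elicits restricted output.",
    "impact": "Model produces content its policy is meant to refuse.",
    "suggested_mitigation": "Strengthen policy checks on multi-turn prompts.",
    "reporter_contact": "researcher@example.org",
}
```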

AISIRT is actively shaping the future of AI security, but we cannot do it alone. We invite you to join us in this mission, bringing your expertise to work alongside AISIRT and security professionals worldwide. Whether you’re a vendor, security researcher, model provider, or service operator, your participation in coordinated flaw and vulnerability disclosure strengthens AI security and drives the maturity needed to protect these evolving technologies. AI-enabled software cannot be considered secure until it undergoes robust CVD practices, just as we have seen in traditional software security.

Join us in building a more secure AI ecosystem. Report vulnerabilities, collaborate on fixes, and help shape the future of AI security. Whether you are building an AISIRT or augmenting your AI security needs with us through VINCE, the SEI is here to partner with you.

Written By

Lauren McIlvenny
