
New Research Supports SEI AISIRT Mission to Secure Artificial Intelligence

Article

March 26, 2025—The burgeoning general-purpose artificial intelligence (GPAI) ecosystem needs to adapt lessons from the software security domain to keep AI systems secure, according to a newly released paper by representatives from academia, industry, and the Software Engineering Institute (SEI). Lauren McIlvenny, technical director of threat analysis in the SEI’s CERT Division, is a coauthor of the paper. McIlvenny oversees the SEI’s AI Security Incident Response Team (AISIRT), which was created in 2023 to identify, analyze, and respond to AI-related incidents, flaws, and vulnerabilities, particularly in systems critical to defense and national security.

The paper, “In-House Evaluation Is Not Enough: Towards Robust Third-Party Flaw Disclosure for General-Purpose AI,” raises the alarm about GPAI safety and the need for coordinated vulnerability disclosure (CVD). The paper notes that the past year’s proliferation of GPAI systems, which it defines as “foundation model-based software systems, with a wide variety of uses,” has outpaced the “infrastructure, practices, and norms for reporting flaws.” The authors propose standardized GPAI flaw reporting, the adoption of safe-harbor disclosure programs by GPAI system providers, and improved disclosure infrastructure.

Contributing organizations include commercial software vendors and tech companies like Microsoft and Google, leading academic institutions, and technology policy research organizations. McIlvenny, supported by CERT vulnerability researchers, bridged the AI and cybersecurity perspectives to explain how cybersecurity best practices could be extended to AI and AI-enabled systems. “This paper was backed by more than 30 respected technology, policy, and security experts,” McIlvenny said. “They all recognized how critical CVD for GPAI is for consumers, the commercial sector, and national security. Their collaboration sends a signal that the community recognizes the importance of the challenge and is ready to make meaningful progress.”

McIlvenny noted that the paper’s recommendations should extend beyond GPAI to AI and machine learning in general. Widespread security gaps across these broader AI technology domains motivated the SEI to establish the AISIRT in late 2023.

The AISIRT was built on the software cybersecurity capabilities of the SEI’s CERT Coordination Center (CERT/CC). For more than a year, the AISIRT has been taking in community reports on vulnerabilities in AI and machine learning systems, coordinating the response with vendors and other stakeholders, and disclosing the vulnerabilities and mitigations to the public. McIlvenny hopes the AISIRT can serve as a model for others looking to increase AI CVD capacity.

“The SEI has experience standing up security incident response centers, providing multiparty CVD, and doing cutting-edge AI and machine learning research,” she said. “We stand ready to help anyone who needs assistance coordinating AI flaws and vulnerabilities or wants to establish their own AISIRT.”

Learn more about the AISIRT’s activities in a new SEI Blog post by McIlvenny and Vijay Sarvepalli, “The Essential Role of AISIRT in Flaw and Vulnerability Management,” and on the AISIRT home page. Report flaws and vulnerabilities in AI systems to the AISIRT at https://kb.cert.org/vuls/report. Contact the SEI at info@sei.cmu.edu.