
Lessons Learned in Coordinated Disclosure for Artificial Intelligence and Machine Learning Systems

White Paper
In this paper, the authors describe lessons learned from coordinating the disclosure of AI and ML vulnerabilities at the SEI's CERT/CC.
Publisher

Software Engineering Institute

DOI (Digital Object Identifier)
10.1184/R1/26867038.v1

Abstract

In this paper, SEI researchers describe several lessons learned from coordinating artificial intelligence (AI) and machine learning (ML) vulnerabilities at the SEI’s CERT Coordination Center (CERT/CC), along with their observations of public discussions of AI vulnerability coordination cases.

Risk management for AI systems is a substantial and rapidly evolving field. Even when restricted to cybersecurity risk management, AI systems require comprehensive security measures, such as those the National Institute of Standards and Technology (NIST) describes in the NIST Cybersecurity Framework (CSF).

In this paper, the authors focus on one part of cybersecurity risk management for AI systems: the CERT/CC’s lessons learned from applying the Coordinated Vulnerability Disclosure (CVD) process to reported “vulnerabilities” in AI and ML systems.