Robust and Secure AI
• White Paper
Software Engineering Institute
Robust and secure AI systems reliably operate at expected levels of performance, even under uncertainty and in the presence of danger or threat. Such systems have built-in structures, mechanisms, or mitigations that prevent, avoid, or provide resilience against the dangers defined by a particular threat model. We identify three areas of focus for advancing robust and secure AI for defense and national security:
- Improving the robustness of AI components and systems
- Designing for security challenges in modern AI systems
- Developing processes and tools for testing, evaluating, and analyzing AI systems
For each area, we identify ongoing work as well as challenges and opportunities that must be addressed to develop and deploy AI systems with confidence.