Archive: 2024
Weaknesses and Vulnerabilities in Modern AI: Integrity, Confidentiality, and Governance
In the rush to develop AI, it is easy to overlook factors that increase risk. This post explores AI risk through the lens of confidentiality, governance, and integrity.
By Bill Scherlis
In Artificial Intelligence Engineering
Weaknesses and Vulnerabilities in Modern AI: AI Risk, Cyber Risk, and Planning for Test and Evaluation
Modern AI systems pose consequential, poorly understood risks. This blog post explores strategies for framing test and evaluation practices based on a holistic approach to AI risk.
By Bill Scherlis
In Artificial Intelligence Engineering
Building Quality Software: 4 Engineering-Centric Techniques
Why is it easier to verify the function of a software program than its qualities? This post outlines 4 engineering-centric techniques for creating quality software.
By Alejandro Gomez
In Software Architecture
3 Recommendations for Machine Unlearning Evaluation Challenges
Machine unlearning (MU) aims to develop methods to remove data points efficiently and effectively from a model without the need for extensive retraining. This post details our work to address …
By Keltin Grimes, Collin Abidi, Cole Frank, Shannon Gallagher
In Artificial Intelligence Engineering
Acquisition Archetypes Seen in the Wild, DevSecOps Edition: Cross-Program Dependencies
Shared capabilities can help manage costs and complexities but can also result in cross-program dependencies. This post examines this phenomenon in a DevSecOps context.
By William E. Novak
In DevSecOps
Generative AI and Software Engineering Education
Educators have had to adapt to rapid developments in generative AI to provide a realistic perspective to their students. In this post, experts discuss generative AI and software engineering education.
By Ipek Ozkaya, Douglas Schmidt (Vanderbilt University)
In Artificial Intelligence Engineering
The Latest Work from the SEI: Counter AI, Coordinated Vulnerability Disclosure, and Artificial Intelligence Engineering
This post highlights the latest work from the SEI in artificial intelligence engineering, coordinated vulnerability disclosure for machine learning, and counter AI.
By Bill Scherlis
In Software Engineering Research and Development
A Roadmap for Incorporating Positive Deterrence in Insider Risk Management
Positive deterrence reduces insider risk through workforce practices that promote the mutual interests of employees and their organization.
By Andrew P. Moore
In Insider Threat
Measuring AI Accuracy with the AI Robustness (AIR) Tool
Understanding your artificial intelligence (AI) system’s predictions can be challenging. In this post, SEI researchers discuss a new tool to help improve AI classifier performance.
By Michael D. Konrad, Nicholas Testa, Linda Parker Gates, Crisanne Nolan, David James Shepard, Julie B. Cohen, Andrew O. Mellinger, Suzanne Miller, Melissa Ludwick
In Artificial Intelligence Engineering
Evaluating Static Analysis Alerts with LLMs
Large language models (LLMs) show promising initial results in adjudicating static analysis alerts, offering possibilities for better vulnerability detection. This post discusses initial experiments using GPT-4 to evaluate static analysis alerts.
By William Klieber, Lori Flynn
In Cybersecurity Engineering