
The Latest Work from the SEI: Penetration Testing, Artificial Intelligence, and Incident Management

As part of an ongoing effort to keep you informed about our latest work, this blog post summarizes recently published SEI reports, podcasts, conference papers, and webcasts highlighting our work in penetration testing, designing trustworthy AI, fielding AI-enabled systems in the public sector, incident management, machine learning in cybersecurity, and cyber hygiene. For each publication, this post lists the title and author(s), along with a link to where it can be accessed on the SEI website.

Penetration Tests Are The Check Engine Light On Your Security Operations by Allen D. Householder, Dan J. Klinedinst

Penetration testing is a way of testing your security controls against realistic attacks. However, it assumes that you have a known set of controls to test. Just as you wouldn't build a vehicle maintenance plan based on the check engine light alone, it's suboptimal to start improving network security operations with a penetration test.
Download the white paper.

Designing Trustworthy AI by Carol J. Smith

Diverse teams are needed to build trustworthy artificial intelligence systems, and those teams need to coalesce around a shared set of ethics. There are many discussions in the AI field about ethics and trust, but there are few frameworks available for people to use as guidance when creating these systems. The Human-Machine Teaming (HMT) Framework for Designing Ethical AI Experiences described in this paper, when used with a set of technical ethics, will guide AI development teams to create AI systems that are accountable, de-risked, respectful, secure, honest, and usable.
Download the conference paper.

Component Mismatches Are a Critical Bottleneck to Fielding AI-Enabled Systems in the Public Sector by Grace Lewis, Stephany Bellomo, April Galyardt

The use of machine learning or artificial intelligence (ML/AI) holds substantial potential toward improving many functions and needs of the public sector. In practice, however, integrating ML/AI components into public sector applications is severely limited not only by the fragility of these components and their algorithms, but also by mismatches between components of ML-enabled systems. For example, if an ML model is trained on data that is different from data in the operational environment, field performance of the ML component will be dramatically reduced. Separate from software engineering considerations, the expertise needed to field an ML/AI component within a system frequently comes from outside software engineering. As a result, assumptions and even descriptive language used by practitioners from these different disciplines can exacerbate other challenges to integrating ML/AI components into larger systems. We are investigating classes of mismatches in ML/AI systems integration to identify the implicit assumptions made by practitioners in different fields (data scientists, software engineers, operations staff) and find ways to communicate the appropriate information explicitly. We discuss a few categories of mismatch and provide examples from each class. To enable ML/AI components to be fielded in a meaningful way, we will need to understand the mismatches that exist and develop practices to mitigate their impacts.
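The training/operational data mismatch described above can be illustrated with a minimal, hypothetical sketch (not from the paper): a simple threshold classifier is fit on training data, then evaluated both on data from the same distribution and on "operational" data whose feature distribution has drifted. All names and the drift amount here are illustrative assumptions.

```python
import random

random.seed(42)

def make_data(n, shift=0.0):
    """Draw (feature, label) pairs from class-conditional Gaussians.

    `shift` models distribution drift in the operational environment:
    the feature values move, but the labels do not.
    """
    data = []
    for _ in range(n):
        label = random.randint(0, 1)
        x = random.gauss(0.0 if label == 0 else 2.0, 0.5) + shift
        data.append((x, label))
    return data

def fit_threshold(data):
    """'Train' a classifier: threshold halfway between the class means."""
    class0 = [x for x, y in data if y == 0]
    class1 = [x for x, y in data if y == 1]
    return (sum(class0) / len(class0) + sum(class1) / len(class1)) / 2

def accuracy(data, threshold):
    return sum((x > threshold) == (y == 1) for x, y in data) / len(data)

train = make_data(2000)
threshold = fit_threshold(train)

in_dist = accuracy(make_data(2000), threshold)            # matches training data
shifted = accuracy(make_data(2000, shift=2.0), threshold)  # operational drift
```

With no drift, the learned threshold separates the classes well; after the feature distribution shifts, the same model's accuracy collapses toward chance, even though nothing about the model itself changed. Detecting and mitigating exactly this kind of silent mismatch is what the paper's mismatch categories are meant to support.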
Download the conference paper.

Benchmarking Organizational Incident Management Practices by Robin Ruefle, Mark Zajicek

Successful management of incidents that threaten an organization's computer security is a complex endeavor. Organizations frequently focus primarily on the response aspects of security incidents, which leaves them simply reacting to threatening events rather than managing incidents more broadly. In this SEI podcast, Robin Ruefle and Mark Zajicek discuss recent work that provides a baseline, or benchmark, of incident management practices for an organization. They also examine the importance of preparing for incident management, along with coordinating and communicating analysis and response activities.
Listen to the podcast.

Machine Learning in Cybersecurity: A Guide
by Jonathan Spring, Joshua Fallon, April Galyardt, Angela Horneman, Leigh B. Metcalf, Ed Stoner

This report lists relevant questions that decision makers should ask of machine-learning practitioners before employing machine learning (ML) or artificial intelligence (AI) solutions in the area of cybersecurity. Like any tool, ML tools should be a good fit for the purpose they are intended to achieve. The questions in this report will improve decision makers' ability to select an appropriate ML tool and make it a good fit to address their cybersecurity topic of interest. In addition, the report outlines the type of information that good answers to the questions should contain.
Download the technical report.

Cyber Hygiene: Why the Fundamentals Matter
by Matthew J. Butkovic, Randall F. Trzeciak, Matthew Trevors

In this webcast, presented as part of National Cybersecurity Awareness Month, our experts provided an overview of the concept of cyber hygiene, which is analogous to hygiene in the medical profession. Like washing hands to prevent infection, cyber hygiene comprises simple sets of actions that users can take to help reduce cybersecurity risks. Matt Butkovic, Randy Trzeciak, and Matt Trevors discussed some of those practices, such as implementing password security protocols, and how an organization can determine which other practices to implement. Finally, they discussed the special case of phishing, a form of attack that can bypass technical safeguards by exploiting human weaknesses, and how changes in behavior, understanding, and technology might address this issue.

Good cyber hygiene is important because an organization's threat landscape changes daily, and new variants of attacks on computer systems appear by the hour. The sheer number of security vulnerabilities in hardware, software, and underlying protocols, combined with a dynamic threat environment, makes it nearly impossible for most organizations to keep pace.
View the webinar.

Additional Resources

View the latest SEI research in the SEI Digital Library.
View the latest installments in the SEI Podcast Series.
View the latest installments in the SEI Webinar Series.

Written By

Douglas Schmidt (Vanderbilt University)
