
The Latest Work from the SEI: Insider Risk, Bias in LLMs, Secure Coding, and Designing Secure Systems


As part of an ongoing effort to keep you informed about our latest work, this blog post summarizes some recent publications from the SEI in the areas of insider risk, bias in large language models (LLMs), secure coding and static analysis, and designing secure systems.

These publications highlight the latest work from SEI technologists in these areas. This post provides a summary for each publication and includes links for access on the SEI website.

Dangers of AI for Insider Risk Evaluation (DARE)
by Austin Whisnant

Artificial intelligence (AI) holds the promise of reducing insider risk incidents, but it comes with a unique set of challenges. This white paper outlines the potential pitfalls of leveraging AI for insider risk analysis and suggests methods for mitigating those challenges. Section 1 explains AI and its many implementations and applications, including those specific to the domain of insider risk. Section 2 outlines the challenges and pitfalls of AI and how those apply specifically to insider risk analysis. Section 3 discusses at what point it is appropriate to use AI in the insider risk domain and what to consider when implementing these methods operationally.
Read the SEI white paper.

Using Role-Playing Scenarios to Identify Bias in LLMs
by Katherine-Marie Robinson and Violet Turri

Harmful biases in large language models (LLMs) make these models less trustworthy and secure. Auditing for biases can help identify potential solutions and develop better guardrails to make this form of AI safer. In this podcast, Katie Robinson and Violet Turri, researchers in the SEI’s AI Division, discuss their recent work using role-playing game scenarios to identify biases in LLMs.
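The general technique can be illustrated with a paired-prompt probe: pose the same role-playing scenario while varying only a persona attribute, then compare the responses for systematic differences. The minimal Python sketch below illustrates that idea only; it is not the researchers' actual protocol, and the scenario text and query_model function are hypothetical placeholders for the scenarios and LLM client under audit.

    # Sketch of a paired-prompt bias probe. The scenario wording and
    # query_model() are illustrative placeholders, not the SEI tooling.
    SCENARIO = (
        "You are the game master. A {persona} adventurer volunteers to lead "
        "the party through a dangerous quest. Describe how the party reacts."
    )
    PERSONAS = ["young woman", "young man", "older woman", "older man"]

    def query_model(prompt: str) -> str:
        """Placeholder for a call to the LLM under audit."""
        raise NotImplementedError

    # Responses to prompts that differ only in the persona attribute can be
    # compared for systematic differences in tone, competence cues, or roles.
    responses = {p: query_model(SCENARIO.format(persona=p)) for p in PERSONAS}
    for persona, text in responses.items():
        print(persona, "->", text[:80])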
Listen to/watch the SEI podcast.
Read the SEI blog post Auditing Bias in Large Language Models.

Static Analysis-Targeted Automated Repair to Secure Code and Reduce Effort
by Lori Flynn and David Svoboda

Static analysis tools scan code and produce many defect alerts, each of which requires expert effort to validate. We developed an extensible tool that automatically repairs the code associated with three specific types of alerts. Using common development tools, users can review and accept any of the repairs. We demonstrate and describe how our tool secures code and saves effort.

Static analysis (SA) is a standard testing method used to analyze source code for defects. Most SA tools use heuristic techniques and tend to produce many alerts, many of which are false positives. The cost of experts manually assessing those alerts is a significant barrier to adoption of this key technology for reducing security defects. As a result, most organizations limit the scope of the types of code flaws they look for.

This presentation describes our FY23-24 project on using SA alerts to target automated program repair (APR) technology at defects. We discuss our design choices, development methods, and experimental test results. We show how our repair tool can be used during test and evaluation and during development, whether in continuous integration (CI) automation or in more manual processes. We then invite discussion about ways our current repair tool could be extended to help developers and evaluators.

By design, our automated code repairs do not break the code, regardless of whether an alert is a true or false positive. Repairs that eliminate false positive alerts are useful in two ways: (1) expert effort is reserved for adjudicating the remaining alerts, and (2) the code can become easier for humans to understand during development and security analysis. We focus on C/C++ because we did not find open source APR tool documentation that explicitly focuses on violations of CERT C secure coding rules. We also benefit from Clang's new JSON API: the Clang C/C++ compiler is open source, cost-free, and widely used, and its ability to export abstract syntax trees (ASTs) as JSON files facilitates mapping SA alerts to AST nodes and thus focusing code repair effort.
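As a rough illustration of how an alert's location might be mapped to AST nodes via Clang's JSON export (a real Clang capability, invoked with -Xclang -ast-dump=json), here is a minimal Python sketch. The traversal and the alert handling are simplifying assumptions for illustration, not the project's actual implementation.

    import json
    import subprocess

    # Dump the AST of a C file as JSON using Clang's built-in exporter.
    def dump_ast(source_file: str) -> dict:
        result = subprocess.run(
            ["clang", "-Xclang", "-ast-dump=json", "-fsyntax-only", source_file],
            capture_output=True, text=True, check=True,
        )
        return json.loads(result.stdout)

    # Depth-first walk collecting the kinds of nodes that begin on the alert's
    # line. Clang's JSON dump omits a location field when it repeats the
    # previous one, so the nearest enclosing line is used as a fallback
    # (an approximation of Clang's delta-encoded locations).
    def node_kinds_at_line(node: dict, target: int, last_line: int = 0, hits=None):
        if hits is None:
            hits = []
        line = node.get("range", {}).get("begin", {}).get("line", last_line)
        if line == target and "kind" in node:
            hits.append(node["kind"])
        for child in node.get("inner", []):
            node_kinds_at_line(child, target, line, hits)
        return hits

    # Example: find candidate repair sites for a hypothetical alert on line 42.
    ast = dump_ast("example.c")
    print(node_kinds_at_line(ast, target=42))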
Read the conference paper.
Listen to/watch the SEI podcast Automated Repair of Static Analysis Alerts.

Assurance Evidence of Continuously Evolving Real-Time Systems (ASERT) Workshop 2024
by Dionisio de Niz, Bjorn Andersson, Mark H. Klein, Hyoseung Kim (University of California, Riverside), John Lehoczky (Carnegie Mellon University), George Romanski (Federal Aviation Administration), Jonathan Preston (Lockheed Martin Corporation), Daniel Shapiro (Institute for Defense Analyses), Floyd Fazi (Lockheed Martin Corporation), and Ronald Koontz (Boeing Company)

The second Assurance Evidence for Continuously Evolving Real-Time Systems (ASERT) workshop was held July 30 to 31, 2024, in Arlington, VA. It brought together the members of the ASERT workgroup and included keynote speakers from the FAA, DOT&E, and DTE&A.

In this second workshop, we reported on experiment zero, in which we analyzed the 2020 incident involving flight CI202 in Taiwan. We also discussed with our keynote speakers the challenges faced in developmental test and evaluation and in the operational phases that are the focus of this workgroup.

In this document, we summarize the discussions of and feedback on the experiment zero presentation, along with ideas for the next experiment and for the development of the ASERT roadmap.
Read the special report.

Independent Verification and Validation for Agile Projects
by Justin Smith

Traditionally, independent verification and validation (IV&V) is performed by an independent team at program milestones and at the conclusion of development, when software is formally delivered. This traditional approach allows an IV&V team to provide input at the various formal milestone gates. As more programs move to an Agile approach, however, milestones aren't as clearly defined. Requirements, design, implementation, and testing can all happen iteratively, sometimes spread over multiple years of development. In this Agile paradigm, IV&V teams may struggle to figure out how to add value to the program at earlier points in the lifecycle by getting in phase with Agile development cycles. This webcast highlights a novel approach to providing IV&V for projects using an Agile or iterative software development approach, including the following:

  • What adopting an Agile mindset for IV&V could look like
  • How focusing on capabilities and using a risk-based perspective could help drive planning for your team
  • Techniques to help the IV&V team get more in phase with the developer while remaining independent

View the webcast.
Read the SEI blog post Incorporating Agile Principles into Independent Verification and Validation.

Self-Assessment in Training and Exercise
by Dustin D. Updyke, Thomas G. Podnar, John Yarger, and Sean Huff

In this report, we introduce an approach to performance evaluation for cyber operators that focuses on self-assessment. We find that this approach provides both greater information fidelity to satisfy performance assessment objectives and the enhanced realism that cyber operators desire in training and exercise (T&E) activities. We implemented an incident response tool that enables team members to record their actions and thought processes and facilitates assessment of the team's abilities. To validate our approach, we conducted a survey of participants who used the tool to gather qualitative feedback on its effectiveness. The results of this survey highlight the perceived improvements in realism, the usefulness of self-assessment tools, and the overall impact on team dynamics and individual growth. This combined approach provides insights into team performance, enables best practices to be identified, supports the refinement of mitigation strategies, and fosters actionable feedback for learning. By promoting self-assessment within a realistic T&E environment, this method improves overall team performance in cybersecurity operations through feedback on individual skills and leadership competencies.
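As a rough illustration of the kind of record such a tool might capture, here is a minimal Python sketch; the field names and confidence scale are illustrative assumptions, not the schema of the tool described in the report.

    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    # One self-assessment entry recorded by an operator during an exercise.
    @dataclass
    class ActionRecord:
        operator: str    # team member recording the entry
        action: str      # what was done
        rationale: str   # the operator's thought process at the time
        confidence: int  # self-rated confidence, 1 (low) to 5 (high)
        timestamp: datetime = field(
            default_factory=lambda: datetime.now(timezone.utc)
        )

    # Example entry a responder might log mid-exercise for later team review.
    entry = ActionRecord(
        operator="analyst-2",
        action="blocked outbound traffic to a suspicious domain",
        rationale="beaconing pattern matched an earlier alert; contained it first",
        confidence=4,
    )
    print(entry)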
Read the technical report.

Three Key Elements for Designing Secure Systems
by Timothy A. Chick

To make secure software by design a reality, engineers must intentionally build in security throughout the software development lifecycle. In this podcast, Timothy A. Chick, technical manager of the Applied Systems Group in the SEI’s CERT Division, discusses designing, building, and operating secure systems.
Listen to/watch the SEI podcast.

Cybersecurity Metrics: Protecting Data and Understanding Threats
by Bill Nichols

Scoping down objectives and determining what kinds of data to gather are persistent challenges in cybersecurity. In this SEI podcast, Bill Nichols, who leads the SEI’s Software Engineering Measurements and Analysis Group, discusses the importance of cybersecurity measurement, what kinds of measurements are used in cybersecurity, and what those metrics can tell us about cyber systems.
Listen to/watch the SEI podcast.

Cyber Challenges in Health Care: Managing for Operational Resilience
by Matthew J. Butkovic

In this webcast, Matthew Butkovic and Darrell Keeling explore approaches to maximize return on cybersecurity investment in the health-care context.

Health-care organizations are seemingly besieged by a complex set of cyber threats, and the consequences of disruptive cyber events in health care are in many ways especially troubling. Yet health-care organizations often face these challenges with modest resources. Butkovic and Keeling discuss how to apply measures of operational resilience in this context, including the following:

  • How to yield maximum return on cybersecurity investment in health care
  • How to shift thinking from cybersecurity to operational resilience
  • How to employ free or low-cost cybersecurity resources in the health-care context

View the webcast.

Additional Resources

View the latest SEI research in the SEI Digital Library.
View the latest podcasts in the SEI Podcast Series.
View the latest installments in the SEI Webcast Series.

Get updates on our latest work.

Each week, our researchers write about the latest in software engineering, cybersecurity, and artificial intelligence. Sign up to get the latest post sent to your inbox the day it's published.
