
AI-Augmented Software Engineering

Artificial intelligence (AI) can accelerate the development, testing, and deployment of software. This acceleration is crucial for the Department of Defense (DoD), especially in contexts where delays can have national security implications.

AI holds the promise of assisting humans at almost every stage of the software development process. The SEI is working with commercial and government research partners worldwide to reenvision the software lifecycle and to define what AI-augmented software development will look like at each stage of development and during continuous evolution, where AI will be particularly useful.

Research and development in this area focus on identifying and developing approaches that automate software engineering tasks and accelerate the creation of reliable engineering automation. In addition to reimagining the development process, the SEI is focusing on a range of key tasks for increasing the use of AI in software engineering, such as

  • developing reliable automated tools that interact with developers to assist with code evolution and refactoring
  • enabling the use of AI-generated metadata to efficiently verify or validate code and generate traceable evidence
  • scaling automated code generation and repair by incorporating AI augmentation into model-based techniques and formal methods
  • acquiring the data needed to model and evaluate new AI-augmented workflows

SEI Is Transforming Software Engineering with AI

The SEI is applying AI to big software engineering challenges for the DoD, such as software modernization. Many potential benefits of AI are being examined in the context of modernization: automating tasks, accelerating processes, enhancing code quality, evaluating legacy code, generating documentation, translating code, and even generating new test cases.

One project, Shift Left with Generative AI: Automating Library Replacement, is creating a workflow and a prompting algorithm for a human-in-the-loop strategy that decomposes common library-replacement changes into problems that large language models (LLMs) can solve more easily. This approach enables the DoD and other organizations to address an open-source library upgrade once and efficiently roll out automated changes to many code bases. Another project, Untangling the Knot, uses AI in an architecture context to recommend refactorings that isolate functionality from its dependencies on the rest of the system.
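The core pattern here, decomposing a migration into small, reviewable LLM tasks, can be illustrated with a minimal sketch. Everything in it (the ReplacementTask structure, the llm_complete and apply_patch placeholders, and the prompt wording) is a hypothetical illustration of a human-in-the-loop workflow, not the project's actual prompting algorithm.

```python
# Hypothetical sketch of a human-in-the-loop library-replacement workflow.
# Names and prompt structure are illustrative assumptions, not the SEI
# project's actual algorithm.

from dataclasses import dataclass

@dataclass
class ReplacementTask:
    file_path: str       # file containing the call site to migrate
    old_api_call: str    # snippet using the library being replaced
    new_api_docs: str    # documentation excerpt for the replacement library

def llm_complete(prompt: str) -> str:
    """Placeholder: call any LLM completion endpoint here."""
    raise NotImplementedError("wire up a model provider")

def propose_replacement(task: ReplacementTask) -> str:
    # Decompose the migration: one narrowly scoped prompt per call site,
    # rather than asking a model to rewrite an entire code base at once.
    prompt = (
        "Rewrite the following call to use the new library.\n"
        f"Old call:\n{task.old_api_call}\n"
        f"New library documentation:\n{task.new_api_docs}\n"
        "Return only the rewritten code."
    )
    return llm_complete(prompt)

def apply_patch(path: str, patch: str) -> None:
    """Placeholder: write the approved rewrite back to the file."""
    ...

def migrate(tasks: list[ReplacementTask]) -> None:
    for task in tasks:
        patch = propose_replacement(task)
        # Human-in-the-loop gate: a developer reviews each proposed
        # change before it is applied, keeping the automation auditable.
        print(f"--- {task.file_path} ---\n{patch}")
        if input("Apply this change? [y/N] ").strip().lower() == "y":
            apply_patch(task.file_path, patch)
```

Once a replacement pattern is validated this way, the same prompts can be replayed across many code bases, which is the solve-once, roll-out-widely benefit described above.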

AI-powered tools that support developers are evolving quickly, as noted in the SEI's report to the congressional defense committees on technical debt in software-intensive systems. While AI tools can easily generate large amounts of code, rushing to deploy them today may be creating a growing wave of future technical debt for industry. The software engineering community does not yet know the implications of these emerging tools, but research from the SEI and others can empower their targeted development to help avoid unintentional technical debt and to better track intentional technical debt.

A joint project by the SEI and the Army AI Integration Center (AI2C) is tackling another challenge related to the use of AI-augmented software engineering: creating safer and more reliable machine learning (ML) systems. ML systems are notoriously difficult to test for a variety of reasons, including challenges around properly defining requirements and evaluation criteria. Without proper testing, systems that contain ML components can fail in production, sometimes with serious consequences. Machine Learning Test and Evaluation (MLTE) is both a process that facilitates gathering and evaluating requirements for ML systems and a tool that supports the test and evaluation (T&E) process.
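MLTE's central idea, turning negotiated requirements into explicit, machine-checkable conditions with recorded rationale, can be sketched in a few lines of plain Python. This is an illustrative sketch of the pattern only; it does not use the MLTE tool's actual API, and the thresholds shown are invented.

```python
# Illustrative sketch of MLTE-style test and evaluation: requirements are
# captured as explicit, machine-checkable conditions rather than informal
# expectations. Not the MLTE tool's actual API.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Condition:
    name: str
    check: Callable[[dict], bool]  # evaluates measured evidence
    rationale: str                 # why stakeholders agreed on this bound

# Requirements negotiated with stakeholders before testing begins.
conditions = [
    Condition("accuracy", lambda m: m["accuracy"] >= 0.90,
              "Minimum accuracy agreed with the program office."),
    Condition("p99_latency_ms", lambda m: m["p99_latency_ms"] <= 50,
              "Inference must fit the system's real-time budget."),
]

def evaluate(measurements: dict) -> bool:
    """Check every condition and report a pass/fail verdict with rationale."""
    all_passed = True
    for c in conditions:
        passed = c.check(measurements)
        all_passed = all_passed and passed
        print(f"{c.name}: {'PASS' if passed else 'FAIL'} ({c.rationale})")
    return all_passed

# Example: evidence gathered from a test run of the ML component.
evaluate({"accuracy": 0.93, "p99_latency_ms": 42})
```

Recording the rationale alongside each threshold keeps the evaluation traceable back to the stakeholder negotiation that produced it.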

The Latest from the SEI Blog

Generative AI and Software Engineering Education

Blog Post

Educators have had to adapt to rapid developments in generative AI to provide a realistic perspective to their students. In this post, experts discuss generative AI and software engineering education.


Application of Large Language Models (LLMs) in Software Engineering: Overblown Hype or Disruptive Change?

Blog Post

This blog post explores large language models (LLMs) in software development, implications of incorporating LLMs into software-reliant systems, and areas where more research is needed to advance their use in software engineering.


Latest from the Digital Library

DeepSeek V3 and R1: An Overview of Technology Innovations and Implications for United States National Security

White Paper

In this paper, SEI researchers present an initial analysis of three questions about the impacts of the DeepSeek V3 and R1 model releases.


Improving Machine Learning Test and Evaluation with MLTE

Podcast

Machine learning (ML) models commonly experience issues when integrated into production systems. MLTE provides a process and infrastructure for ML test and evaluation.


Explore Our AI-Augmented Software Engineering Projects