The Latest Work from the SEI: AI, Deepfakes, Automated Alert Handling, and Cyber Intelligence
As part of an ongoing effort to keep you informed about our latest work, this blog post summarizes recently published SEI reports, podcasts, and presentations highlighting our work in artificial intelligence, STEM careers, deepfakes, automated alert handling, systems and software engineering, and cyber intelligence. The SEI also made available an online version of the 2018 Year in Review, which highlights the institute's recent work.
This post includes a listing of each publication, author(s), and links where they can be accessed on the SEI website.
2018 Year in Review By the Software Engineering Institute
The Software Engineering Institute (SEI) is a federally funded research and development center (FFRDC) sponsored by the U.S. Department of Defense and operated by Carnegie Mellon University. The SEI's mission is to advance the technologies and practices needed to acquire, develop, operate, and sustain software systems that are innovative, affordable, trustworthy, and enduring. The SEI Year in Review highlights the work of the institute undertaken during the previous fiscal year.
Download the 2018 Year in Review.
Using AI to Build More Secure Software By Mark Sherman
The continuing growth of MITRE's Common Vulnerabilities and Exposures (CVE) list--now at nearly 150,000 entries--is a testament to how difficult it remains to build software that is resistant and resilient to attack. SecDevOps emerged to improve the security of software, but little attention has yet been given to how artificial intelligence might advance that goal. This presentation discusses why the development of secure software is a concern beyond the IT industry, lists the elements of a secure software development process, and reflects on how artificial intelligence could improve that process. It also considers additional security issues facing the development of artificial intelligence software itself.
Download the presentation.
Deepfakes--What Can Really Be Done Today? By Rotem Guttman and Zach Kurtz
The term "deepfake" refers to the use of machine learning to generate written content or to modify photos and videos. The resulting images are often so realistic that viewers cannot tell they are fake. In this SEI Cyber Talk episode, Rotem Guttman and Zach Kurtz explain the kinds of machine learning people use to create deepfakes, how they work, and what kind of content it is possible to produce with current technology. They also cover the techniques people use to create fraudulent content. Such techniques include filming an actor and then replacing the actor's face with someone else's, as well as more advanced methods that can reproduce a person's body movements, voice, speech, and facial expressions to make that person appear to say or do something that he or she did not actually say or do. Finally, they discuss the current limitations of these technologies and techniques, and they forecast advances that might occur in the coming years.
Watch the video.
Automating Alert Handling Reduces Manual Effort
By Lori Flynn
Static analysis (SA) alerts about software code flaws require costly manual effort to validate (i.e., determine whether each is true or false) and repair. As a result, organizations often severely limit the types of alerts they manually examine to the types of code flaws they worry about most. The tradeoff is that many true flaws may never be fixed. To make alert handling more efficient, the SEI developed and tested novel software that enables rapid deployment of a method to classify alerts automatically and accurately. We are implementing our solution in a new version of SCALe, the SEI's Source Code Analysis Lab application.
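To illustrate the general idea of classifying alerts from audit history (the SEI's actual classifier, features, and training data are not described here; the rule IDs, verdicts, and threshold below are purely illustrative), a minimal sketch might estimate, for each static analysis rule, the fraction of past audited alerts that turned out to be true flaws:

```python
from collections import defaultdict

# Hypothetical audit history: (rule, verdict) pairs, where a True verdict
# means a human auditor confirmed the alert as a real code flaw.
history = [
    ("EXP34-C", True), ("EXP34-C", True), ("EXP34-C", False),
    ("STR31-C", False), ("STR31-C", False), ("STR31-C", True),
]

def train(history):
    """Estimate P(true positive) per rule from past audit verdicts."""
    counts = defaultdict(lambda: [0, 0])  # rule -> [confirmed true, total]
    for rule, verdict in history:
        counts[rule][1] += 1
        if verdict:
            counts[rule][0] += 1
    return {rule: true / total for rule, (true, total) in counts.items()}

def classify(model, rule, threshold=0.5):
    """Label a new alert when the historical evidence is decisive."""
    p = model.get(rule)
    if p is None:
        return "uncertain"  # no history for this rule: leave for manual audit
    return "expected true" if p >= threshold else "expected false"

model = train(history)
print(classify(model, "EXP34-C"))  # 2 of 3 past audits were true
print(classify(model, "STR31-C"))  # only 1 of 3 past audits was true
```

A real classifier would use many more features than the rule ID alone (code metrics, tool agreement, and so on), but the workflow is the same: train on audited verdicts, then auto-label incoming alerts so auditors can focus on the uncertain ones.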
Watch the video.
SCAIFE: An Alert Auditing Classification Prototype
By Ebonie McNeil
In this SEI Cyber Minute, Ebonie McNeil explains how the Source Code Analysis Integrated Framework Environment (SCAIFE) prototype is intended to be used by developers and analysts who manually audit alerts.
SCAIFE provides automatic alert classification using machine learning, assigning each alert a level of confidence that it is true or false.
The SCAIFE prototype also enables organizations to apply formulas that prioritize static analysis alerts by using factors they care about.
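As a hypothetical sketch of such a prioritization formula (the factor names, weights, and scales below are illustrative assumptions, not SCAIFE's actual scheme), an organization might rank alerts by a weighted sum of the factors it cares about:

```python
# Illustrative weights: an organization that values severity most,
# then likelihood of a true positive, and penalizes costly repairs.
WEIGHTS = {"severity": 0.5, "likelihood": 0.3, "remediation_cost": -0.2}

def priority(alert):
    """Weighted sum of alert factors; a higher score means audit sooner."""
    return sum(WEIGHTS[factor] * alert[factor] for factor in WEIGHTS)

alerts = [
    {"id": 1, "severity": 9, "likelihood": 7, "remediation_cost": 2},
    {"id": 2, "severity": 4, "likelihood": 9, "remediation_cost": 8},
]
ranked = sorted(alerts, key=priority, reverse=True)
print([a["id"] for a in ranked])  # alert 1 outranks alert 2
```

Because the weights are just data, each organization can tune the formula to its own risk priorities without changing the auditing workflow.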
Watch the video.
STEM + Diversity = Greater Technology Innovation
By Tom Longstaff and Grace Lewis
The fields of science, technology, engineering, and math (STEM) can contribute to a nation's progress because they promote innovation and improve many aspects of our lives. However, statistics show an imbalance in the workforce because women and minorities are less likely to pursue careers in STEM fields. In this SEI Cyber Talk episode, Tom Longstaff and Grace Lewis discuss how fixing this imbalance can help promote even greater innovation in STEM fields. They examine what true diversity means and how representation in terms of culture and background, not just race and gender, can promote different points of view and lead to new solutions to the problems STEM researchers are trying to solve. They discuss how to promote diversity by reaching students at the right age and involving mentors from underrepresented groups to help break stereotypes about what it means to work in a STEM field. They also explore approaches and programs that are effective for schools, universities, and workplaces--including FFRDCs like the SEI--in getting students interested and involved in STEM fields.
Watch the video.
Systems Engineering--Software Engineering Interface for Cyber-Physical Systems
By Sarah Sheard, Michael E. Pafford (INCOSE Chesapeake), Mike Phillips
This paper describes work done by the Systems-Software Engineering Interface Working Group of INCOSE. The paper shows how the role of software has grown since the early days of INCOSE and how the organizational and technical interfaces between systems engineering and software engineering have expanded accordingly.
The paper also includes a table describing the activities that must be performed on a software-intensive system and the complementary roles of systems engineers and software engineers in ensuring those activities are complete. The role of software in communicating among technical components of systems-of-systems and cyber-physical systems is emphasized.
Download the paper.
Cyber Intelligence: Best Practices and Biggest Challenges
By Jared Ettinger
Cyber intelligence is a rapidly changing field, and many organizations do not have the people, time, and funding in place to build a cyber intelligence team, according to a report on cyber intelligence released in late May by researchers in the SEI's Emerging Technology Center. Lead author Jared Ettinger discusses the findings of the report, which provides a snapshot of best practices and biggest challenges along with three guides for implementing cyber intelligence with artificial intelligence, the Internet of Things, and public cyber threat frameworks.
View the podcast.