
Resilience, Model-Driven Engineering, Software Quality, and Android App Analysis - The Latest Research from the SEI

As part of an ongoing effort to keep you informed about our latest work, I would like to let you know about some recently published SEI technical reports and notes. These reports highlight the latest work of SEI technologists in governing operational resilience, model-driven engineering, software quality, Android app analysis, software architecture, and emerging technologies. This post lists each report and its authors, with links to the published reports on the SEI website.

Defining a Maturity Scale for Governing Operational Resilience
By Katie C. Stewart, Julia H. Allen, Audrey J. Dorofee, Michelle A. Valdez, Lisa R. Young

Achieving operational resilience in today's environment is becoming increasingly complex as the pace of technology and innovation continues to accelerate. Sponsorship, strategic planning, and oversight of operational resilience are the most crucial activities in developing and implementing an effective operational resilience management (ORM) system. These governance activities are described in detail in the CERT® Resilience Management Model enterprise focus (EF) process area (PA). To ensure operational resilience, an organization must identify shortfalls across these defined activities, make incremental improvements, and measure improvement against a defined, accepted maturity scale. The current version of the CERT Resilience Management Model (CERT-RMM V1.2) utilizes a maturity architecture (levels and descriptions) that may not meet the granularity needs for organizations committed to making incremental improvements in governing operational resilience. To achieve a more granular approach, the CERT-RMM Maturity Indicator Level (MIL) scale was developed for application across all CERT-RMM PAs.

The CERT Division of the SEI is conducting ongoing research into the current state of the practice of governing operational resilience and is developing specific, actionable steps for improving that governance. Study results provide the specific EF PA MIL scale for assessing maturity, identifying incremental improvements, and measuring improvement.
Download a PDF of the Report

Model-Driven Engineering: Automatic Code Generation and Beyond
By John Klein, Harry L. Levinson, Jay Marchetti

Increasing consideration of model-driven engineering (MDE) tools for software development efforts means that acquisition executives must more often deal with the following challenge: vendors claim that by using MDE tools, they can generate software code automatically and achieve high developer productivity. However, MDE consists of more than code generation tools; it is also a software engineering approach that can affect the entire lifecycle of a system, from requirements gathering through sustainment. This report focuses on the application of MDE tools for automatic code generation when acquiring systems built using these software development tools and processes. The report defines some terminology used by MDE tools and methods, emphasizing that MDE consists of both tools and methods that must align with the overall acquisition strategy. Next, it discusses how the use of MDE for automatic code generation affects acquisition strategy and introduces new risks to the program. It then offers guidance on selecting, analyzing, and evaluating MDE tools in the context of risks to an organization's acquisition effort throughout the system lifecycle. Appendices provide a questionnaire that an organization can use to gather information about vendor tools, along with tool-evaluation criteria, mapped to the questionnaire, that relate to acquisition concerns.

A supplementary file, the Questionnaire Template, is also available through the spreadsheet link. It contains the questionnaire used in this study and can be downloaded and used to collect information from vendors for your own model-driven engineering tool assessments.
Download a PDF of the Report

Improving Quality Using Architecture Fault Analysis with Confidence Arguments
By Peter H. Feiler, Charles B. Weinstock, John B. Goodenough, Julien Delange, Ari Z. Klein, Neil Ernst (University of British Columbia)

This case study shows how an analytical architecture fault-modeling approach can be combined with confidence arguments to diagnose a time-sensitive design error in a control system and to provide evidence that proposed changes to the system address the problem. The analytical approach, based on the SAE Architecture Analysis and Design Language (AADL) and its well-defined timing and fault behavior semantics, demonstrates that such hard-to-test errors can be discovered and corrected early in the lifecycle, thereby reducing rework cost. The case study shows that by combining the analytical approach with confidence maps, we can present a structured argument that system requirements have been met and that problems in the design have been addressed adequately, increasing our confidence in the system quality. The case study analyzes an aircraft engine control system that manages fuel flow with a stepper motor. The original design was developed and verified in a commercial model-based development environment without discovering the potential for missed step commanding. During system tests, actual fuel flow did not correspond to the desired fuel flow under certain circumstances. The problem was traced to missed execution of commanded steps due to variation in execution time.
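The failure mode described here, steps missed because execution time varies, lends itself to a small illustration. The Python sketch below is not taken from the report; it assumes a hypothetical controller that must issue one step command per fixed-length control frame and misses the step whenever its computation overruns the frame.

```python
import random

FRAME_MS = 10.0          # hypothetical control-frame length
NOMINAL_EXEC_MS = 8.0    # typical time to compute and issue a step
JITTER_MS = 3.0          # assumed worst-case execution-time variation

def run_frames(n_frames, seed=42):
    """Count commanded vs. actually executed steps over n control frames."""
    random.seed(seed)
    commanded = executed = 0
    for _ in range(n_frames):
        commanded += 1
        exec_time = NOMINAL_EXEC_MS + random.uniform(0.0, JITTER_MS)
        # If processing overruns the frame, the step command is issued
        # too late and the motor misses that step.
        if exec_time <= FRAME_MS:
            executed += 1
    return commanded, executed

commanded, executed = run_frames(10_000)
print(f"commanded={commanded} executed={executed} missed={commanded - executed}")
```

Even modest jitter produces a steady accumulation of missed steps, which is consistent with the symptom reported in the case study: the error surfaced only as a divergence between desired and actual fuel flow during system tests.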
Download a PDF of the Report

Making DidFail Succeed: Enhancing the CERT Static Taint Analyzer for Android App Sets
By Jonathan Burket, Lori Flynn, Will Klieber, Jonathan Lim, Wei Shen, William Snavely

This report describes recent significant enhancements to DidFail (Droid Intent Data Flow Analysis for Information Leakage), the CERT static taint analyzer for sets of Android apps. In addition to improvements to the analyzer itself, the enhancements include a new testing framework, new test apps, and test results. A framework for testing the DidFail analyzer, including a setup for cloud-based testing, was developed and instrumented to measure performance. Cloud-based testing enables the parallel use of powerful, commercially available virtual machines to speed up testing. DidFail was also modified to use the most current versions of FlowDroid and Soot, increasing its success rate from 18 percent to 68 percent on our test set of real-world apps. Analytical features were added for more types of components and for shared static fields, and new apps were developed to test these features. The improved DidFail analyzer and the cloud-based testing framework were used to test the new apps and additional apps from the Google Play store.
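Although the report abstract contains no code, the idea behind analyzing app *sets* rather than single apps can be sketched in a few lines. The toy Python below uses invented app names, sources, and sinks; it stands in for DidFail's composition step (per-app taint summaries first, then matching intent senders to receivers across apps) and does not use the actual FlowDroid or Soot APIs.

```python
# Toy illustration of composing per-app taint summaries into app-set flows.
# Each summary says either "this app reads SOURCE and sends it in an intent
# with ACTION" or "this app receives intents with ACTION and writes to SINK".
# All app names, sources, and sinks here are hypothetical.

from typing import NamedTuple

class Sends(NamedTuple):
    app: str
    source: str   # tainted data origin, e.g., the device ID
    action: str   # intent action used to send it

class Receives(NamedTuple):
    app: str
    action: str   # intent action the component listens for
    sink: str     # where the received data ends up, e.g., the network

# Per-app analysis would produce summaries like these:
senders = [Sends("EchoApp", "IMEI", "com.example.SEND")]
receivers = [Receives("LeakApp", "com.example.SEND", "Internet")]

def app_set_flows(senders, receivers):
    """Match intent senders to receivers across the whole app set."""
    for s in senders:
        for r in receivers:
            if s.action == r.action:
                yield f"{s.source} -> {s.app} -> {r.app} -> {r.sink}"

for flow in app_set_flows(senders, receivers):
    print("tainted flow:", flow)
# prints: tainted flow: IMEI -> EchoApp -> LeakApp -> Internet
```

The point of the sketch is that neither app looks dangerous in isolation; the leak appears only when their summaries are composed, which is exactly the kind of inter-app flow a set-level analyzer targets.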
Download a PDF of the Report

Eliminative Argumentation: A Basis for Arguing Confidence in System Properties
By John B. Goodenough, Charles B. Weinstock, Ari Z. Klein

Assurance cases provide a structured method of explaining why a system has some desired property, for example, that the system is safe. But there is no agreed approach for explaining what degree of confidence one should have in the conclusions of such a case. This report defines a new concept, eliminative argumentation, that provides a philosophically grounded basis for assessing how much confidence one should have in an assurance case argument. This report will be of interest mainly to those who are familiar with assurance case concepts and want to know why one argument, rather than another, provides more confidence in a claim. The report is also potentially of value to those interested more generally in argumentation theory.
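As a rough intuition for the idea, eliminative argumentation treats confidence as growing when identified reasons to doubt a claim (defeaters) are eliminated, rather than as a single probability. The sketch below, with an invented claim and invented defeaters, shows the kind of bookkeeping involved; it illustrates the general idea only and is not the report's notation.

```python
# Minimal sketch of eliminative-argumentation bookkeeping: confidence is
# expressed as "n of m identified defeaters eliminated" rather than as a
# probability. The claim and the defeaters below are invented examples.

claim = "The control system commands every required step"
defeaters = {
    "Execution time can overrun the control frame": True,    # eliminated by redesign
    "Worst-case execution time was never measured": True,     # eliminated by measurement
    "Analysis model may not match the implementation": False, # still open
}

eliminated = sum(defeaters.values())
print(f"Claim: {claim}")
print(f"Confidence: {eliminated} of {len(defeaters)} identified defeaters eliminated")
for text, done in defeaters.items():
    status = "[eliminated]" if done else "[open]      "
    print(f"  {status} {text}")
```

The open defeater makes explicit what further evidence would raise confidence, which is the practical payoff of structuring the argument this way.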
Download a PDF of the Report

Emerging Technology Domains Risk Survey
By Christopher King, Jonathan Chu, Andrew O. Mellinger

In today's increasingly interconnected world, the information security community must be prepared to address emerging vulnerabilities that may arise from new technology domains. Understanding trends and emerging technologies can help information security professionals, leaders of organizations, and others interested in information security to anticipate and prepare for such vulnerabilities.
This report, originally prepared in 2014 for the Department of Homeland Security United States Computer Emergency Readiness Team (US-CERT), provides a snapshot in time of the current understanding of future technologies. Each year, this report will be updated to include new estimates of adoption timelines, new technologies, and adjustments to the potential security impact of each domain. The report will also help US-CERT make informed decisions about the best areas in which to focus resources for identifying new vulnerabilities, promoting good security practices, and increasing understanding of systemic vulnerability risk.
Download a PDF of the Report

Additional Resources

For the latest SEI technical reports and notes, please visit https://resources.sei.cmu.edu/library/.
