
A Framework for Evaluating Common Operating Environments

Steve Rosemergy

Large-scale DoD acquisition programs are increasingly being developed atop reusable software platforms--known as Common Operating Environments (COEs)--that provide applications and end users with net-centric capabilities, such as cloud computing and Web 2.0 applications, including data sharing, interoperability, user-centered design, and collaboration. Selecting an appropriate COE is critical to the success of acquisition programs, yet the processes and methods for evaluating COEs have not been clearly defined. I explain below how the SEI developed a software evaluation framework and applied it to help assess the suitability of COEs for the U.S. Army.

In May 2010, the Pentagon published the executive order Army Enterprise Common Operating Environment Convergence Plan, which called for consolidating two network strategies into a single plan so the U.S. Army can be more agile when deploying applications in mobile and non-mobile environments. The System of Systems Engineering Organization within the Office of the Assistant Secretary of the Army for Acquisition, Logistics, and Technology (ASA[ALT]) asked the SEI to help develop a software evaluation framework for assessing programs that implement large-scale and/or enterprise operating environments.

As part of this effort, the SEI helped the Army understand prospective program strengths and weaknesses: determining the right questions to ask, how to frame those questions to minimize distractions and misinformation, and how to correlate the resulting data. As a member of the SEI's Acquisition Support Program (ASP), my role was to develop the software evaluation framework for the Army's COE. This COE is an infrastructure that defines a set of reference standards (both commercial and problem domain-specific), software interfaces, data formats, protocols, and system services that enable distributed computing and allow distributed applications to communicate, coordinate, execute tasks, and respond to events in an integrated and/or predictable manner. The Army's COE infrastructure also includes oversight mechanisms that govern both the evolution of the infrastructure and the compliance of applications intended for deployment.
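Purely as an illustration of that definition, the sketch below shows one way a COE technical baseline (reference standards, data formats, and system services) could be captured as structured data and checked against an application's declared capabilities. The entries, field names, and the check_compliance function are hypothetical assumptions for this sketch, not part of the Army COE or the SEI framework.

```python
# Hypothetical, simplified model of a COE technical baseline: the reference
# standards, data formats, and system services an application must declare
# compliance against. All entries are illustrative, not the actual Army COE.

COE_BASELINE = {
    "reference_standards": ["HTTP/1.1", "TLS 1.2", "XML Schema 1.0"],
    "data_formats": ["JSON", "KML"],
    "system_services": ["messaging", "discovery", "identity"],
}

def check_compliance(app_manifest, baseline=COE_BASELINE):
    """Return the baseline items an application manifest fails to cover."""
    gaps = {}
    for category, required in baseline.items():
        declared = set(app_manifest.get(category, []))
        missing = [item for item in required if item not in declared]
        if missing:
            gaps[category] = missing
    return gaps

if __name__ == "__main__":
    app = {
        "reference_standards": ["HTTP/1.1", "TLS 1.2"],
        "data_formats": ["JSON"],
        "system_services": ["messaging", "discovery", "identity"],
    }
    # Reports the standards and data formats this hypothetical app is missing.
    print(check_compliance(app))
```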

The goals of our effort with the Army were similar to those of engagements ASP conducts with other large-scale, software-reliant DoD systems: improved agility and reduced costs. We first determined who was involved in the evaluation and developed a common understanding among stakeholders with respect to the evaluation scope and foundational language. We then developed mechanisms for gaining objective insight into prospective technical solutions. Finally, we reconciled that objective picture with the Army's goals for undertaking this approach.

Our COE evaluation framework leveraged the SEI's work on service-oriented architecture for mission-critical systems-of-systems to identify key quality attributes. These attributes are non-functional requirements used to evaluate how well a system performs. They fall into broad categories including architecture, adaptability, business objectives, functional capability, interoperability, support for governance, licensing, quality of service, and performance. We then developed a process for eliciting the Army's objectives, collecting data, and characterizing technical solutions. Finally, to accelerate and reinforce the constructs of the framework, we developed a tool that operationalized the software evaluation framework by codifying a set of practices, a set of questions, and an evaluation container that captures the process inputs, outputs, and results.
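To make the correlation between goals and quality attributes concrete, here is a minimal sketch, assuming a simple weighted-scoring model, of how an evaluation container might record attribute ratings for one candidate solution. The attribute categories echo those listed above, but the weights, scores, and function names are hypothetical and do not reflect the actual SEI framework or tool.

```python
# Illustrative sketch only: a minimal weighted-scoring model for correlating
# program goals (expressed as weights) with quality-attribute ratings.
# All weights, scores, and names are hypothetical.

from dataclasses import dataclass

@dataclass
class AttributeRating:
    name: str       # quality-attribute category, e.g., "interoperability"
    weight: float   # relative importance elicited from program goals (0-1)
    score: float    # evaluator rating of the candidate solution (0-10)

def weighted_score(ratings):
    """Return the weight-normalized score for one candidate solution."""
    total_weight = sum(r.weight for r in ratings)
    if total_weight == 0:
        return 0.0
    return sum(r.weight * r.score for r in ratings) / total_weight

if __name__ == "__main__":
    candidate = [
        AttributeRating("interoperability", 0.30, 7.5),
        AttributeRating("adaptability", 0.20, 6.0),
        AttributeRating("support for governance", 0.25, 8.0),
        AttributeRating("quality of service", 0.25, 5.5),
    ]
    print(f"Weighted score: {weighted_score(candidate):.2f} / 10")
```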

The Army used our framework to evaluate one prospective technical solution under development for the Army COE. The SEI participated by facilitating the elicitation of common terminology, the collection of framework data, and the correlation of goals with specific quality attributes. Based on our experience, the software evaluation framework provided much more than a tool for evaluating technical solutions--it provided a transformative capability that institutionalized a common understanding of the technical terms, the scope of the work, and the importance and role of goals in the evaluation process. Army migration to a COE is underway, and our software evaluation framework helped accelerate and focus this migration effort.

The recently published report, A Framework for Evaluating Common Operating Environments: Piloting, Lessons Learned, and Opportunities, highlights our recent engagement with the Army's COE assessment process. The report also explores the interdependencies among the elements that comprise the SEI's software evaluation framework: a common language that normalizes understanding among stakeholders (ensuring comparison of commensurable systems and consistent outcomes), business drivers, and software architecture.

To download a copy of the report, please visit www.sei.cmu.edu/library/abstracts/reports/10sr025.cfm

