This post was also co-authored by Douglas C. Schmidt and William Scherlis.
In its effort to increase the capability of the warfighter, the Department of Defense (DoD) has made incremental changes in its acquisition practices for building and deploying military capacity. This capacity can be viewed as "platforms" (tanks, ships, aircraft, etc.) and the mission system "payloads" (sensors, command and control, weapons, etc.) installed on those platforms to deliver the desired capability. This blog post, the first in a series excerpted from a recently published paper, explores opportunities in modularity and open systems architectures with the aim of helping the DoD deliver higher quality software to the warfighter with far greater innovation in less time.
A software product line is a collection of related products, built from shared software artifacts and engineering services, that a single organization develops to serve different missions and different customers. In industry, product lines provide both customer benefits (such as functionality, quality, and cost) and development organization benefits (such as time to market and price margins). Moreover, these benefits last through multiple generations of products. This post is the first in a series of three on sustaining product lines, covering the decisions required and the potential benefits of proposed approaches. In this post, I identify the potential benefits of a product line and discuss contracting issues in the context of Department of Defense (DoD) programs.
As Soon as Possible
In the first post in this series, I introduced the concept of the Minimum Viable Capability (MVC). While the intent of the Minimum Viable Product (MVP) strategy is to focus on rapidly developing and validating only essential product features, MVC adapts this strategy to systems that are too large, too complex, or too critical for MVP.
MVC is a scalable approach to validating a system of capabilities, each at the earliest possible time. Capability scope is limited (minimum) so that it can be produced as soon as possible. For MVP, "as soon as possible" is often just a few weeks. But what does "as soon as possible" mean for an MVC? This post explores how technical dependencies and testability determine that, and what this implies for a system roadmap. Let's start with the pattern of MVC activities to produce a major release.
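One way to make the dependency constraint concrete: a capability cannot be validated before the capabilities it depends on, so topologically ordering the dependency graph yields the earliest feasible validation sequence. The capability names and dependencies below are hypothetical, purely for illustration:

```python
from graphlib import TopologicalSorter

# Hypothetical capability dependencies: each capability maps to the
# set of capabilities that must be validated before it.
deps = {
    "navigation": set(),
    "sensor fusion": {"navigation"},
    "collision avoidance": {"sensor fusion"},
    "mission planning": {"navigation", "sensor fusion"},
}

# static_order() yields the capabilities in an order that respects
# every dependency -- the earliest each could plausibly be validated.
order = list(TopologicalSorter(deps).static_order())
print(order)
```

On a real program the edges would come from interface and integration analysis, not a hand-written dictionary, but the ordering logic is the same.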
This post was co-authored by Cecilia Albert and Harry Levinson.
At the SEI we have been involved in many programs whose intent is to increase the capability of software systems currently in sustainment. We have assisted government agencies that have implemented innovative contracting and development strategies that benefit those programs. The intent of this post is to explain three approaches that could help others in the DoD or federal government agencies who are trying to add capability to systems that are currently in sustainment. Software sustainment activities can include correcting known flaws, adding new capabilities, updating existing software to run on new hardware (often due to obsolescence issues), and updating the software infrastructure to make software maintenance easier.
It's common for large-scale cyber-physical systems (CPS) projects to burn huge amounts of time and money with little to show for it. Because the minimum viable product (MVP) strategy--fast and focused--stands in sharp contrast to the inflexible, ponderous product planning that has contributed to those fiascos, MVP has been touted as a useful corrective. The MVP strategy has become fixed in the constellation of Agile jargon and practices. However, when I tried to work out how to scale MVP for large and critical CPS, I found more gaps than fits. This is the first of three blog posts highlighting an alternative strategy that I created, the Minimum Viable Capability (MVC), which scales the essential logic of MVP for CPS. MVC adapts the intent of the MVP strategy--to focus on rapidly developing and validating only essential features--to systems that are too large, too complex, or too critical for MVP.
As part of an ongoing effort to keep you informed about our latest work, this blog post summarizes some recently published SEI reports, podcasts, and presentations highlighting our work in deep learning, cyber intelligence, interruption costs, digital footprints on social networks, managing privacy and security, and network traffic analysis. These publications highlight the latest work of SEI technologists in these areas. This post includes a listing of each publication, author(s), and links where they can be accessed on the SEI website.
When the rate of change inside an institution becomes slower than the rate of change outside, the end is in sight. - Jack Welch
In a world of agile everything, agile concepts are being applied in areas well beyond software development. At the NDIA Agile in Government Summit held in Washington, D.C. in June, Dr. George Duchak, the Deputy Assistant Secretary of Defense for Cyber, Command & Control, Communications & Networks, and Business Systems, spoke about the importance of agility to organizational success in a volatile, uncertain, complex, and ambiguous world. Dr. Duchak told the crowd that agile software development can't stand alone, but must be woven into the fabric of an organization and become a part of the way an organization's people, processes, systems, and data interact to deliver value. The business architecture must be constructed for agility.
I first wrote about agile strategic planning in my March 2012 blog post, Toward Agile Strategic Planning. In this post, I want to expand that discussion to look more closely at agile strategy, or short-cycle strategy development and execution; describe what it looks like when implemented; and examine how it supports organizational performance.
For many DoD missions, our ability to collect information has outpaced our ability to analyze that information. Graph algorithms and large-scale machine learning algorithms are key to analyzing the information agencies collect. They are also an increasingly important component of intelligence analysis, autonomous systems, cyber intelligence and security, logistics optimization, and more. In this blog post, we describe research to develop automated code generation for future-compatible graph libraries: building blocks of high-performance code that can be automatically generated for any future platform.
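As a simplified illustration of the kind of building block such libraries supply, here is a plain-Python breadth-first search, a kernel that underlies many graph analyses. This sketch is illustrative only; it is not the generated high-performance code the research targets:

```python
from collections import deque

def bfs_distances(adj, source):
    """Distance (in hops) from source to every reachable vertex.
    adj: dict mapping each vertex to an iterable of its neighbors."""
    dist = {source: 0}
    frontier = deque([source])
    while frontier:
        u = frontier.popleft()
        for v in adj[u]:
            if v not in dist:           # first visit = shortest hop count
                dist[v] = dist[u] + 1
                frontier.append(v)
    return dist

# A tiny example graph as an adjacency list.
graph = {0: [1, 2], 1: [3], 2: [3], 3: []}
print(bfs_distances(graph, 0))  # {0: 0, 1: 1, 2: 1, 3: 2}
```

A generated library would implement the same traversal with platform-specific parallelism and memory layouts rather than Python dictionaries.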
In the SEI's examination of the software sustainment phase of the Department of Defense (DoD) acquisition lifecycle, we have noted that the best descriptor for sustainment efforts for software is "continuous engineering." Typically, during this phase, the hardware elements are repaired or structurally modified to carry new weapons or sensors. Software, on the other hand, continues to evolve in response to new security threats, new safety approaches, or new functionality provided within the system of systems. In this blog post, I will examine the intersection of three themes--product line practices, software sustainment, and public-private partnerships--that emerged during our work with one government program. I will also highlight some issues we have uncovered that deserve further discussion and research.
Each year since the blog's inception, we have presented the 10 most-visited posts of the year in descending order, ending with the most popular post. In this blog post, we present the 10 most popular posts published between January 1, 2017, and December 31, 2017.
There's been a widespread movement in recent years from traditional waterfall development to Agile approaches in government software acquisition programs. This transition has created the need for personnel who oversee government software acquisitions to become fluent in metrics used to monitor systems developed with Agile methods. This post, which is a follow-up to my earlier post on Agile metrics, presents updates on our Agile-metrics work based on recent interactions with government programs.
As the defense workforce attracts younger staff members, this digital native generation is having an effect. "To accommodate millennial IT workers, so-called 'digital natives,'" wrote Phil Goldstein in a May 2016 FedTech article, "the service branches of the Department of Defense need to square cybersecurity with the attitudes and behaviors of younger employees, according to senior defense IT officials." Digital natives approach technology differently than digital immigrants--those born before the widespread use of technology. In this blog post, I explore five classic transition models to determine what, if any, considerations we need to account for in today's environment that differ from when the models were first published, many of them before the digital natives phenomenon was identified.
The five models are related to technology transition and adoption, and they answer the following questions:
- What kind of technology is it?
- How big is the adoption being contemplated?
- Who will be adopting the new technology?
- What must change agents or technologists do to improve the chance of the technology's success?
- How do we help people get from their current environment to one that leverages the new technology?
Each of these questions is supported by one or more 20th century transition models. Some are still useful as is; others may need to be adapted to the current environment. The observations about digital natives and digital immigrants come from my personal observations over the last 15 years in working with both populations, primarily transitioning practice-based technologies, such as Agile methods.
The first post in this series introduced the basic concepts of multicore processing and virtualization, highlighted their benefits, and outlined the challenges these technologies present. The second post addressed multicore processing, whereas the third and fourth posts concentrated on virtualization via virtual machines (VMs) and containers (containerization), respectively. This fifth and final post in the series provides general recommendations for the use of these three technologies--multicore processing, virtualization via VMs, and virtualization via containers--including mitigating their associated challenges.
The first blog entry in this series introduced the basic concepts of multicore processing and virtualization, highlighted their benefits, and outlined the challenges these technologies present. The second post addressed multicore processing, whereas the third post concentrated on virtualization via virtual machines. In this fourth post in the series, I define virtualization via containers, list its current trends, and examine its pros and cons, including its safety and security ramifications.
This post is the third in a series that focuses on multicore processing and virtualization, which are becoming ubiquitous in software development. The first blog entry in this series introduced the basic concepts of multicore processing and virtualization, highlighted their benefits, and outlined the challenges these technologies present. The second post addressed multicore processing. This third post concentrates on virtualization via virtual machines (VMs). Below I define the relevant concepts underlying virtualization via VMs, list its current trends, and examine its pros and cons.
As part of an ongoing effort to keep you informed about our latest work, this blog post summarizes some recently published SEI reports, podcasts, and webinars highlighting our work in coordinated vulnerability disclosure, scaling Agile methods, automated testing in Agile environments, ransomware, and Android app analysis. These publications highlight the latest work of SEI technologists in these areas. One SEI Special Report presents data related to DoD software projects and translates it into information that is frequently sought after across the DoD. This post includes a listing of each publication, author(s), and links where they can be accessed on the SEI website.
The first blog entry in this series introduced the basic concepts of multicore processing and virtualization, highlighted their benefits, and outlined the challenges these technologies present. This second post will concentrate on multicore processing, where I will define its various types, list its current trends, examine its pros and cons, and briefly address its safety and security ramifications.
Multicore processing and virtualization are rapidly becoming ubiquitous in software development. They are widely used in the commercial world, especially in large data centers supporting cloud-based computing, to (1) isolate application software from hardware and operating systems, (2) decrease hardware costs by enabling different applications to share underutilized computers or processors, (3) improve reliability and robustness by limiting fault and failure propagation and support failover and recovery, and (4) enhance scalability and responsiveness through the use of actual and virtual concurrency in architectures, designs, and implementation languages. Combinations of multicore processing and virtualization are also increasingly being used to build mission-critical, cyber-physical systems to achieve these benefits and leverage new technologies, both during initial development and technology refresh.
In this introductory blog post, I lay the foundation for the rest of the series by defining the basic concepts underlying multicore processing and the two main types of virtualization: (1) virtualization by virtual machines and hypervisors and (2) virtualization by containers. I will then briefly compare the three technologies and end by listing some key technical challenges these technologies bring to system and software development.
The crop of Top 10 SEI Blog posts in the first half of 2017 (judged by the number of visits by our readers) represents the best of what we do here at the SEI: transitioning our knowledge to those who need it. Several of our Top 10 posts this year are from a series of posts on best practices for network security that we launched in November 2016 in the wake of the Dyn attack. In this post, we list the Top 10 posts, each with an excerpt and links where readers can go for more information about the topics covered in the SEI Blog.
As part of an ongoing effort to keep you informed about our latest work, this blog post summarizes some recently published SEI technical reports, white papers, podcasts, and webinars on supply chain risk management, process improvement, network situational awareness, software architecture, and the network time protocol, as well as a podcast interview with SEI Fellow Peter Feiler. These publications highlight the latest work of SEI technologists in these areas. This post includes a listing of each publication, author(s), and links where they can be accessed on the SEI website.
This post is coauthored by Carol Woody.
Software is a growing component of business and mission-critical systems. As organizations become more dependent on software, security-related risks to their organizational missions also increase. We recently published a technical note that introduces the prototype Software Assurance Framework (SAF), a collection of cybersecurity practices that programs can apply across the acquisition lifecycle and supply chain. We envision program managers using this framework to assess an acquisition program's current cybersecurity practices and chart a course for improvement, ultimately reducing the cybersecurity risk of deployed software-reliant systems. This blog post, which is excerpted from the report, presents three pilot applications of SAF.
As part of an ongoing effort to keep you informed about our latest work, this blog post summarizes some recently published SEI technical reports, white papers, podcasts and webinars on software assurance, data governance, self-adaptive systems, engineering high-assurance software for distributed adaptive real-time (DART) systems, technical debt, and automating malware collection and analysis. These publications highlight the latest work of SEI technologists in these areas. This post includes a listing of each publication, author(s), and links where they can be accessed on the SEI website.
Since its debut on Jeopardy in 2011, IBM's Watson has generated a lot of interest in potential applications across many industries. I recently led a research team investigating whether the Department of Defense (DoD) could use Watson to improve software assurance and help acquisition professionals assemble and review relevant evidence from documents. As this blog post describes, our work examined whether typical developers could build an IBM Watson application to support an assurance review.
First responders, search-and-rescue teams, and military personnel often work in "tactical edge" environments defined by limited computing resources, rapidly changing mission requirements, high levels of stress, and limited connectivity. In these tactical edge environments, software applications that enable tasks such as face recognition, language translation, decision support, and mission planning and execution are critical, but the computing and battery limitations of mobile devices constrain what those applications can do locally. Our work on tactical cloudlets addresses some of these challenges by providing a forward-deployed platform for computation offload and data staging (see previous posts).
When establishing communication between two nodes--such as a mobile device and a tactical cloudlet in the field--identification, authentication, and authorization provide the information and assurances necessary for the nodes to trust each other (i.e., mutual trust). A common solution for establishing trust is to create and share credentials in advance and then use an online trusted authority to validate the credentials of the nodes. The tactical environments in which first responders, search-and-rescue teams, and military personnel operate, however, do not consistently provide access to that online authority or certificate repository because they are disconnected, intermittent, and limited (DIL). This blog post, excerpted from the recently published IEEE paper "Establishing Trusted Identities in Disconnected Edge Environments," which I coauthored with Sebastián Echeverría, Dan Klinedinst, and Keegan Williams, presents a solution for establishing trusted identities in disconnected environments based on secure key generation and exchange in the field, as well as an evaluation and implementation of the solution.
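To give a flavor of how two nodes can derive a shared secret in the field without an online authority, here is a toy Diffie-Hellman-style key agreement. This is not the protocol from the paper, and the parameters are deliberately small and insecure, purely for illustration:

```python
import hashlib
import secrets

# Toy parameters: 2**127 - 1 is prime but far too small for real use.
# Production systems use standardized groups (e.g., RFC 3526) and
# authenticate the exchange to prevent man-in-the-middle attacks.
P = 2**127 - 1
G = 3

def keypair():
    priv = secrets.randbelow(P - 2) + 2   # private exponent, kept local
    pub = pow(G, priv, P)                 # public value, safe to exchange
    return priv, pub

# Each node generates a keypair and exchanges only the public value.
a_priv, a_pub = keypair()
b_priv, b_pub = keypair()

# Both nodes derive the same shared secret from the other's public value.
a_secret = pow(b_pub, a_priv, P)
b_secret = pow(a_pub, b_priv, P)
assert a_secret == b_secret

# Hash the shared secret into a fixed-length symmetric session key.
session_key = hashlib.sha256(a_secret.to_bytes(16, "big")).digest()
```

The point of the sketch is only that the nodes never transmit a private value and never contact a trusted third party, which is what makes this style of exchange attractive in DIL environments.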
Interest in Agile and lightweight development methods in the software development community has become widespread. Our experiences with the application of Agile principles have therefore become richer. In my blog post, Toward Agile Strategic Planning, I wrote about how we can apply Agile principles to strategic planning. In this blog post, I apply another Agile concept, technical debt, to a different organizational excellence issue. Specifically, I explore whether organizational debt accrues when we implement quick organizational change, short-cutting what we know to be effective change management methods. Since I started considering this concept, Steve Blank wrote a well-received article about organizational debt in the context of start-up organizations. In this post, I describe organizational debt in the context of change management and describe some effects of organizational debt we are seeing with our government clients.