Category: Architecture

Software design problems, often the result of optimizing for delivery speed, are a significant driver of long-term software costs. Automatically detecting such design problems is a high priority for software practitioners. Software quality tools aim to automatically detect violations of common software quality rules. However, since these tools bundle many rules, including rules for code quality, it is hard for users to understand which rules identify design issues in particular. This blog post presents a rubric we created that quickly and accurately separates design rules from non-design rules, allowing static analysis tool users to focus on the high-value findings.

Over the past six months, we have developed new security-focused modeling tools that capture vulnerabilities and their propagation paths in an architecture. Recent reports (such as the remote attack surface analysis of automotive systems) show that security is no longer only a matter of code; it is tightly tied to the software architecture. These new tools are our contribution toward improving system and software analysis. We hope they will advance other work on security modeling and analysis and prove useful to security researchers and analysts. This post explains the motivation for our work, the available tools, and how to use them.

Software engineers increasingly recognize technical debt as a problem they care about, but they lack methods and tools to help them strategically plan, track, and pay down debt. The concept provides a vocabulary for engaging researchers from a practice point of view, but researchers often lack the empirical basis and data science needed to validate their work on technical debt. Our recent Dagstuhl Seminar on Managing Technical Debt in Software Engineering provided a venue for assessing how far the study of technical debt has come and envisioning where we need to go to further the science and practice of managing technical debt in the software life cycle.

Edward J. Schwartz, a research scientist on the vulnerability analysis team, co-authored this post.

Software engineers face a universal problem when developing software: weighing the benefit of an approach that is expedient in the short-term, but which can lead to complexity and cost over the long term. In software-intensive systems, these tradeoffs can create technical debt, which is a design or implementation construct that is expedient in the short term, but which sets up a technical context that can make future changes more costly or even impossible.

Safety-critical software must be analyzed and checked carefully. Each potential error, failure, or defect must be considered and evaluated before you release a new product. For example, if you are producing a quadcopter drone, you would like to know the probability of engine failure to evaluate the system's reliability. Safety analysis is hard. Standards such as ARP4761 mandate several analyses, such as Functional Hazard Assessment (FHA) and Failure Mode and Effects Analysis (FMEA). One popular type of safety analysis is Fault Tree Analysis (FTA), which provides a graphical representation of all contributors to a failure (e.g., error events and propagations). In this blog post, I present the concepts of FTA and introduce a new tool for designing and analyzing fault trees.
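
To make the arithmetic behind an FTA concrete, the sketch below computes the probability of a top-level failure from basic events combined through AND and OR gates, assuming the events are independent. It is a toy illustration only: the event names and probabilities are invented, and the actual tooling works on AADL error models, not on this structure.

```python
# Toy fault tree evaluator for independent basic events:
# an AND gate multiplies probabilities; an OR gate uses 1 - prod(1 - p).

def p_and(*probs):
    result = 1.0
    for p in probs:
        result *= p
    return result

def p_or(*probs):
    result = 1.0
    for p in probs:
        result *= (1.0 - p)
    return 1.0 - result

# Hypothetical quadcopter example: the vehicle is lost if both redundant
# motors fail (AND gate) or if the flight controller fails (OR gate).
motor_a = 1e-3      # assumed probability of motor A failure per flight
motor_b = 1e-3      # assumed probability of motor B failure per flight
controller = 1e-5   # assumed probability of controller failure per flight

both_motors = p_and(motor_a, motor_b)   # 1e-6
loss = p_or(both_motors, controller)    # ~1.1e-5

print(f"P(both motors fail) = {both_motors:.2e}")
print(f"P(vehicle loss)     = {loss:.2e}")
```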

The crop of Top 10 SEI blog posts published in the first half of 2016 (judged by the number of visits by our readers) represents a cross section of the type of cutting-edge work that we do at the SEI: at-risk emerging technologies, cyber intelligence, big data, vehicle cybersecurity, and what ant colonies can teach us about securing the internet. In all, readers visited the SEI blog more than 52,000 times for the first six months of 2016. We appreciate your readership and invite you to submit ideas for future posts in the comments section below. In this post, we will list the Top 10 posts in descending order (#10 to #1) and then provide an excerpt from each post, as well as links to where readers can go for more information about the topics covered in the SEI blog.

As part of an ongoing effort to keep you informed about our latest work, I would like to let you know about some recently published SEI technical reports, white papers, webinars, and podcasts. These publications highlight the latest work of SEI technologists in military situational analysis, software architecture, insider threat, honeynets, and threat modeling. This post includes a listing of each publication, author(s), and links where they can be accessed on the SEI website.

Recent research has demonstrated that in large-scale software systems, bugs seldom exist in isolation. As detailed in a previous post in this series, bugs are often architecturally connected. These architectural connections are design flaws. Static analysis tools cannot find many of these flaws, so they are typically not addressed early in the software development lifecycle. Such flaws, if they are detected at all, are found after the software has been in use, at which point they are far more costly and time-consuming to address.

In our first post in this series, we presented a tool that supports a new architecture model that can identify structures in the design (based on an analysis of a project's code base) that have a high likelihood of containing bugs. Typically, investment in refactoring to remove such design flaws has been difficult for architects to justify or quantify to their managers. The costs of refactoring are immediate and up-front. The benefits have, in the past, been vague and long-term. And so managers typically refuse to invest in refactoring.
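
One intuition behind such design-flaw detection is that files repeatedly implicated in the same bug fixes are architecturally connected, even when no static dependency links them. As a rough illustrative proxy only (this is not the model described in these posts, which is considerably more sophisticated), the sketch below mines a commit history for file pairs that frequently change together in bug-fix commits; the commit data is invented for the example:

```python
from collections import Counter
from itertools import combinations

# Invented commit log entries: (message, files touched). In practice
# these would be mined from the project's version control history.
commits = [
    ("fix crash in parser",       ["parser.c", "lexer.c", "symtab.c"]),
    ("fix symbol lookup bug",     ["symtab.c", "parser.c"]),
    ("add command-line flag",     ["main.c"]),
    ("fix lexer buffer overflow", ["lexer.c", "parser.c"]),
]

pair_counts = Counter()
for message, files in commits:
    if "fix" in message.lower():  # crude bug-fix heuristic
        for pair in combinations(sorted(files), 2):
            pair_counts[pair] += 1

# File pairs that co-change repeatedly in bug fixes are candidates for
# hidden architectural coupling worth a refactoring business case.
for (f1, f2), n in pair_counts.most_common(3):
    print(f"{f1} <-> {f2}: co-changed in {n} bug-fix commits")
```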

In this post, which was excerpted from a recently published paper that I coauthored with Yuanfang Cai, Ran Mo, Qiong Feng, Lu Xiao, Serge Haziyev, Volodymyr Fedak, and Andriy Shapochka, we present a case study of our approach with SoftServe Inc., a leading software outsourcing company. In this case study we show how we can represent architectural technical debt (hereinafter, architectural debt) concretely, in terms of its cost and schedule implications (issues of importance that are easily understood by project managers). In doing so, we can create business cases to justify refactoring to remove root causes of the architectural debt.

When I was a chief architect working in industry, I was repeatedly asked the same questions: What makes an architect successful? What skills does a developer need to become a successful architect? There are no easy answers to these questions. For example, in my experience, architects are most successful when their skills and capabilities match a project's specific needs. Too often, in answering the question of what skills make a successful architect, the focus is on skills such as communication and leadership. While these are important, an architect must have strong technical skills to design, model, and analyze the architecture. As this post will explain, as a software system moves through its lifecycle, each phase calls for the architect to use a different mix of skills. This post also identifies three failure patterns that I have observed working with industry and government software projects.

This post was also co-authored by Carol Woody.

Increasingly, software development organizations are finding that a large number of their vulnerabilities stem from design weaknesses rather than coding vulnerabilities. Recent statistics indicate that research should focus on identifying design weaknesses to reduce software bug volume. In 2011, for example, when MITRE released its list of the 25 most dangerous software errors, approximately 75 percent of those errors represented design weaknesses. Viewed through another lens, more than one third of the current 940 known common weakness enumerations (CWEs) are design weaknesses. Static analysis tools cannot find these weaknesses, so they are not addressed prior to implementation and typically are detected only after the software is in use, when vulnerabilities are much more costly and time-consuming to address. In this blog post, the first in a series, we present a tool that supports a new architecture model that can identify structures in the design and code base that have a high likelihood of containing bugs, hidden dependencies in code bases, and structural design flaws.

As our world becomes increasingly software-reliant, reports of security issues in the interconnected devices that we use throughout our day (i.e., the Internet of Things) are also increasing. This blog post discusses how to capture security requirements in architecture models, use them to build secure systems, and reduce potential security defects. This post also provides an overview of our ongoing research agenda on using architecture models for the design, analysis, and implementation of secure cyber-physical systems and for the specification, validation, and implementation of secure systems.

By Julien Delange
Member of the Technical Staff
Software Solutions Division

For decades, safety-critical systems have become more software-intensive in every domain--in avionics, aerospace, automobiles, and medicine. Software acquisition is now one of the biggest production costs for safety-critical systems. These systems are made up of many software and hardware components, executed on different processing units, and interconnected using various buses and protocols. For instance, cars are now equipped with more than 70 electronic control units (ECUs) interconnected with different buses and require about 100 million source lines of code (SLOC) to provide driver assistance, entertainment systems, and all necessary safety features. This blog post discusses the impact of complexity in software models and presents our tool that produces complexity metrics from software models.

By Douglas C. Schmidt
Principal Researcher

As part of an ongoing effort to keep you informed about our latest work, I would like to let you know about some recently published SEI technical reports, technical notes, and white papers. These reports highlight the latest work of SEI technologists in Agile software development and Agile-at-scale, software architecture fault analysis, computer network design, confidence in system properties, and system-of-systems development as well as commentary from two CERT researchers on the Proposed BIS Wassenaar Rule. This post includes a listing of each report, author(s), and links where the published reports can be accessed on the SEI website.

In their haste to deliver software capabilities, developers sometimes engage in less-than-optimal coding practices. If not addressed, these shortcuts can ultimately yield unexpected rework costs that offset the benefits of rapid delivery. Technical debt conceptualizes the tradeoff between the short-term benefits of rapid delivery and long-term value. Taking shortcuts to expedite the delivery of features in the short term incurs technical debt, analogous to financial debt, that must be paid off later to optimize long-term success. Managing technical debt is an increasingly critical aspect of producing cost-effective, timely, and high-quality software products, especially in projects that apply agile methods.
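
The debt analogy can be made concrete with a little arithmetic. The toy model below (all numbers invented) compares the days saved by a shortcut against the "interest" it charges on every subsequent release until the debt is repaid:

```python
# Toy technical-debt economics: a shortcut saves time now but adds
# rework "interest" to each release until the debt is paid off.
shortcut_savings = 5      # days saved at initial delivery (assumed)
interest_per_release = 1  # extra rework days per release (assumed)
refactoring_cost = 8      # days to repay the principal later (assumed)

def net_cost(releases_before_payoff):
    interest = interest_per_release * releases_before_payoff
    return refactoring_cost + interest - shortcut_savings

for n in (2, 5, 10):
    print(f"repaid after {n:2d} releases: net cost {net_cost(n):+d} days")
```

The longer repayment is deferred, the more the accumulated interest erodes the original savings, which is why deliberate tracking of debt matters.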

This is the second installment of two blog posts highlighting recommended practices for developing safety-critical systems that were originally published on the Cyber Security & Information Systems Information Analysis Center (CSIAC) website. The first post in the series by Peter Feiler, Julien Delange, and Charles Weinstock explored challenges to developing safety-critical systems and presented the first three practices:

  1. Use quality attribute scenarios and mission-thread analyses to identify safety-critical requirements.
  2. Specify safety-critical requirements, and prioritize them.
  3. Conduct hazard and static analyses to guide architectural and design decisions.

Software and acquisition professionals often have questions about recommended practices related to modern software development methods, techniques, and tools, such as how to apply agile methods in government acquisition frameworks, systematic verification and validation of safety-critical systems, and operational risk management. In the Department of Defense (DoD), these techniques are just a few of the options available to face the myriad challenges in producing large, secure software-reliant systems on schedule and within budget.

Using the Architecture Analysis & Design Language (AADL) modeling notation early in the development process not only helps the development team detect design errors before implementation, but also supports implementation efforts and produces high-quality code. Our recent blog posts and webinar have shown how AADL can identify potential design errors and help avoid propagating them through the development process, where remediation can require massive re-engineering, delay the schedule, and increase costs.

Legacy systems represent a massive operations and maintenance (O&M) expense. According to a recent study, 75 percent of North American and European enterprise information technology (IT) budgets are expended on ongoing O&M, leaving a mere 25 percent for new investments. Another study found nearly three quarters of the U.S. federal IT budget is spent supporting legacy systems. For decades, the Department of Defense (DoD) has been attempting to modernize about 2,200 business systems, which consume billions of dollars in annual expenditures intended to support business functions and operations.

In Department of Defense (DoD) programs, cooperation among software and system components is critical. A system of systems (SoS) is used to accomplish missions where cooperation among individual systems provides (new) capabilities that the systems could not provide on their own. SoS capabilities are a major driver in the architecture of the SoS and the selection of its constituent systems. There are additional critical drivers, however, that must be accounted for in the architecture because they significantly impact the behavior of the SoS capabilities, as well as the development and sustainment of the SoS and its constituent systems' architectures. These additional drivers are the quality attributes, such as performance, availability, scalability, security, usability, testability, safety, training, reusability, interoperability, and maintainability. This blog post, the first in a series, introduces the Mission Thread Workshop (MTW) and describes the role it plays in assisting SoS programs to elicit and refine end-to-end SoS mission threads augmented with quality attribute considerations.

Mismatched assumptions about hardware, software, and their interactions often result in system problems detected too late in the development lifecycle, which is an expensive and potentially dangerous situation for developers and users of mission- and safety-critical technologies. To address this problem, the Society of Automotive Engineers (SAE) released the aerospace standard AS5506, named the Architecture Analysis & Design Language (AADL). The AADL standard defines a modeling notation, based on textual and graphic representations, used by development organizations to conduct lightweight, rigorous--yet comparatively inexpensive--analyses of critical real-time factors, such as performance, dependability, security, and data integrity.

Over the years, software architects and developers have designed many methods and metrics to evaluate software complexity and its impact on quality attributes, such as maintainability and performance. Existing studies and experience have shown that highly complex systems are harder to understand, maintain, and upgrade. Managing software complexity is therefore useful, especially for software that must be maintained for many years.
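
As a concrete example of one classic code-level metric (distinct from the model-level metrics discussed in this post), the sketch below computes McCabe's cyclomatic complexity, M = E - N + 2P, from a control-flow graph; the graph is a made-up routine with a single if/else:

```python
# McCabe's cyclomatic complexity: M = E - N + 2P, where E is the number
# of control-flow edges, N the number of nodes, and P the number of
# connected components (1 for a single routine).

def cyclomatic_complexity(edges, nodes, components=1):
    return len(edges) - len(nodes) + 2 * components

# Hypothetical control-flow graph of a routine with one if/else branch:
nodes = {"entry", "cond", "then", "else", "exit"}
edges = [("entry", "cond"), ("cond", "then"), ("cond", "else"),
         ("then", "exit"), ("else", "exit")]

print(cyclomatic_complexity(edges, nodes))  # 5 - 5 + 2 = 2
```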

Given that up to 70 percent of system errors are introduced during the design phase, stakeholders need a modeling language that will ensure both requirements enforcement during the development process and the correct implementation of these requirements. Previous work demonstrates that using the Architecture Analysis & Design Language (AADL) early in the development process not only helps detect design errors before implementation, but also supports implementation efforts and produces high-quality code. Our latest blog posts and a recent webinar have shown how AADL can identify potential design errors and avoid propagating them through the development process. Verified specifications, however, are still implemented manually.

Continuous delivery practices, popularized in Jez Humble's 2010 book Continuous Delivery, enable rapid and reliable software system deployment by emphasizing the need for automated testing and building, as well as closer cooperation between developers and delivery teams. As part of the Carnegie Mellon University Software Engineering Institute's (SEI) focus on Agile software development, we have been researching ways to incorporate quality attributes into the short iterations common to Agile development.

The term big data is a subject of much hype in both government and business today. Big data is variously the cause of all existing system problems and, simultaneously, the savior that will lead us to the innovative solutions and business insights of tomorrow. All this hype fuels predictions such as the one from IDC that the market for big data will reach $16.1 billion in 2014, growing six times faster than the overall information technology market, despite the fact that the "benefits of big data are not always clear today," according to IDC. From a software-engineering perspective, however, the challenges of big data are very clear, since they are driven by ever-increasing system scale and complexity. This blog post, a continuation of my last post on the four principles of building big data systems, describes how we must address one of these challenges, namely, you can't manage what you don't monitor.

In earlier posts on big data, I have written about how long-held design approaches for software systems simply don't work as we build larger, scalable big data systems. Examples of design factors that must be addressed for success at scale include the need to handle the ever-present failures that occur at scale, assure the necessary levels of availability and responsiveness, and devise optimizations that drive down costs. Of course, the required application functionality and engineering constraints, such as schedule and budgets, directly impact the manner in which these factors manifest themselves in any specific big data system. In this post, the latest in my ongoing series on big data, I step back from specifics and describe four general principles that hold for any scalable, big data system. These principles can help architects continually validate major design decisions across development iterations, and hence provide a guide through the complex collection of design trade-offs all big data systems require.

In the first half of this year, the SEI blog has experienced unprecedented growth, with visitors in record numbers learning more about our work in big data, secure coding for Android, malware analysis, Heartbleed, and V Models for Testing. In the first six months of 2014 (through June 20), the SEI blog logged 60,240 visits, nearly equaling the entire 2013 total of 66,757 visits. As we reach the mid-year point, this blog posting takes a look back at our most popular areas of work (at least according to you, our readers) and highlights our most popular blog posts for the first half of 2014, as well as links to additional related resources that readers might find of interest.

Introducing new software languages, tools, and methods in industrial and production environments incurs a number of challenges. Among other necessary changes, practices must be updated, and engineers must learn new methods and tools. These updates incur additional costs, so transitioning to a new technology must be carefully evaluated and discussed. Moreover, the impact and associated costs of introducing a new technology vary significantly with the type of project, team size, engineers' backgrounds, and other factors, making it hard to estimate the real acquisition costs. A previous post in our ongoing series on the Architecture Analysis and Design Language (AADL) described the use of AADL in research projects (such as System Architectural Virtual Integration (SAVI)) in which experienced researchers explored the language's capabilities for capturing and analyzing safety-critical systems from different perspectives. These successful projects have demonstrated the accuracy of AADL as a modeling notation. This blog post presents research conducted independently of the SEI that aims to evaluate the safety concerns of several unmanned aerial vehicle (UAV) systems using AADL and the SEI safety analysis tools implemented in OSATE.

The Architecture Analysis and Design Language (AADL) is a modeling language that, at its core, allows designers to specify the structure of a system (components and connections) and analyze its architecture. From a security point of view, for example, we can use AADL to verify that a high-security component does not communicate with a low-security component and, thus, ensure that one type of security leak is prevented by the architecture. The ability to capture the behavior of a component allows for even better use of the model. This blog post describes a tool developed to support the AADL Behavior Annex and allow architects to import behavior from Simulink (or potentially any other notation) into an architecture model.
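
To make that kind of architectural security check concrete, here is a minimal sketch over a toy component model; a real analysis would run inside OSATE against AADL components annotated with security properties, and the component names and levels below are invented:

```python
# Toy architecture model: components carry a security level, and
# connections move data from a source component to a destination.
LEVELS = {"low": 0, "high": 1}

components = {
    "crypto_engine":  "high",
    "mission_data":   "high",
    "telemetry_link": "low",
    "display":        "low",
}

connections = [
    ("mission_data", "crypto_engine"),    # high -> high: allowed
    ("crypto_engine", "telemetry_link"),  # high -> low: flagged
    ("telemetry_link", "display"),        # low -> low: allowed
]

for src, dst in connections:
    if LEVELS[components[src]] > LEVELS[components[dst]]:
        print(f"VIOLATION: {src} ({components[src]}) sends data to "
              f"{dst} ({components[dst]})")
```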

The Better Buying Power 2.0 initiative is a concerted effort by the United States Department of Defense to achieve greater efficiencies in the development, sustainment, and recompetition of major defense acquisition programs through cost control, elimination of unproductive processes and bureaucracy, and promotion of open competition. This SEI blog posting describes how the Navy is operationalizing Better Buying Power in the context of their Open Systems Architecture and Business Innovation initiatives.

Although software is increasingly important to the success of government programs, there is often little consideration given to its impact on early key program decisions. The Carnegie Mellon University Software Engineering Institute (SEI) is conducting a multi-phase research initiative aimed at answering the question: is the probability of a program's success improved through deliberately producing a program acquisition strategy and software architecture that are mutually constrained and aligned?

As part of an ongoing effort to keep you informed about our latest work, I would like to let you know about some recently published SEI technical reports and notes. These reports highlight the latest work of SEI technologists in systems of systems integration from an architectural perspective, unintentional insider threat that derives from social engineering, identifying physical security gaps in international mail processing centers and similar facilities, countermeasures used by cloud service providers, the Team Software Process (TSP), and key automation and analysis techniques. This post includes a listing of each report, author(s), and links where the published reports can be accessed on the SEI website.

The process of designing and analyzing software architectures is complex. Architectural design is a minimally constrained search through a vast multi-dimensional space of possibilities. The end result is that architects are seldom confident that they have done the job optimally, or even satisfactorily. Over the past two decades, practitioners and researchers have used architectural patterns to expedite sound software design. Architectural patterns are prepackaged chunks of design that provide proven structural solutions for achieving particular software system quality attributes, such as scalability or modifiability. While use of patterns has simplified the architectural design process somewhat, key challenges remain. This blog explores these challenges and our solutions for achieving system security qualities through use of patterns.

Many types of software systems, including big data applications, lend themselves to highly incremental and iterative development approaches. In essence, system requirements are addressed in small batches, enabling the delivery of functional releases of the system at the end of every increment, typically once a month. The advantages of this approach are many and varied. Perhaps foremost is the fact that it constantly forces the validation of requirements and designs before too much progress is made in inappropriate directions. Ambiguity and change in requirements, as well as uncertainty in design approaches, can be rapidly explored through working software systems, not simply models and documents. Necessary modifications can be carried out efficiently and cost-effectively through refactoring before code becomes too 'baked' and complex to easily change. This posting, the second in a series addressing the software engineering challenges of big data, explores how the nature of building highly scalable, long-lived big data applications influences iterative and incremental design approaches.

As part of our mission to advance the practice of software engineering and cybersecurity through research and technology transition, our work focuses on ensuring that software-reliant systems are developed and operated with predictable and improved quality, schedule, and cost. To achieve this mission, the SEI conducts research and development activities involving the Department of Defense (DoD), federal agencies, industry, and academia. As we look back on 2013, this blog posting highlights our many R&D accomplishments during the past year.

As the pace of software delivery increases, organizations need guidance on how to deliver high-quality software rapidly, while simultaneously meeting demands related to time-to-market, cost, productivity, and quality. In practice, demands for adding new features or fixing defects often take priority. However, when software developers are guided solely by project management measures, such as progress on requirements and defect counts, they ignore the impact of architectural dependencies, which can impede the progress of a project if not properly managed. In previous posts on this blog, my colleague Ipek Ozkaya and I have focused on architectural technical debt, which refers to the rework and degraded quality resulting from overly hasty delivery of software capabilities to users. This blog post describes a first step towards an approach we developed that aims to use qualitative architectural measures to better inform quantitative code quality metrics.

Safety-critical avionics, aerospace, medical, and automotive systems are becoming increasingly reliant on software. Malfunctions in these systems can have significant consequences, including mission failure and loss of life, so they must be designed, verified, and validated carefully to ensure that they comply with system specifications and requirements and are error-free. In the automotive domain, for example, cars contain many electronic control units (ECUs)--today's standard vehicle can contain up to 30 ECUs--that communicate to control systems such as airbag deployment, anti-lock brakes, and power steering.

To deliver enhanced integrated warfighting capability at lower cost across the enterprise and over the lifecycle, the Department of Defense (DoD) must move away from stove-piped solutions and towards a limited number of technical reference frameworks based on reusable hardware and software components and services. There have been previous efforts in this direction, but in an era of sequestration and austerity, the DoD has reinvigorated its efforts to identify effective methods of creating more affordable acquisition choices and reducing the cycle time for initial acquisition and new technology insertion. This blog posting is part of an ongoing series on how acquisition professionals and system integrators can apply Open Systems Architecture (OSA) practices to decompose large monolithic business and technical designs into manageable, capability-oriented frameworks that can integrate innovation more rapidly and lower total ownership costs. The focus of this posting is on the evolution of DoD combat systems from ad hoc stovepipes to more modular and layered architectures.

The size and complexity of aerospace software systems have increased significantly in recent years. When measured in source lines of code (SLOC), the size of these systems has doubled every four years since the mid-1990s, according to a recent SEI technical report. The 27 million SLOC expected to be produced from 2010 to 2020 is projected to cost more than $10 billion. These increases in size and cost have also been accompanied by significant increases in errors and rework after a system has been deployed. Mismatched assumptions between hardware, software, and their interactions often result in system problems that are detected only after the system has been deployed, when rework is much more expensive to complete.

New data sources, ranging from diverse business transactions to social media, high-resolution sensors, and the Internet of Things, are creating a digital tidal wave of big data that must be captured, processed, integrated, analyzed, and archived. Big data systems storing and analyzing petabytes of data are becoming increasingly common in many application areas. These systems represent major, long-term investments requiring considerable financial commitments and massive scale software and system deployments.

When life- and safety-critical systems fail (and this happens in many domains), the results can be dire, including loss of property and life. These types of systems are increasingly prevalent, and can be found in the attitude and control systems of a satellite, the software-reliant systems of a car (such as its cruise control and anti-lock braking system), or medical devices that emit radiation. When developing such systems, software and systems architects must balance the need for stability and safety with stakeholder demands and time-to-market constraints. The Architecture Analysis & Design Language (AADL) helps software and system architects address the challenges of designing life- and safety-critical systems by providing a modeling notation with well-defined real-time and architectural semantics that employ textual and graphic representations. This blog posting, part of an ongoing series on AADL, focuses on the initial foundations of AADL.

Agile projects with incremental development lifecycles are showing greater promise in enabling organizations to rapidly field software compared to waterfall projects. There is a lack of clarity, however, regarding the factors that constitute and contribute to the success of Agile projects. A team of researchers from Carnegie Mellon University's Software Engineering Institute, including Ipek Ozkaya, Robert Nord, and myself, interviewed project teams with incremental development lifecycles from five government and commercial organizations. This blog posting summarizes the findings from this study of the key success and failure factors for rapid fielding on these projects.

Department of Defense (DoD) program managers and associated acquisition professionals are increasingly called upon to steward the development of complex, software-reliant combat systems. In today's environment of expanded threats and constrained resources (e.g., sequestration), their focus is on minimizing the cost and schedule of combat-system acquisition, while simultaneously ensuring interoperability and innovation. A promising approach for meeting these challenging goals is Open Systems Architecture (OSA), which combines (1) technical practices designed to reduce the cycle time needed to acquire new systems and insert new technology into legacy systems and (2) business models for creating a more competitive marketplace and a more effective strategy for managing intellectual property rights in DoD acquisition programs. This blog posting expands upon our earlier coverage of how acquisition professionals and system integrators can apply OSA practices to decompose large monolithic business and technical designs into manageable, capability-oriented frameworks that can integrate innovation more rapidly and lower total ownership costs.

When life- and safety-critical systems fail, the results can be dire, including loss of property and life. These types of systems are increasingly prevalent, and can be found in the attitude and control systems of a satellite, the software-reliant systems of a car (such as its cruise control and GPS), or a medical device. When developing such systems, software and systems architects must balance the need for stability and safety with stakeholder demands and time-to-market constraints. The Architecture Analysis & Design Language (AADL) helps software and system architects address the challenges of designing life- and safety-critical systems by providing a modeling notation that employs textual and graphic representations. This blog posting, part of an ongoing series on AADL, describes how AADL is being used in medical devices and highlights the experiences of a practitioner whose research aims to address problems with medical infusion pumps.

Aircraft and other safety-critical systems increasingly rely on software to provide their functionality. The exponential growth of software in safety-critical systems has pushed the cost for building aircraft to the limit of affordability. Given this increase, the current practice of build-then-test is no longer feasible. This blog posting describes recent work at the SEI to improve the quality of software-reliant systems through an approach known as the Reliability Validation and Improvement Framework that will lead to early defect discovery and incremental end-to-end validation.

Software and systems architects face many challenges when designing life- and safety-critical systems, such as the attitude and control systems of a satellite, the autopilot system of a car, or the injection system of a medical infusion pump. Software and systems architects answer to an expanding group of stakeholders and often must balance the need to design a stable system with time-to-market constraints. Moreover, no matter what programming language architects choose, they cannot design a complete system without an appropriate tool environment that targets user requirements. A promising tool environment is built around the Architecture Analysis & Design Language (AADL), a modeling notation that employs both textual and graphical representations. This post, the second in a series on AADL, provides an overview of existing AADL tools and highlights the experience of researchers and practitioners who are developing and applying AADL tools to production projects.

In launching the SEI blog two years ago, one of our top priorities was to advance the scope and impact of SEI research and development projects, while increasing the visibility of the work by SEI technologists who staff these projects. After 114 posts, and 72,608 visits from readers of our blog, this post reflects on some highlights from the last two years and gives our readers a preview of posts to come.

When a system fails, engineers too often focus on the physical components but pay scant attention to the software. In software-reliant systems, ignoring or deemphasizing the importance of software failures can be a recipe for disaster. This blog post is the first in a series on recent developments with the Architecture Analysis & Design Language (AADL) standard. Future posts will explore recent tools and projects associated with AADL, which provides formal modeling concepts for the description and analysis of application systems architecture in terms of distinct components and their interactions. As this series will demonstrate, the use of AADL helps alleviate mismatched assumptions between the hardware, software, and their interactions that can lead to system failures.

The Department of Defense (DoD) has become deeply reliant on software. As a federally funded research and development center (FFRDC), the SEI is chartered to work with the DoD to meet the challenges of designing, producing, assuring, and evolving software-reliant systems in an affordable and dependable manner. This blog post is the second in a multi-part series that describes key elements of our forthcoming Strategic Research Plan that address these challenges through research, acquisition support, and collaboration with the DoD, other federal agencies, industry, and academia.

It's undeniable that the field of software architecture has grown during the past 20 years. In 2010, CNN/Money magazine identified "software architect" as the most desirable job in the U.S. Since 2004, the SEI has trained people from more than 900 organizations in the principles and practices of software architecture, and more than 1,800 people have earned the SEI Software Architecture Professional certificate. It is widely recognized today that architecture serves as the blueprint for both the system and the project developing it, defining the work assignments that must be performed by design and implementation teams. Architecture is the primary purveyor of system quality attributes, which are hard to achieve without a unifying architecture; it's also the conceptual glue that holds every phase of projects together for their many stakeholders. This blog posting--the final installment in a series--provides lightly edited transcriptions of presentations by Jeromy Carriere and Ian Gorton at a SATURN 2012 roundtable, "Reflections on 20 Years of Software Architecture."

The Department of Defense (DoD) has become deeply and fundamentally reliant on software. As a federally funded research and development center (FFRDC), the SEI is chartered to work with the DoD to meet the challenges of designing, producing, assuring, and evolving software-reliant systems in an affordable and dependable manner. This blog post--the first in a multi-part series--outlines key elements of the forthcoming SEI Strategic Research Plan that addresses these challenges through research and acquisition support and collaboration with DoD, other federal agencies, industry, and academia.

Occasionally this blog will highlight different posts from the SEI blogosphere. Today's post by Paulo Merson, a senior member of the technical staff in the SEI's Research, Technology, and System Solutions Program, is from the SATURN Network blog. This post explores Merson's experience using Checkstyle and pre-commit hooks on Subversion to verify the conformance between code and architecture.
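
As a rough illustration of the mechanics Merson describes, a Subversion pre-commit hook can run Checkstyle on the Java files in an incoming transaction and reject the commit if any check fails. The script below is a hypothetical sketch, not the hook from his post; the jar and configuration paths are assumptions:

```python
#!/usr/bin/env python3
# Hypothetical SVN pre-commit hook: reject commits whose changed
# .java files violate a Checkstyle configuration.
import subprocess
import sys
import tempfile

REPOS, TXN = sys.argv[1], sys.argv[2]
CHECKSTYLE = ["java", "-jar", "/opt/checkstyle/checkstyle-all.jar",
              "-c", "/opt/checkstyle/architecture-rules.xml"]

def svnlook(*args):
    return subprocess.check_output(["svnlook", *args]).decode()

failed = False
for line in svnlook("changed", "-t", TXN, REPOS).splitlines():
    status, path = line[0], line[4:].strip()
    if status in ("A", "U") and path.endswith(".java"):
        # Materialize the incoming file content and check it.
        with tempfile.NamedTemporaryFile(suffix=".java") as tmp:
            tmp.write(svnlook("cat", "-t", TXN, REPOS, path).encode())
            tmp.flush()
            if subprocess.call(CHECKSTYLE + [tmp.name]) != 0:
                sys.stderr.write(f"Checkstyle violations in {path}\n")
                failed = True

sys.exit(1 if failed else 0)
```

Anything written to stderr is reported back to the committer when the hook exits nonzero, which is how Subversion surfaces the violation report.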

It is widely recognized today that software architecture serves as the blueprint for both the system and the project developing it, defining the work assignments that must be performed by design and implementation teams. Architecture is the primary purveyor of system quality attributes that are hard to achieve without a unifying architecture; it's also the conceptual glue that holds every phase of projects together for their many stakeholders. Last month, we presented two postings in a series from a panel at SATURN 2012 titled "Reflections on 20 Years of Software Architecture" that discussed the increased awareness of architecture as a primary means for achieving desired quality attributes and advances in software architecture practice for distributed real-time embedded systems during the past two decades.

Last week, we presented the first posting in a series from a panel at SATURN 2012 titled "Reflections on 20 Years of Software Architecture." In her remarks on the panel summarizing the evolution of software architecture work at the SEI, Linda Northrop, director of the SEI's Research, Technology, and System Solutions (RTSS) Program, referred to the steady growth in system scale and complexity over the past two decades and the increased awareness of architecture as a primary means for achieving desired quality attributes, such as performance, reliability, evolvability, and security.

A search on the term "software architecture" on the web as it existed in 1992 yielded 88,700 results. In May, during a panel providing a 20-year retrospective on software architecture hosted at the SEI Architecture Technology User Network (SATURN) conference, moderator Rick Kazman noted that on the day of the panel discussion--May 9, 2012--that same search yielded 2,380,000 results. This nearly 27-fold increase stems from various factors, including the steady growth in system complexity, the increased awareness of the impact of software architecture on system quality attributes, and the quality and impact of efforts by the SEI and other groups conducting research and transition activities on software architecture. This blog posting--the first in a series--provides a lightly edited transcription of the presentation of the first panelist, Linda Northrop, director of the SEI's Research, Technology, & System Solutions (RTSS) Program, who provided an overview of the evolution of software architecture work at the SEI during the past twenty years.

For more than 10 years, scientists, researchers, and engineers used the TeraGrid supercomputer network funded by the National Science Foundation (NSF) to conduct advanced computational science. The SEI has joined a partnership of 17 organizations and helped develop the successor to the TeraGrid called the Extreme Science and Engineering Discovery Environment (XSEDE). This posting, which is the first in a multi-part series, describes our work on XSEDE that allows researchers open access--directly from their desktops--to the suite of advanced computational tools and digital resources and services provided via XSEDE. This series is not so much concerned with supercomputers and supercomputing middleware, but rather with the nature of software engineering practice at the scale of the socio-technical ecosystem.

Major acquisition programs increasingly rely on software to provide substantial portions of system capabilities. All too often, however, software is not considered when the early, most constraining program decisions are made. SEI researchers have identified misalignments between software architecture and system acquisition strategies that lead to program restarts, cancellations, and failures to meet important missions or business goals. This blog posting--the second installment in a two-part series--builds on the discussions in part one by introducing several patterns of misalignment--known as anti-patterns--that we've identified in our research and discussing how these anti-patterns are helping us create a new method for aligning software architecture and system acquisition strategies to reduce project failure.

Major acquisition programs increasingly rely on software to provide substantial portions of system capabilities. Not surprisingly, therefore, software issues are driving system cost and schedule overruns. All too often, however, software is not even a consideration when the early, most constraining program decisions are made. Through analysis of troubled programs, SEI researchers have identified misalignments between software architecture and system acquisition strategies that lead to program restarts, cancellations, and failures to meet important missions or business goals. To address these misalignments, the SEI is conducting new research on enabling organizations to reduce program failures by harmonizing their acquisition strategy with their software architecture.

While agile methods have become popular in commercial software development organizations, the engineering disciplines needed to apply agility to mission-critical, software-reliant systems are not as well defined or practiced. To help bridge this gap, the SEI recently hosted the Agile Research Forum. The event brought together researchers and practitioners from around the world to discuss when and how to best apply agile methods in mission-critical environments found in government and many industries. This blog posting, the third installment in a multi-part series highlighting research presented during the forum, summarizes a presentation made during the forum by Ipek Ozkaya, a senior researcher in the SEI's Research, Technology & System Solutions program, who discussed the use of agile architecture practices to manage strategic, intentional technical debt.

The extent of software in Department of Defense (DoD) systems has increased by more than an order of magnitude every decade. This is not just because there are more systems with more software; a similar growth pattern has been exhibited within individual, long-lived military systems. In recognition of this growing software role, the Director of Defense Research and Engineering (DDR&E, now ASD(R&E)) requested the National Research Council (NRC) to undertake a study of defense software producibility, with the purpose of identifying the principal challenges and developing recommendations regarding both improvement to practice and priorities for research.

Happy Memorial Day. As part of an ongoing effort to keep you informed about our latest work, I'd like to let you know about some recently published SEI technical reports and notes. These reports highlight the latest work of SEI technologists in architecture analysis, patterns for insider threat monitoring, source code analysis and insider threat security reference architecture. This post includes a listing of each report, author(s), and links where the published reports can be accessed on the SEI website.

Common operating platform environments (COPEs) are reusable software infrastructures that incorporate open standards; define portable interfaces, interoperable protocols, and data models; offer complete design disclosure; and have a modular, loosely coupled, and well-articulated software architecture that provides applications and end users with many shared capabilities. COPEs can help reduce recurring engineering costs, as well as enable developers to build better and more powerful applications atop a COPE, rather than wrestling repeatedly with tedious and error-prone infrastructure concerns.

Mission-critical operations in the Department of Defense (DoD) increasingly depend on complex software-reliant systems-of-systems (abbreviated as "systems" below). These systems are characterized by a rapidly growing number of connected platforms, sensors, decision nodes, and people. While facing constrained budgets, expanded threats, and engineering workforce challenges, the DoD is trying to obtain greater efficiency and productivity in the defense spending needed to acquire and sustain these systems. This blog posting--the first in a three-part series--motivates the need for DoD common operating platform environments that can help collapse today's stove-piped solutions to decrease costs, spur innovation, and increase acquisition and operational performance.

New acquisition guidelines from the Department of Defense (DoD) aimed at reducing system lifecycle time and effort are encouraging the adoption of Agile methods. There is a general lack, however, of practical guidance on how to employ Agile methods effectively for DoD acquisition programs. This blog posting describes our research on providing software and systems architects with a decision-making framework for reducing integration risk with Agile methods, thereby reducing the time and resources needed for related work.

As part of an ongoing effort to keep you informed about our latest work, I'd like to let you know about some recently published SEI technical reports and notes. These reports highlight the latest work of SEI technologists in insider threat, interoperability, service-oriented architecture, operational resilience, and automated remediation. This post includes a listing of each report, author(s), and links where the published reports can be accessed on the SEI website.

Managing technical debt, which refers to the rework and degraded quality resulting from overly hasty delivery of software capabilities to users, is an increasingly critical aspect of producing cost-effective, timely, and high-quality software products. A delicate balance is needed between the desire to release new software capabilities rapidly to satisfy users and the desire to practice sound software engineering that reduces rework.

A key mission of the SEI is to advance the practice of software engineering and cybersecurity through research and technology transition to ensure the development and operation of software-reliant Department of Defense (DoD) systems with predictable and improved quality, schedule, and cost. To achieve this mission, the SEI conducts research and development (R&D) activities involving the DoD, federal agencies, industry, and academia. One of my initial blog postings summarized the new and upcoming R&D activities we had planned for 2011. Now that the year is nearly over, this blog posting presents some of the many R&D accomplishments we completed in 2011.

This post is the second installment in a two-part series describing our recent engagement with Bursatec to create a reliable and fast new trading system for Grupo Bolsa Mexicana de Valores (BMV, the Mexican Stock Exchange). This project combined elements of the SEI's Architecture Centric Engineering (ACE) method, which requires effective use of software architecture to guide system development, with its Team Software Process (TSP), which is a team-centric approach to developing software that enables organizations to better plan and measure their work and improve software development productivity to gain greater confidence in quality and cost estimates. The first post examined how ACE was applied within the context of TSP. This posting focuses on the development of the system architecture for Bursatec within the TSP framework.

Bursatec, the technology arm of Grupo Bolsa Mexicana de Valores (BMV, the Mexican Stock Exchange), recently embarked on a project to replace three existing trading engines with one system developed in house. Given the competitiveness of global financial markets and recent interest in Latin American economies, Bursatec needed a reliable and fast new system that could work ceaselessly throughout the day and handle sharp fluctuations in trading volume. To meet these demands, the SEI suggested combining elements of its Architecture Centric Engineering (ACE) method, which requires effective use of software architecture to guide system development, with its Team Software Process (TSP), which teaches software developers the skills they need to make and track plans and produce high-quality products. This posting--the first in a two-part series--describes the challenges Bursatec faced and outlines how working with the SEI and combining ACE with TSP helped them address those challenges.

Happy Labor Day from all of us here at the SEI. I'd like to take advantage of this special occasion to keep you apprised of some recent technical reports and notes from the SEI. It's part of an ongoing effort to keep you informed about our latest work. These reports highlight the latest work of SEI technologists in architecting service-oriented systems, operational resilience, standards-based automated remediation, and acquisition. This post includes a listing of each report, author(s), and links where the published reports can be accessed on the SEI website.

Testing plays a critical role in the development of software-reliant systems. Even with the most diligent efforts of requirements engineers, designers, and programmers, faults inevitably occur. These faults are most commonly discovered and removed by testing the system and comparing what it does to what it is supposed to do. This blog posting summarizes a method that improves testing outcomes (including efficacy and cost) in a software-reliant system by using an architectural design approach, which describes a coherent set of architectural decisions taken by architects to help meet the behavioral and quality attribute requirements of systems being developed.

Software sustainment is growing in importance as the inventory of DoD systems continues to age and greater emphasis is placed on efficiency and productivity in defense spending. In part 1 of this series, I summarized key software sustainment challenges facing the DoD. In this blog posting, I describe some of the R&D activities conducted by the SEI to address these challenges.

Department of Defense (DoD) programs have traditionally focused on the software acquisition phase (initial procurement, development, production, and deployment) and largely discounted the software sustainment phase (operations and support) until late in the lifecycle. The costs of software sustainment are becoming too high to discount since they account for 60 to 90 percent of the total software lifecycle effort.

Occasionally this blog will highlight different posts from the SEI blogosphere. Today's post is from the SATURN Network blog by Nanette Brown, a senior member of the technical staff in the SEI's Research, Technology, and System Solutions program. This post, the third in a series on lean principles and architecture, continues the discussion of the eight types of waste identified in Lean manufacturing and how these types of waste manifest themselves in software development. The focus of this post is on mapping the waste of motion and the waste of transportation from manufacturing to the waste of information transformation in software development.

Occasionally this blog will highlight different posts from the SEI blogosphere. Today's post is from the SATURN Network blog by Nanette Brown, a visiting scientist in the SEI's Research, Technology, and System Solutions program. This post, the second in a series on lean principles and architecture, takes an in-depth look at the waste of waiting and how it is an important aspect of the economics of architecture decision making.

Occasionally this blog will highlight different posts from the SEI blogosphere. Today's post is from the SATURN Network blog by Nanette Brown, a visiting scientist in the SEI's Research, Technology, and System Solutions program. This post explores Categories of Waste in Lean Principles and Architecture, and takes an in-depth look at three of the eight categories of waste (defects, overproduction, and extra complexity) from the perspective of software development in general and software architecture in particular.

The SEI has long advocated software architecture documentation as a software engineering best practice. This type of documentation is not particularly revolutionary or different from standard practices in other engineering disciplines. For example, who would build a skyscraper without having an architect draw up plans first? The specific value of software architecture documentation, however, has never been established empirically. This blog describes a research project we are conducting to measure and understand the value of software architecture documentation on complex software-reliant systems.

As industry and government customers demand increasingly rapid innovation and the ability to adapt products and systems to emerging needs, the time frames for releasing new software capabilities continue to shorten. Likewise, Agile software development processes, with their emphasis on releasing new software capabilities rapidly, are increasing in popularity beyond their initial small team and project context. Practices intended to speed up the delivery of value to users, however, often result in high rework costs that ultimately offset the benefits of faster delivery, especially when good engineering practices are forgotten along the way. This rework and degrading quality often is referred to as technical debt. This post describes our research on improving the overall value delivered to users by strategically managing technical debt, which involves decisions made to defer necessary work during the planning or execution of a software project.

This is the second in a series of posts focusing on Agile software development. In the first post, "What is Agile?" we provided a short overview of the key elements of the Agile approach, and we introduced the Agile Manifesto. One of the guiding principles from the manifesto emphasizes valuing people over processes. While the manifesto clearly alludes to the fact that too much focus on process (and not results) can be a bad thing, we introduce the notion here that the other end of the spectrum can also be bad. This post explores the level of skill needed to develop software using Agile (do you need less skill or more?), as well as the importance of maintaining strong competency in a core set of software engineering processes.