Archive: 2013-01

As part of our mission to advance the practice of software engineering and cybersecurity through research and technology transition, our work focuses on ensuring that software-reliant systems are developed and operated with predictable and improved quality, schedule, and cost. To achieve this mission, the SEI conducts research and development activities involving the Department of Defense (DoD), federal agencies, industry, and academia. As we look back on 2013, this blog posting highlights our many R&D accomplishments during the past year.

Hacking the CERT FOE

Occasionally this blog will highlight different posts from the SEI blogosphere. Today we are highlighting a recent post by Will Dormann, a senior member of the technical staff in the SEI's CERT Division, from the CERT/CC Blog. In this post, Dormann describes how he modifies the CERT Failure Observation Engine (FOE) when he encounters apps that "don't play well" with it. The FOE is a software testing tool that finds defects in applications running on the Windows platform.

As the pace of software delivery increases, organizations need guidance on how to deliver high-quality software rapidly, while simultaneously meeting demands related to time-to-market, cost, productivity, and quality. In practice, demands for adding new features or fixing defects often take priority. However, when software developers are guided solely by project management measures, such as progress on requirements and defect counts, they ignore the impact of architectural dependencies, which can impede the progress of a project if not properly managed. In previous posts on this blog, my colleague Ipek Ozkaya and I have focused on architectural technical debt, which refers to the rework and degraded quality resulting from overly hasty delivery of software capabilities to users. This blog post describes a first step towards an approach we developed that aims to use qualitative architectural measures to better inform quantitative code quality metrics.
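
A minimal sketch of the kind of structural measure involved is shown below. It assumes a hypothetical module-dependency graph (not output from any SEI tool) and counts how many modules are directly or transitively affected when one module changes, the sort of architectural signal that purely file-local code metrics miss.

```cpp
#include <iostream>
#include <vector>

// Hypothetical module-dependency graph: deps[i] lists the modules that depend
// on module i (i.e., modules that may need rework if i changes). This is an
// illustrative stand-in for architectural dependency data, not real analysis output.
int affectedCount(const std::vector<std::vector<int>>& deps, int changed) {
    std::vector<bool> visited(deps.size(), false);
    std::vector<int> stack{changed};
    visited[changed] = true;
    int count = 0;                      // modules impacted, excluding the changed one
    while (!stack.empty()) {
        int m = stack.back();
        stack.pop_back();
        for (int d : deps[m]) {
            if (!visited[d]) {
                visited[d] = true;
                ++count;
                stack.push_back(d);
            }
        }
    }
    return count;
}

int main() {
    // Module 0 is depended on by 1 and 2; module 1 by 3; modules 2 and 3 by nobody.
    std::vector<std::vector<int>> deps{{1, 2}, {3}, {}, {}};
    std::cout << "Modules affected by a change to module 0: "
              << affectedCount(deps, 0) << "\n";   // prints 3
    return 0;
}
```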

Safety-critical avionics, aerospace, medical, and automotive systems are becoming increasingly reliant on software. Malfunctions in these systems can have significant consequences, including mission failure and loss of life, so they must be designed, verified, and validated carefully to ensure that they comply with system specifications and requirements and are error-free. In the automotive domain, for example, cars contain many electronic control units (ECUs)--today's standard vehicle can contain up to 30--that communicate to control systems such as airbag deployment, anti-lock brakes, and power steering.

Organizations working with government agencies, including the Departments of Defense, Veterans Affairs, and the Treasury, are increasingly being asked by their government program offices to adopt Agile methods. These are organizations that have traditionally utilized a "waterfall" life cycle model (as epitomized by the engineering "V" charts). Programming teams in these organizations are accustomed to being managed via a series of document-centric technical reviews that focus on the evolution of the artifacts that describe the requirements and design of the system, rather than its evolving implementation, as is more common with Agile methods. Given this history, many of these organizations struggle to adopt Agile practices. For example, acquisition staff often wonder how to fit Agile measurement practices into their progress-tracking systems.

To deliver enhanced integrated warfighting capability at lower cost across the enterprise and over the lifecycle, the Department of Defense (DoD) must move away from stove-piped solutions and towards a limited number of technical reference frameworks based on reusable hardware and software components and services. There have been previous efforts in this direction, but in an era of sequestration and austerity, the DoD has reinvigorated its efforts to identify effective methods of creating more affordable acquisition choices and reducing the cycle time for initial acquisition and new technology insertion. This blog posting is part of an ongoing series on how acquisition professionals and system integrators can apply Open Systems Architecture (OSA) practices to decompose large monolithic business and technical designs into manageable, capability-oriented frameworks that can integrate innovation more rapidly and lower total ownership costs. The focus of this posting is on the evolution of DoD combat systems from ad hoc stovepipes to more modular and layered architectures.

As part of an ongoing effort to keep you informed about our latest work, I would like to let you know about some recently published SEI technical reports and notes. These reports highlight the latest work of SEI technologists in Secure Java and Android Coding, Cybersecurity Capability Measurement, Managing Insider Threat, the CERT Resilience Management Model, Network Situational Awareness, and Security and Survivability. This post includes a listing of each report, author(s), and links where the published reports can be accessed on the SEI website.

The verification and validation of requirements are a critical part of systems and software engineering. The importance of verification and validation (especially testing) is a major reason that the traditional waterfall development cycle underwent a minor modification to create the V model, which links early development activities to their corresponding later testing activities. This blog post introduces three variants on the V model of system or software development that make it more useful to testers, quality engineers, and other stakeholders interested in the use of testing as a verification and validation method.

In early 2012, a backdoor Trojan malware named Flame was discovered in the wild. When fully deployed, Flame proved very hard for malware researchers to analyze. In December of that year, Wired magazine reported that before Flame had been unleashed, samples of the malware had been lurking, undiscovered, in repositories for at least two years. As Wired also reported, this was not an isolated event. Every day, major anti-virus companies and research organizations are inundated with new malware samples.

The size and complexity of aerospace software systems has increased significantly in recent years. When looking at source lines of code (SLOC), the size of systems has doubled every four years since the mid 1990s, according to a recent SEI technical report. The cost of producing the 27 million SLOC projected for 2010 to 2020 is expected to exceed $10 billion. These increases in size and cost have also been accompanied by significant increases in errors and rework after a system has been deployed. Mismatched assumptions between hardware, software, and their interactions often result in system problems that are detected only after the system has been deployed, when rework is much more expensive to complete.
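
As a back-of-the-envelope illustration of that growth curve (with an assumed mid-1990s baseline and an assumed cost per line, neither taken from the report), the sketch below compounds a doubling every four years and converts projected size to cost:

```cpp
#include <iostream>

int main() {
    // Illustrative only: assume a baseline size in the mid 1990s and a doubling
    // every four years, then apply an assumed fully loaded cost per source line.
    // Neither number comes from the SEI report cited above.
    double slocMillions = 1.0;           // hypothetical 1995 baseline (million SLOC)
    const double costPerSloc = 400.0;    // assumed cost per line, in US dollars

    for (int year = 1995; year <= 2015; year += 4) {
        double costBillions = slocMillions * 1e6 * costPerSloc / 1e9;
        std::cout << year << ": ~" << slocMillions << "M SLOC, ~$"
                  << costBillions << "B\n";
        slocMillions *= 2.0;             // size doubles every four years
    }
    return 0;
}
```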

Analyzing Routing Tables

Occasionally this blog will highlight different posts from the SEI blogosphere. Today we are highlighting a post from the CERT/CC Blog by Timur Snoke, a member of the technical staff in the SEI's CERT Division. This post describes maps that Timur has developed using Border Gateway Protocol (BGP) routing tables to show the evolution of public-facing autonomous system numbers (ASNs). These maps help analysts inspect the BGP routing tables to reveal disruptions to an organization's infrastructure. They also help analysts glean geopolitical information about an organization, country, or city-state, which helps them identify how and when network traffic is subverted to travel nefarious alternative paths that deliberately place communications at risk.
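
The underlying idea can be sketched in a few lines. The example below assumes a simplified, made-up snapshot format (real BGP table dumps are far richer) and simply flags prefixes whose origin ASN changes between two snapshots, the kind of change an analyst might investigate further:

```cpp
#include <iostream>
#include <map>
#include <string>

// Hypothetical, simplified view of a routing-table snapshot:
// announced prefix -> origin autonomous system number (ASN).
// Real BGP table dumps (e.g., MRT files) carry far more detail than this.
using RouteTable = std::map<std::string, int>;

// Report prefixes whose origin ASN differs between two snapshots -- the kind
// of change an analyst might investigate as a possible disruption or hijack.
void reportOriginChanges(const RouteTable& before, const RouteTable& after) {
    for (const auto& [prefix, asnBefore] : before) {
        auto it = after.find(prefix);
        if (it != after.end() && it->second != asnBefore) {
            std::cout << prefix << ": origin AS" << asnBefore
                      << " -> AS" << it->second << "\n";
        }
    }
}

int main() {
    RouteTable monday{{"192.0.2.0/24", 64500}, {"198.51.100.0/24", 64501}};
    RouteTable tuesday{{"192.0.2.0/24", 64500}, {"198.51.100.0/24", 64999}};
    reportOriginChanges(monday, tuesday);   // flags the second prefix
    return 0;
}
```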

New data sources, ranging from diverse business transactions to social media, high-resolution sensors, and the Internet of Things, are creating a digital tidal wave of big data that must be captured, processed, integrated, analyzed, and archived. Big data systems that store and analyze petabytes of data are becoming increasingly common in many application areas. These systems represent major, long-term investments requiring considerable financial commitments and massive-scale software and system deployments.

When life- and safety-critical systems fail (and this happens in many domains), the results can be dire, including loss of property and life. These types of systems are increasingly prevalent and can be found in the attitude control systems of a satellite, the software-reliant systems of a car (such as its cruise control and anti-lock braking system), or medical devices that emit radiation. When developing such systems, software and systems architects must balance the need for stability and safety with stakeholder demands and time-to-market constraints. The Architecture Analysis & Design Language (AADL) helps software and system architects address the challenges of designing life- and safety-critical systems by providing a modeling notation, with well-defined real-time and architectural semantics, that employs textual and graphic representations. This blog posting, part of an ongoing series on AADL, focuses on the initial foundations of AADL.

Agile projects with incremental development lifecycles are showing greater promise in enabling organizations to rapidly field software compared to waterfall projects. There is a lack of clarity, however, regarding the factors that constitute and contribute to the success of Agile projects. A team of researchers from Carnegie Mellon University's Software Engineering Institute, including Ipek Ozkaya, Robert Nord, and myself, interviewed project teams with incremental development lifecycles from five government and commercial organizations. This blog posting summarizes the findings of this study, which sought to understand the key success and failure factors for rapid fielding on these projects.

Exclusively technical approaches to attaining cybersecurity have created pressures for malware attackers to evolve technical sophistication and harden attacks with increased precision, including socially engineered malware and distributed denial of service (DDoS) attacks. A general and simple design for achieving cybersecurity remains elusive, and addressing the problem of malware has become such a monumental task that technological, economic, and social forces must be brought together to solve it. At the Carnegie Mellon University Software Engineering Institute's CERT Division, we are working to address this problem through a joint collaboration with researchers at the Courant Institute of Mathematical Sciences at New York University led by Dr. Bud Mishra. This blog post describes this research, which aims to understand and seek complex patterns in malicious use cases within the context of security systems and to develop an incentives-based measurement system that would evaluate software and ensure a level of resilience to attack.

The power and speed of computers have increased exponentially in recent years. Modern computer architectures, however, are moving away from single-core and multi-core (homogeneous) central processing units (CPUs) toward many-core (heterogeneous) CPUs. This blog post describes research I've undertaken with my colleagues at the Carnegie Mellon Software Engineering Institute (SEI)--including Jonathan Chu and Scott McMillan of the Emerging Technology Center (ETC) as well as Alex Nicoll, a researcher with the SEI's CERT Division--to create a software library that can exploit the heterogeneous parallel computers of the future and allow developers to create systems that are more efficient in terms of computation and power consumption.
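
The library itself is not shown here, but as a rough illustration of the data parallelism such a library abstracts away, the sketch below partitions a computation across the available hardware threads using only the standard library; a heterogeneous-computing library would additionally target GPUs and other accelerators:

```cpp
#include <algorithm>
#include <cstddef>
#include <iostream>
#include <numeric>
#include <thread>
#include <vector>

// Minimal data-parallel sketch: square every element of a vector using one
// worker per hardware thread. This only illustrates partitioning work across
// available processing units; it is not the SEI library described above.
void parallelSquare(std::vector<double>& data) {
    unsigned workers = std::max(1u, std::thread::hardware_concurrency());
    std::vector<std::thread> pool;
    std::size_t chunk = (data.size() + workers - 1) / workers;
    for (unsigned w = 0; w < workers; ++w) {
        std::size_t begin = w * chunk;
        std::size_t end = std::min(data.size(), begin + chunk);
        if (begin >= end) break;
        pool.emplace_back([&data, begin, end] {
            for (std::size_t i = begin; i < end; ++i) data[i] *= data[i];
        });
    }
    for (auto& t : pool) t.join();   // wait for every worker to finish
}

int main() {
    std::vector<double> values(1000);
    std::iota(values.begin(), values.end(), 1.0);   // 1, 2, ..., 1000
    parallelSquare(values);
    std::cout << "values[9] = " << values[9] << "\n";   // prints 100
    return 0;
}
```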

Department of Defense (DoD) program managers and associated acquisition professionals are increasingly called upon to steward the development of complex, software-reliant combat systems. In today's environment of expanded threats and constrained resources (e.g., sequestration), their focus is on minimizing the cost and schedule of combat-system acquisition, while simultaneously ensuring interoperability and innovation. A promising approach for meeting these challenging goals is Open Systems Architecture (OSA), which combines (1) technical practices designed to reduce the cycle time needed to acquire new systems and insert new technology into legacy systems and (2) business models for creating a more competitive marketplace and a more effective strategy for managing intellectual property rights in DoD acquisition programs. This blog posting expands upon our earlier coverage of how acquisition professionals and system integrators can apply OSA practices to decompose large monolithic business and technical designs into manageable, capability-oriented frameworks that can integrate innovation more rapidly and lower total ownership costs.

As part of an ongoing effort to keep you informed about our latest work, I would like to let you know about some recently published SEI technical reports and notes. Three of these reports highlight the latest work of SEI technologists on insider threat in international contexts, unintentional insider threats, and attributes and mitigation strategies. The last report provides the results of several exploratory research initiatives conducted by SEI staff in fiscal year 2012. This post includes a listing of each report, author(s), and links where the published reports can be accessed on the SEI website.

In our work with the Department of Defense (DoD) and other government agencies such as the U.S. Department of Veterans Affairs and the U.S. Department of the Treasury, we often encounter organizations that have been asked by their government program office to adopt agile methods. These are organizations that have traditionally utilized a "waterfall" life cycle model (as epitomized by the engineering "V" charts) and are accustomed to being managed via a series of document-centric technical reviews that focus on the evolution of the artifacts that describe the requirements and design of the system rather than its evolving implementation, as is more common with agile methods.

Software is the principal, enabling means for delivering system and warfighter performance across a spectrum of Department of Defense (DoD) capabilities. These capabilities span the spectrum from mission-essential business systems to mission-critical command, control, communications, computers, intelligence, surveillance, and reconnaissance (C4ISR) systems to complex weapon systems. Many of these systems now operate interdependently in a complex net-centric and cyber environment. The pace of technological change continues to accelerate, along with systems' almost total reliance on software. This blog posting examines the various challenges that the DoD faces in implementing software assurance and suggests strategies for an enterprise-wide approach.

From the braking system in your automobile to the software that controls the aircraft that you fly in, safety-critical systems are ubiquitous. Showing that such systems meet their safety requirements has become a critical area of work for software and systems engineers. "We live in a world in which our safety depends on software-intensive systems," editors of IEEE Software wrote in the magazine's May/June issue. "Organizations everywhere are struggling to find cost-effective methods to deal with the enormous increase in size and complexity of these systems, while simultaneously respecting the need to ensure their safety." The Carnegie Mellon Software Engineering Institute (SEI) is addressing this issue with a significant research program into assurance cases. Our sponsors are regularly faced with assuring that complex software-based systems meet certain kinds of requirements such as safety, security, and reliability. In this post, the first in a series on assurance cases and confidence, I will introduce the concept of assurance cases and show how they can be used to argue that a safety requirement (or other requirement such as security) has been met.
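
To make the structure concrete, here is a toy sketch (an illustrative data structure, not a standard assurance-case notation such as GSN): an assurance case is modeled as a tree in which a claim is supported either directly by evidence or by sub-claims that must themselves be supported.

```cpp
#include <iostream>
#include <string>
#include <vector>

// Toy model of an assurance case: a claim is supported either directly by
// evidence or by sub-claims that must themselves be supported. This shows the
// shape of the argument only; real assurance cases are far more nuanced.
struct Claim {
    std::string text;
    std::vector<std::string> evidence;   // e.g., test reports, analyses
    std::vector<Claim> subclaims;
};

// A claim is "supported" if it cites evidence, or if every sub-claim is supported.
bool isSupported(const Claim& c) {
    if (!c.evidence.empty()) return true;
    if (c.subclaims.empty()) return false;
    for (const Claim& sub : c.subclaims)
        if (!isSupported(sub)) return false;
    return true;
}

int main() {
    // Hypothetical safety claim decomposed into two evidenced sub-claims.
    Claim dose{"Dose calculation is verified", {"unit-test report"}, {}};
    Claim interlock{"Hardware interlock stops over-delivery", {"fault-injection results"}, {}};
    Claim top{"The infusion pump never over-delivers medication", {}, {dose, interlock}};
    std::cout << std::boolalpha << isSupported(top) << "\n";   // prints true
    return 0;
}
```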

Researchers on the CERT Division's insider threat team have presented several of the 26 patterns identified by analyzing our insider threat database, which is based on examinations of more than 700 insider threat cases and interviews with the United States Secret Service, victims' organizations, and convicted felons. Through our analysis, we identified more than 100 categories of weaknesses in systems, processes, people, or technologies that allowed insider threats to occur. One aspect of our research focuses on identifying enterprise architecture patterns that organizations can use to protect their systems from malicious insider threats. Now that we've developed 26 patterns, our next priority is to assemble these patterns into a pattern language that organizations can use to bolster their resources and make them more resilient against insider threats. This blog post is the third installment in a series that describes our research to create and validate an insider threat mitigation pattern language to help organizations balance the cost of security controls with the risk of insider compromise.

In 2012, Symantec blocked more than 5.5 billion malware attacks (an 81 percent increase over 2010) and reported a 41 percent increase in new variants of malware, according to a January 2013 Computerworld article. To prevent detection and delay analysis, malware authors often obfuscate their malicious programs with anti-analysis measures. Obfuscated binary code prevents analysts from developing timely, actionable insights by increasing code complexity and reducing the effectiveness of existing tools. This blog post describes research we are conducting at the SEI to improve manual and automated analysis of common code obfuscation techniques used in malware.
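
As a benign, minimal illustration of the kind of construct analysts must recognize and undo, the sketch below shows XOR-encoded strings, one of the simplest anti-analysis measures; real malware layers far more elaborate packing and control-flow obfuscation on top of tricks like this:

```cpp
#include <iostream>
#include <string>

// Benign illustration of a very common obfuscation trick: strings are stored
// XOR-encoded so they do not appear in a simple strings dump of the binary and
// are decoded only at runtime. Because XOR with a fixed key is its own inverse,
// the same routine both encodes and decodes.
std::string xorDecode(const std::string& encoded, char key) {
    std::string out = encoded;
    for (char& c : out) c ^= key;
    return out;
}

int main() {
    const char key = 0x5A;                        // arbitrary single-byte key
    std::string plain = "example.test";           // stand-in for a hidden indicator
    std::string hidden = xorDecode(plain, key);   // "encode" the string
    std::cout << "decoded at runtime: " << xorDecode(hidden, key) << "\n";
    return 0;
}
```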

When life- and safety-critical systems fail, the results can be dire, including loss of property and life. These types of systems are increasingly prevalent and can be found in the attitude control systems of a satellite, the software-reliant systems of a car (such as its cruise control and GPS), or a medical device. When developing such systems, software and systems architects must balance the need for stability and safety with stakeholder demands and time-to-market constraints. The Architecture Analysis & Design Language (AADL) helps software and system architects address the challenges of designing life- and safety-critical systems by providing a modeling notation that employs textual and graphic representations. This blog posting, part of an ongoing series on AADL, describes how AADL is being used in medical devices and highlights the experiences of a practitioner whose research aims to address problems with medical infusion pumps.

Soldiers and emergency workers who carry smartphones on the battlefield or into disaster recovery sites (such as Boston following the marathon bombing earlier this year) often encounter environments characterized by high mobility, rapidly changing mission requirements, limited computing resources, high levels of stress, and limited network connectivity. At the SEI, we refer to these situations as "edge environments." Along with my colleagues in the SEI's Advanced Mobile Systems Initiative, my research aims to increase the computing power of mobile devices in edge environments where resources are scarce. One area of my work has focused on leveraging cloud computing so users can extend the capabilities of their mobile devices by offloading expensive computations to more powerful computing resources in a cloud. Offloading computation to the cloud in resource-constrained environments has drawbacks, however, including latency (which can be exacerbated by the distance between mobile devices and clouds) and limited internet access (which makes traditional cloud computing infeasible). This blog post is the latest in a series that describes research aimed at exploring the applicability of application virtualization as a strategy for cyber-foraging in resource-constrained environments.
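
The core trade-off can be expressed as a simple cost comparison. The sketch below is a hypothetical model with illustrative numbers, not the prototype described in this series: offload only when the estimated remote time, including data transfer and round-trip latency, beats local execution.

```cpp
#include <iostream>

// Hypothetical offload-decision sketch: compare the estimated time to run a
// computation on the mobile device against running it remotely, where the
// remote estimate must also pay for shipping the input data and for latency.
struct Task {
    double localSeconds;       // estimated time on the mobile device
    double remoteSeconds;      // estimated time on the remote server
    double inputMegabytes;     // data that must be uploaded before remote execution
};

bool shouldOffload(const Task& t, double uplinkMbps, double rttSeconds) {
    if (uplinkMbps <= 0.0) return false;                     // no usable connectivity
    double transfer = (t.inputMegabytes * 8.0) / uplinkMbps; // upload time in seconds
    double remoteTotal = transfer + rttSeconds + t.remoteSeconds;
    return remoteTotal < t.localSeconds;
}

int main() {
    Task faceRecognition{12.0, 1.5, 4.0};   // illustrative estimates only
    std::cout << std::boolalpha
              << "good link: " << shouldOffload(faceRecognition, 20.0, 0.05) << "\n"   // true
              << "poor link: " << shouldOffload(faceRecognition, 0.5, 0.40)  << "\n";  // false
    return 0;
}
```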

As part of an ongoing effort to keep you informed about our latest work, I would like to let you know about some recently published SEI technical reports and notes. These reports highlight the latest work of SEI technologists in acquisition, socio-adaptive systems, application virtualization, insider threat, software assurance, and the Personal Software Process (PSP). This post includes a listing of each report, author(s), and links where the published reports can be accessed on the SEI website.

Risk inherent in any military, government, or industry network system cannot be completely eliminated, but it can be reduced by implementing certain network controls, including administrative, management, technical, and legal methods. Decisions about which controls to implement often rely on computed-risk models that mathematically calculate the amount of risk inherent in a given network configuration. These computed-risk models, however, may not calculate risk levels that match what human decision makers actually perceive.
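
Computed-risk models of this kind often reduce to an expected-loss calculation. The sketch below shows one such calculation under that assumption, with purely illustrative numbers: each threat contributes likelihood times impact, and an applied control scales the likelihood down.

```cpp
#include <iostream>
#include <string>
#include <vector>

// Minimal expected-loss sketch of a computed-risk model: each threat has an
// annual likelihood and a dollar impact, and applied controls remove some
// fraction of that likelihood. Values are illustrative, not from any real model.
struct Threat {
    std::string name;
    double likelihood;        // probability of occurrence per year
    double impact;            // expected loss if it occurs (USD)
    double controlReduction;  // fraction of likelihood removed by controls (0..1)
};

double residualRisk(const std::vector<Threat>& threats) {
    double total = 0.0;
    for (const Threat& t : threats)
        total += t.likelihood * (1.0 - t.controlReduction) * t.impact;
    return total;
}

int main() {
    std::vector<Threat> threats{
        {"phishing-led compromise", 0.30, 250000.0, 0.50},
        {"unpatched server exploit", 0.10, 800000.0, 0.75},
    };
    // 0.30 * 0.5 * 250000 + 0.10 * 0.25 * 800000 = 57500
    std::cout << "residual annual risk: $" << residualRisk(threats) << "\n";
    return 0;
}
```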

I recently joined the Carnegie Mellon Software Engineering Institute (SEI) as deputy director and chief technology officer (CTO). My goal in this new role is to help the SEI advance computer science, software engineering, cybersecurity, and related disciplines to help ensure that the acquisition, development, and operation of software-dependent systems have lower cost, higher quality, and better security. I have spent the past two decades conducting a range of research and development activities, and I have served on various Department of Defense (DoD) advisory boards. In this blog posting, I'd like to talk a little bit about my background and outline the priorities I'm pursuing at the SEI. In subsequent blog postings, I'll describe the SEI technical strategy in more detail.

Warfighters in a tactical environment face many constraints on computational resources, such as limited computing power, memory, bandwidth, and battery power. They often have to make rapid decisions in hostile environments. Many warfighters can access situational awareness data feeds on their smartphones to make critical decisions. To access these feeds, however, warfighters must contend with an overwhelming amount of information from multiple, fragmented data sources that cannot be easily combined on a small smartphone screen. The same resource constraints apply to emergency responders involved in search-and-rescue missions, who often must coordinate their efforts with multiple responders. This posting describes our efforts to create the Edge Mission-Oriented Tactical App Generator (eMontage), a software prototype that allows warfighters and first responders to rapidly integrate geotagged situational awareness data from multiple remote data sources.
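
A toy sketch of the underlying integration step appears below. It uses a made-up record format (not the eMontage design) and simply merges geotagged records from several feeds, keeping only those within a given radius of the user's position.

```cpp
#include <cmath>
#include <iostream>
#include <string>
#include <vector>

// Toy geotagged record; real feeds carry richer, source-specific schemas.
struct Report {
    std::string source;   // which feed the record came from
    std::string text;
    double lat, lon;      // degrees
};

// Approximate distance in kilometers using an equirectangular projection --
// adequate for short tactical ranges, not for precise geodesy.
double approxKm(double lat1, double lon1, double lat2, double lon2) {
    constexpr double kDegKm = 111.32;   // kilometers per degree of latitude
    constexpr double kPi = 3.14159265358979323846;
    double x = (lon2 - lon1) * kDegKm * std::cos((lat1 + lat2) * kPi / 360.0);
    double y = (lat2 - lat1) * kDegKm;
    return std::sqrt(x * x + y * y);
}

// Merge several feeds, keeping only reports within radiusKm of the user.
std::vector<Report> nearby(const std::vector<std::vector<Report>>& feeds,
                           double userLat, double userLon, double radiusKm) {
    std::vector<Report> result;
    for (const auto& feed : feeds)
        for (const auto& r : feed)
            if (approxKm(userLat, userLon, r.lat, r.lon) <= radiusKm)
                result.push_back(r);
    return result;
}

int main() {
    std::vector<Report> feedA{{"feedA", "road blocked", 40.4415, -79.9430}};
    std::vector<Report> feedB{{"feedB", "shelter open", 40.5200, -80.1000}};
    for (const auto& r : nearby({feedA, feedB}, 40.4433, -79.9436, 5.0))
        std::cout << r.source << ": " << r.text << "\n";   // only feedA's report
    return 0;
}
```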

Aircraft and other safety-critical systems increasingly rely on software to provide their functionality. The exponential growth of software in safety-critical systems has pushed the cost for building aircraft to the limit of affordability. Given this increase, the current practice of build-then-test is no longer feasible. This blog posting describes recent work at the SEI to improve the quality of software-reliant systems through an approach known as the Reliability Validation and Improvement Framework that will lead to early defect discovery and incremental end-to-end validation.

The ubiquity of mobile devices provides new opportunities to warn people of emergencies and imminent threats using location-aware technologies. The Wireless Emergency Alerts (WEA) system, formerly known as the Commercial Mobile Alert Service (CMAS), is the newest addition to the Federal Emergency Management Agency (FEMA) Integrated Public Alert and Warning System (IPAWS), which allows authorities to broadcast emergency alerts to cell phone customers with WEA-enabled devices in an area affected by a disaster or a major emergency. This blog posting describes how the Software Engineering Institute's (SEI) work on architecture, integration, network security, and project management is assisting in implementing the WEA system, so it can handle a large number of alert originators and provide an effective nationwide wireless emergency warning system.

Building a complex weapon system in today's environment may involve many subsystems--propulsion, hydraulics, power, controls, radar, structures, navigation, computers, and communications. Design of these systems requires the expertise of engineers in particular disciplines, including mechanical engineering, electrical engineering, software engineering, metallurgical engineering, and many others. But some activities of system development are interdisciplinary, including requirements development, trade studies, and architecture design, to name a few. These tasks do not fit neatly into the traditional engineering disciplines and require the attention of engineering staff with broader skills and backgrounds. This need for breadth and experience is often met by systems engineers. Unfortunately, systems engineering is often not valued among all stakeholders in the Department of Defense (DoD), and it is often the first group of activities to be eliminated when a program is faced with budget constraints. This blog post highlights recent research aimed at demonstrating the value of systems engineering to program managers in the DoD and elsewhere.

In the first blog entry of this two-part series on common testing problems, I addressed the fact that testing is less effective, less efficient, and more expensive than it should be. This second posting of the series highlights results of an analysis that documents problems that commonly occur during testing. Specifically, this series of posts identifies and describes 77 testing problems organized into 14 categories; lists the potential symptoms by which each problem can be recognized, its potential negative consequences, and its potential causes; and makes recommendations for preventing the problems or mitigating their effects.

Software and systems architects face many challenges when designing life- and safety-critical systems, such as the attitude control system of a satellite, the autopilot system of a car, or the injection system of a medical infusion pump. These architects answer to an expanding group of stakeholders and often must balance the need to design a stable system with time-to-market constraints. Moreover, no matter what programming language architects choose, they cannot design a complete system without an appropriate tool environment that targets user requirements. A promising tool environment is the Architecture Analysis & Design Language (AADL), a modeling notation that employs both textual and graphical representations. This post, the second in a series on AADL, provides an overview of existing AADL tools and highlights the experience of researchers and practitioners who are developing and applying AADL tools to production projects.

As part of an ongoing effort to keep you informed about our latest work, I would like to let you know about some recently published SEI technical reports and notes. These reports highlight the latest work of SEI technologists in quantifying expert judgment, insider threat, detecting and preventing data exfiltration, and developing a common vocabulary for malware analysts. This post includes a listing of each report, author(s), and links where the published reports can be accessed on the SEI website.

In 2009, a popular blogger published a post entitled "SOA is Dead," which generated extensive commentary among those who work in the field of service-oriented architecture (SOA). Many practitioners in this field completely misinterpreted the post; some read the title and just assumed that the content referenced the demise of SOA. Quite the opposite, the post was inviting people to stop thinking about SOA as a set of technologies and start embracing SOA as an approach for designing, developing, and managing distributed systems that goes beyond just the technology. Unfortunately, even though SOA is still alive and widely adopted, a belief still persists that SOA can be purchased off the shelf. This post highlights recent research aimed at clarifying this misperception for architects, as well as identifying the elements that constitute a service-oriented system and the relationships between these elements.

A widely cited study for the National Institute of Standards and Technology (NIST) reports that inadequate testing methods and tools annually cost the U.S. economy between $22.2 billion and $59.5 billion, with roughly half of these costs borne by software developers in the form of extra testing and half by software users in the form of failure avoidance and mitigation efforts. The same study notes that between 25 and 90 percent of software development budgets are often spent on testing. This posting, the first in a two-part series, highlights results of an analysis that documents problems that commonly occur during testing. Specifically, this series of posts identifies and describes 77 testing problems organized into 14 categories; lists the potential symptoms by which each problem can be recognized, its potential negative consequences, and its potential causes; and makes recommendations for preventing the problems or mitigating their effects.

In launching the SEI blog two years ago, one of our top priorities was to advance the scope and impact of SEI research and development projects, while increasing the visibility of the work by SEI technologists who staff these projects. After 114 posts and 72,608 visits from readers, this post reflects on some highlights from the last two years and gives our readers a preview of posts to come.

This blog post describes a research initiative aimed at eliminating vulnerabilities resulting from memory management problems in C and C++. Memory problems in C and C++ can lead to serious software vulnerabilities and defects, including hard-to-fix bugs, performance impediments, program crashes (including null pointer dereferences and out-of-memory errors), and remote code execution.
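
A small, self-contained illustration of one of the bug classes mentioned appears below (it is not code from the research itself): a pointer that may legitimately be null is dereferenced without a check, next to a safer pattern that checks the pointer and lets RAII own the allocation.

```cpp
#include <iostream>
#include <memory>

// Illustration of one memory-management bug class named above (not code from
// the research): dereferencing a pointer that may be null, alongside a safer
// pattern that checks the pointer and uses RAII for ownership.

struct Config { int retries = 3; };

// May legitimately fail and return nullptr, e.g., when no config is present.
Config* loadConfigUnsafe(bool available) {
    return available ? new Config{} : nullptr;
}

// Safer shape: ownership is explicit and callers are nudged to handle failure.
std::unique_ptr<Config> loadConfigSafe(bool available) {
    return available ? std::make_unique<Config>() : nullptr;
}

int main() {
    Config* raw = loadConfigUnsafe(false);
    // BUG (left commented out so this program runs): raw is null here, and
    // dereferencing it is undefined behavior -- a classic crash or, in less
    // benign settings, an exploitable condition.
    // std::cout << raw->retries << "\n";
    delete raw;   // deleting a null pointer is harmless; the dereference was not

    auto cfg = loadConfigSafe(false);
    int retries = cfg ? cfg->retries : 0;   // explicit null check, no undefined behavior
    std::cout << "retries: " << retries << "\n";
    return 0;
}
```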

The adoption of new practices, agile or otherwise, is a task that is best undertaken with both eyes open. There are often disconnects between the adopting organization's current practices and culture and the new practices being adopted. This posting is the second installment in a series on Readiness & Fit Analysis (RFA), a model and method for understanding the risks involved when contemplating or embarking on the adoption of new practices, in this case agile methods.

When a system fails, engineers too often focus on the physical components but pay scant attention to the software. In software-reliant systems, ignoring or deemphasizing the importance of software failures can be a recipe for disaster. This blog post is the first in a series on recent developments with the Architecture Analysis & Design Language (AADL) standard. Future posts will explore recent tools and projects associated with AADL, which provides formal modeling concepts for the description and analysis of application system architectures in terms of distinct components and their interactions. As this series will demonstrate, the use of AADL helps alleviate mismatched assumptions between the hardware, software, and their interactions that can lead to system failures.

In 2011, Col. Timothy Hill, director of the Futures Directorate within the Army Intelligence and Security Command, urged industry to take a more open-standards approach to cloud computing. "Interoperability between clouds, as well as the portability of files from one cloud to another, has been a sticking point in general adoption of cloud computing," Hill said during a panel at the AFCEA International 2011 Joint Warfighting Conference. Hill's view has been echoed by many in the cloud computing community, who believe that the absence of interoperability has become a barrier to adoption. This posting reports on recent research exploring the role of standards in cloud computing and offers recommendations for future standardization efforts.

Some of the principal challenges faced by developers, managers, and researchers in software engineering and cybersecurity involve measurement and evaluation. In two previous blog posts, I summarized some features of the overall SEI Technology Strategy. This post focuses on how the SEI measures and evaluates its research program to help ensure these activities address the most significant and pervasive problems for the Department of Defense (DoD). Our goal is to conduct projects that are technically challenging and whose solutions will make a significant difference in the development and operation of software-reliant systems. In this post, we'll describe the process used to measure and evaluate the progress of initiated projects at the SEI to help maximize their potential for success.

Knowing what assets are on a network, particularly which assets are visible to outsiders, is an important step in achieving network situational awareness. This awareness is particularly important for large, enterprise-class networks, such as those of telephone, mobile, and internet providers. These providers find it hard to track hosts, servers, data sets, and other vulnerable assets in the network.
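
A toy sketch of the first bookkeeping step behind such an inventory is shown below. It uses a made-up flow-record format rather than real NetFlow or SiLK data and simply tallies which addresses have been observed serving which ports.

```cpp
#include <iostream>
#include <map>
#include <set>
#include <string>
#include <vector>

// Toy flow record; real network flow data (e.g., NetFlow) carries many more
// fields. This only illustrates the bookkeeping behind an asset inventory:
// which addresses have been seen serving which ports.
struct Flow {
    std::string serverAddress;   // responding host
    int serverPort;              // service port it answered on
};

int main() {
    std::vector<Flow> observed{
        {"203.0.113.10", 443}, {"203.0.113.10", 22},
        {"203.0.113.25", 80},  {"203.0.113.10", 443},
    };

    // address -> set of ports it was observed serving
    std::map<std::string, std::set<int>> inventory;
    for (const Flow& f : observed)
        inventory[f.serverAddress].insert(f.serverPort);

    for (const auto& [address, ports] : inventory) {
        std::cout << address << " serves ports:";
        for (int p : ports) std::cout << ' ' << p;
        std::cout << '\n';
    }
    return 0;
}
```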

As part of an ongoing effort to keep you informed about our latest work, I'd like to let you know about some recently published SEI technical reports and notes. These reports highlight the latest work of SEI technologists in systems engineering, resilience, and insider threat. This post includes a listing of each report, author(s), and links where the published reports can be accessed on the SEI website.

The Department of Defense (DoD) has become deeply reliant on software. As a federally funded research and development center (FFRDC), the SEI is chartered to work with the DoD to meet the challenges of designing, producing, assuring, and evolving software-reliant systems in an affordable and dependable manner. This blog post is the second in a multi-part series that describes key elements of our forthcoming Strategic Research Plan that address these challenges through research, acquisition support, and collaboration with the DoD, other federal agencies, industry, and academia.

An autonomous system is a computational system that performs a desired task, often without human guidance. We use varying degrees of autonomy in robotic systems for manufacturing, exploration of planets and space debris, water treatment, ambient sensing, and even cleaning floors. This blog post discusses practical autonomous systems that we are actively developing at the SEI. Specifically, this post focuses on a new research effort at the SEI called Self-governing Mobile Adhocs with Sensors and Handhelds (SMASH) that is forging collaborations with researchers, professors, and students with the goal of enabling more effective search-and-rescue crews.

It's undeniable that the field of software architecture has grown during the past 20 years. In 2010, CNN/Money magazine identified "software architect" as the most desirable job in the U.S. Since 2004, the SEI has trained people from more than 900 organizations in the principles and practices of software architecture, and more than 1,800 people have earned the SEI Software Architecture Professional certificate. It is widely recognized today that architecture serves as the blueprint for both the system and the project developing it, defining the work assignments that must be performed by design and implementation teams. Architecture is the primary purveyor of system quality attributes, which are hard to achieve without a unifying architecture; it's also the conceptual glue that holds every phase of projects together for their many stakeholders. This blog posting--the final installment in a series--provides lightly edited transcriptions of presentations by Jeromy Carriere and Ian Gorton at a SATURN 2012 roundtable, "Reflections on 20 Years of Software Architecture."

The Department of Defense (DoD) has become deeply and fundamentally reliant on software. As a federally funded research and development center (FFRDC), the SEI is chartered to work with the DoD to meet the challenges of designing, producing, assuring, and evolving software-reliant systems in an affordable and dependable manner. This blog post--the first in a multi-part series--outlines key elements of the forthcoming SEI Strategic Research Plan that addresses these challenges through research and acquisition support and collaboration with DoD, other federal agencies, industry, and academia.