Archive: 2015-01

The federal government is facing a shortage of cybersecurity professionals that puts our national security at risk, according to recent research. "As cyber attacks have increased and there is increased awareness of vulnerabilities, there is more demand for the professionals who can stop such attacks. But educating, recruiting, training and hiring these cybersecurity professionals takes time," the research states. Recognizing these realities, the U.S. Department of Homeland Security (DHS) National Cyber Security Division (NCSD) enlisted the resources of the Software Engineering Institute (SEI) to develop a curriculum for a Master of Software Assurance degree program and define transition strategies for implementing it. This blog post presents an overview of the Master of Software Assurance curriculum project, including its history, student prerequisites and outcomes, a core body of knowledge, and a curriculum architecture from which to create such a degree program.

Malicious attackers and penetration testers can use some of the same tools. Attackers use them to cause harm while penetration testers use them to bring value to organizations. In this blog post, I've partnered with colleagues Jason Frank and Will Schroeder from The Veris Group's Adaptive Threat Division to describe some of the common penetration testing tools and techniques that can greatly benefit network defenders. While this blog post cannot cover all the techniques and shortcuts we use in the field, we describe a set of 10 tactics that cause very little network disruption, are easy to use, and are freely available.

It's the holiday season, a traditionally busy time for many data centers as online shopping surges and many staff members take vacations. When you see abnormal traffic patterns and overall volume starts to rise, what is the best way to determine the cause? Perhaps people are being drawn to your business and you will soon need to add surge capacity, or perhaps you are in the early stages of a denial-of-service attack and need to contact your service provider. This blog post highlights recent work by CERT researchers that helps organizations gain cyber situational awareness based on network flow analysis and provides a tool for gaining invaluable insight into how your network is being used. More importantly, it helps you decide how to respond to changes in the online environment.
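To make the idea concrete, here is a minimal sketch, in Python using only the standard library, of the kind of volume check that network flow data enables. The flow totals, hour labels, and three-standard-deviation threshold are hypothetical illustrations, not part of the CERT tooling described in the post.

```python
from statistics import mean, stdev

def flag_anomalous_hours(baseline_bytes, current_bytes, threshold=3.0):
    """Flag hours whose byte volume deviates sharply from a historical baseline.

    baseline_bytes: list of total bytes observed in past comparable hours
    current_bytes:  dict mapping hour label -> total bytes observed now
    threshold:      number of standard deviations considered anomalous
    """
    mu, sigma = mean(baseline_bytes), stdev(baseline_bytes)
    anomalies = {}
    for hour, volume in current_bytes.items():
        z = (volume - mu) / sigma if sigma else 0.0
        if abs(z) >= threshold:
            anomalies[hour] = z  # positive z: surge; negative z: drop-off
    return anomalies

# Example: a holiday-season surge stands out against a quieter baseline.
baseline = [120e9, 115e9, 130e9, 125e9, 118e9, 122e9, 127e9]
current = {"2015-12-14T20": 410e9, "2015-12-14T21": 128e9}
print(flag_anomalous_hours(baseline, current))
```

A surge alone does not tell you whether it is customer demand or an attack, but flagging it quickly is what lets you start that investigation.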

At an open architecture summit in November 2014, Katrina G. McFarland, assistant secretary of defense for acquisition, said that 75 percent of all Defense Department acquisition strategies implement open systems architecture across all services and agencies. "This department is seriously engaged in trying to understand how to help our program managers and our department and our industry look at open architecture and its benefits," McFarland said, "and understand truly what our objectives are related to intellectual property and making sure that we're doing it based on the best interest of national security relative to a business case." Open systems architecture (OSA) integrates business and technical practices to create systems with interoperable and reusable components. OSA offers outstanding potential for creating resilient and adaptable systems and is therefore a priority for the DoD. The challenges with OSA, however, make it one of the most ambitious endeavors in software architecture today. A group of researchers at the SEI recently held an informal roundtable with David Sharp, a senior technical fellow at The Boeing Company and an expert in software architecture for embedded systems and systems of systems, to discuss OSA-based approaches and how best to help the DoD achieve them. This blog post presents highlights of the discussion with Sharp on OSA approaches and how they can best be integrated in DoD system development.

Many systems and platforms, from unmanned aerial vehicles to minivans and smartphones, are realizing the promise of Open Systems Architecture (OSA). Core tenets of OSA are the broad availability of standards and designs, the sharing of information among developers, and, in some cases, downloadable tool kits. In return for this openness, a broader community of potential developers and applications emerges, which in turn increases adoption and use. There is a trade-off, however. Openness is a two-way street: the same mechanisms that make OSA attractive also create opportunities for cyber intrusion and attack and allow less-than-ideal code to enter the system. This blog post briefly examines the potential, good and bad, of OSA and reviews four best practices for open source ecosystems.

According to the National Institute of Standards and Technology (NIST), Information Security Continuous Monitoring (ISCM) is a process for continuously analyzing, reporting, and responding to risks to operational resilience (in an automated manner, whenever possible). Compared to the traditional method of collecting and assessing risks at longer intervals--for instance, monthly or annually--ISCM promises to provide near-real-time situational awareness of an organization's risk profile. ISCM creates challenges as well as benefits, however, because the velocity of information gathered using ISCM is drastically increased. Development, operation, and maintenance processes built for a more leisurely pace can thus be overwhelmed. This blog post explores how organizations can leverage Agile methods to keep pace with the increased velocity of ISCM risk information, while ensuring that changes to systems are conducted in a controlled manner.
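One way to keep Agile cadences aligned with the faster flow of ISCM data is to triage findings automatically into a risk-ranked backlog that the team pulls from each iteration. The sketch below is a hypothetical illustration in Python, not an SEI or NIST tool; the field names, scoring formula, and capacity cut are all assumptions.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    asset: str        # affected system or component
    likelihood: float # 0.0 - 1.0, estimated by the monitoring pipeline
    impact: float     # 0.0 - 1.0, based on asset criticality

def build_backlog(findings, capacity):
    """Rank continuous-monitoring findings by risk and cut an iteration backlog.

    Risk is scored as likelihood * impact; only the top 'capacity' items are
    planned into the next iteration, and the rest remain queued.
    """
    ranked = sorted(findings, key=lambda f: f.likelihood * f.impact, reverse=True)
    return ranked[:capacity], ranked[capacity:]

findings = [
    Finding("payroll-db", 0.9, 0.8),
    Finding("intranet-wiki", 0.7, 0.2),
    Finding("vpn-gateway", 0.6, 0.9),
]
this_iteration, deferred = build_backlog(findings, capacity=2)
print([f.asset for f in this_iteration])  # highest-risk items first
```

The point is not the scoring formula but the control point it provides: findings arrive continuously, yet changes to systems are still planned and made in a deliberate, iteration-by-iteration manner.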

In my preceding blog posts, I promised to provide more examples highlighting the importance of software sustainment in the U.S. Department of Defense (DoD). My focus is on sustaining legacy weapons systems that are no longer in production but are expected to remain a key component of our defense capability for decades to come. Although these legacy systems are no longer in the acquisition phase, software upgrade cycles are still needed to refresh their capabilities every 18 to 24 months. In addition, significant modernization can often be achieved through more extensive, focused software upgrades with relatively modest hardware changes. This third blog post describes effective sustainment engineering efforts in the Army, using examples from across its software engineering centers. These examples are tied to SEI research on capability maturity models, agility, and the Architecture Analysis and Design Language (AADL) modeling notation.

This is the first post in a three-part series.

Software and acquisition professionals often have questions about recommended practices related to modern software development methods, techniques, and tools, such as how to apply agile methods in government acquisition frameworks, systematic verification and validation of safety-critical systems, and operational risk management. In the Department of Defense (DoD), these techniques are just a few of the options available to face the myriad challenges in producing large, secure software-reliant systems on schedule and within budget.

In an effort to offer our assessment of recommended techniques in these areas, SEI researchers built upon an existing collaborative online environment known as SPRUCE (Systems and Software Producibility Collaboration Environment), hosted on the Cyber Security & Information Systems Information Analysis Center (CSIAC) website. From June 2013 to June 2014, the SEI assembled guidance on a variety of topics based on relevance, the maturity of the practices described, and timeliness with respect to current events. For example, shortly after the Target security breach of late 2013, we selected Managing Operational Resilience as a topic.

Ultimately, the SEI curated recommended practices on five software topics: Agile at Scale, Safety-Critical Systems, Monitoring Software-Intensive System Acquisition Programs, Managing Intellectual Property in the Acquisition of Software-Intensive Systems, and Managing Operational Resilience. In addition to a recently published paper on SEI efforts and individual posts on the SPRUCE site, these recommended practices will be published in a series of posts on the SEI blog. This post, the first in a three-part series by Robert Ferguson, explores the challenges to Monitoring Software-Intensive System Acquisition (SISA) programs and presents the first two recommended best practices, as detailed in the SPRUCE post. The second post in this series will present the next three best practices. The final post will present the final two recommendations, as well as conditions that will allow organizations to derive the most benefit from these practices.

By Donald Firesmith
Principal Engineer
Software Solutions Division

Due to advances in hardware and software technologies, Department of Defense (DoD) systems today are highly capable and complex. However, they also face increasing scale, computation, and security challenges. Compounding these challenges, DoD systems were historically designed using stove-piped architectures that lock the Government into a small number of system integrators, each devising proprietary point solutions that are expensive to develop and sustain over the lifecycle. Although these stove-piped solutions have been problematic (and unsustainable) for years, the budget cuts occurring under sequestration are motivating the DoD to reinvigorate its focus on identifying alternative means to drive down costs, create more affordable acquisition choices, and improve acquisition program performance. A promising approach to meet these goals is Open Systems Architecture (OSA), which combines business and technical practices to create systems with interoperable and reusable components.

This blog posting expands on earlier coverage of how acquisition professionals and system integrators can apply OSA practices to effectively decompose large monolithic business and technical architectures into manageable and modular solutions that can integrate innovation more rapidly and lower total ownership costs.

By Douglas Gray
Information Security Engineer
CERT Division

In leveraging threat intelligence, the operational resilience practitioner need not create a competing process independent of other frameworks the organization is leveraging. In fact, the use of intelligence products in managing operational resilience is not only compatible with many existing frameworks but is, in many cases, inherent in them. An in-depth discussion of some of the more widely used operational resilience measurement and decision-making best practices--including the CERT® Resilience Management Model (CERT-RMM), the Operationally Critical Threat, Asset, and Vulnerability Evaluation (OCTAVE) Allegro methodology, the NIST Risk Management Framework (RMF), Agile, and the Project Management Body of Knowledge (PMBOK)--is beyond the scope of this post. Instead, this blog post, the second in a series, discusses how to operationalize intelligence products to build the operational resilience of organizational assets and services.

By David Svoboda
Senior Member of the Technical Staff
CERT Division

Whether Java is more secure than C is a simple question to ask, but a hard question to answer well. When we began writing the SEI CERT Oracle Coding Standard for Java, we thought that Java would require fewer secure coding rules than the SEI CERT C Coding Standard because Java was designed with security in mind. We naively assumed that a more secure language would need fewer rules than a less secure one. However, Java has 168 coding rules compared to just 116 for C. Why? Was our (admittedly simplistic) assumption completely spurious? Or, are there problems with our C or Java rules? Or, are Java programs, on average, just as susceptible to vulnerabilities as C programs? In this post, I attempt to analyze our CERT rules for both C and Java to determine if they indeed refute the conventional wisdom that Java is more secure than C.

By Douglas Gray
Information Security Engineer
CERT Division

What differentiates cybersecurity from other domains in information technology (IT)? Cybersecurity must account for an adversary. It is the intentions, capabilities, and prevailing attack patterns of these adversaries that form the basis of risk management and the development of requirements for cybersecurity programs. In this blog post, the first in a series, I present strategies that enable resilience practitioners to organize and articulate their intelligence needs and relevant organizational information, establish a collaborative relationship with their intelligence providers, organize and assess intelligence, and act on intelligence via frameworks such as the CERT® Resilience Management Model (CERT-RMM), the Operationally Critical Threat, Asset, and Vulnerability Evaluation (OCTAVE) Allegro methodology, the NIST Risk Management Framework, Agile, and the Project Management Body of Knowledge (PMBOK). Subsequent posts in this series will discuss how these common resilience, risk, and project-management frameworks can be leveraged to integrate threat intelligence into improving the operational resilience of organizations.

By Donald Firesmith
Principal Engineer
Software Solutions Division

There are more than 200 different types of testing, and many stakeholders in testing--including the testers themselves and test managers--are often largely unaware of them or do not know how to perform them. Similarly, test planning frequently overlooks important types of testing. The primary goals of this series of blog posts are to raise awareness of the large number of testing types, to help verify the completeness of test planning, and to better guide the testing process. In the previous blog entry in this series, I introduced a taxonomy of testing in which 15 subtypes of testing were organized around how they address the classic 5W+2H questions: what, when, why, who, where, how, and how well. This and future postings in this series will cover each of these seven categories of testing, thereby providing structure to the roughly 200 types of testing currently used to test software-reliant systems and software applications.

This blog entry covers the four top-level subtypes of testing that answer the following questions:

  • What-based testing: What is being tested?
    • Object-Under-Test-based (OUT) testing
    • Domain-based testing
  • When-based testing: When is the testing being performed?
    • Order-based testing
    • Lifecycle-based testing
    • Phase-based testing
    • Built-in-Test (BIT) testing

After exploring the what-based and when-based categories of testing, this post presents a section on using the taxonomy, as well as opportunities for accessing it.

By Julien Delange
Member of the Technical Staff
Software Solutions Division

For decades, safety-critical systems have become more software intensive in every domain--avionics, aerospace, automobiles, and medicine. Software acquisition is now one of the biggest production costs for safety-critical systems. These systems comprise many software and hardware components, executed on different processors and interconnected using various buses and protocols. For instance, cars are now equipped with more than 70 electronic control units (ECUs) interconnected by different buses and require about 100 million source lines of code (SLOC) to provide driver assistance, entertainment systems, and the necessary safety features. This blog post discusses the impact of complexity in software models and presents our tool that produces complexity metrics from software models.
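To give a flavor of what a model-level complexity metric can look like, the following sketch counts connections per component in a toy component-and-connector model and reports the most heavily coupled components. It is a simplified, hypothetical illustration in plain Python, not the metrics produced by the tool discussed in the post.

```python
from collections import Counter

# A toy component-and-connector model: each pair is a connection between
# two components (for example, ECUs on a vehicle bus).
connections = [
    ("engine_control", "transmission_control"),
    ("engine_control", "dashboard"),
    ("brake_control", "dashboard"),
    ("engine_control", "brake_control"),
    ("infotainment", "dashboard"),
]

def coupling_per_component(connections):
    """Count how many connections touch each component (a simple coupling metric)."""
    counts = Counter()
    for a, b in connections:
        counts[a] += 1
        counts[b] += 1
    return counts

for component, degree in coupling_per_component(connections).most_common():
    print(f"{component}: {degree} connections")
```

Even a crude count like this makes it visible which components carry the most integration burden and are therefore the likeliest sources of unplanned rework when the model changes.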

By Douglas C. Schmidt
Principal Researcher

As part of an ongoing effort to keep you informed about our latest work, I would like to let you know about some recently published SEI technical reports, technical notes, and white papers. These reports highlight the latest work of SEI technologists in Agile software development and Agile-at-scale, software architecture fault analysis, computer network design, confidence in system properties, and system-of-systems development as well as commentary from two CERT researchers on the Proposed BIS Wassenaar Rule. This post includes a listing of each report, author(s), and links where the published reports can be accessed on the SEI website.

By Donald Firesmith
Principal Engineer
Software Solutions Division

While evaluating the test programs of numerous defense contractors, we have often observed that they are quite incomplete. For example, they typically fail to address all the relevant types of testing that should be used to (1) uncover defects, (2) provide evidence concerning the quality and maturity of the system or software under test, and (3) demonstrate the readiness of the system or software for acceptance and for being placed into operation. Instead, many test programs address only a relatively small subset of the total number of potentially relevant types of testing, such as unit testing, integration testing, system testing, and acceptance testing. In some cases, the missing testing types are actually performed (to some extent) but not addressed in test-related planning documents, such as test strategies, system and software test plans (STPs), and the testing sections of systems engineering management plans (SEMPs) and software development plans (SDPs). In many cases, however, they are neither mentioned nor performed. This blog post, the first in a series on the many types of testing, examines the negative consequences of not addressing all relevant testing types and introduces a taxonomy of testing types to help testing stakeholders understand--rather than overlook--them.

By Kevin Fall
Deputy Director, Research, and CTO

This is the second installment in a series on the SEI's technical strategic plan.

Department of Defense (DoD) systems are becoming increasingly software reliant, at a time when concerns about cybersecurity are at an all-time high. Consequently, the DoD, and the government more broadly, is expending significantly more time, effort, and money in creating, securing, and maintaining software-reliant systems and networks. Our first post in this series provided an overview of the SEI's five-year technical strategic plan, which aims to equip the government with the best combination of thinking, technology, and methods to address its software and cybersecurity challenges. This blog post, the second in the series, looks at ongoing and new research we are undertaking to address key cybersecurity, software engineering, and related acquisition issues faced by the government and DoD.

Object-oriented programs present considerable challenges to reverse engineers. For example, C++ classes are high-level structures that lead to complex arrangements of assembly instructions when compiled. These complexities are exacerbated for malware analysts because malware rarely has source code available; thus, analysts must grapple with sophisticated data structures exclusively at the machine code level. As more and more object-oriented malware is written in C++, analysts are increasingly faced with the challenges of reverse engineering C++ data structures. This blog post is the first in a series that discusses tools developed by the Software Engineering Institute's CERT Division to support reverse engineering and malware analysis tasks on object-oriented C++ programs.

This is the second installment of two blog posts highlighting recommended practices for achieving Agile at Scale that were originally published on the Cyber Security & Information Systems Information Analysis Center (CSIAC) website. The first post in the series, by Ipek Ozkaya and Robert Nord, explored challenges to achieving Agile at Scale and presented the first five recommended practices:


1. Team coordination
2. Architectural runway
3. Align development and decomposition
4. Quality-attribute scenarios
5. Test-driven development

This post presents the remaining five technical best practices, as well as three conditions that will help organizations achieve the most value from these recommended practices. This post was originally published in its entirety on the SPRUCE website.

We are writing to let our SEI Blog readers know about some changes to SEI blogs that make our content areas more accessible and easier to navigate. On August 6, 2015, the SEI will unveil a new website, SEI Insights, that will give you access to all SEI blogs--the CERT/CC, Insider Threat, DevOps and SATURN, and SEI--in one mobile-friendly location. At SEI Insights, readers can quickly review the most recent posts from all SEI blogs and navigate to each blog.

The biweekly DevOps series that was part of the SEI Blog will now have its own blog page accessible from the Insights homepage. The SEI and DevOps blogs, as well as the other blogs on the site, will maintain individual RSS feeds.

To access SEI Insights, please visit http://insights.sei.cmu.edu/.

Your links to the former blog sites will temporarily redirect to SEI Insights, but we encourage you to update existing links and bookmarks.

Thank you for your continued support of the SEI blogs.

This post is the first in a two-part series highlighting 10 recommended practices for achieving agile at scale.

Software and acquisition professionals often have questions about recommended practices related to modern software development methods, techniques, and tools, such as how to apply agile methods in government acquisition frameworks, systematic verification and validation of safety-critical systems, and operational risk management. In the Department of Defense (DoD), these techniques are just a few of the options available to face the myriad challenges in producing large, secure software-reliant systems on schedule and within budget.

In their haste to deliver software capabilities, developers sometimes engage in less-than-optimal coding practices. If not addressed, these shortcuts can ultimately yield unexpected rework costs that offset the benefits of rapid delivery. Technical debt conceptualizes the tradeoff between the short-term benefits of rapid delivery and long-term value. Taking shortcuts to expedite the delivery of features in the short term incurs technical debt, analogous to financial debt, that must be paid off later to optimize long-term success. Managing technical debt is an increasingly critical aspect of producing cost-effective, timely, and high-quality software products, especially in projects that apply agile methods.

In their current state, wearable computing devices, such as glasses, watches, or sensors embedded into your clothing, are obtrusive. Jason Hong, associate professor of computer science at Carnegie Mellon University, wrote in a 2014 co-authored article in Pervasive Computing that while wearables gather input from sensors placed optimally on our bodies, they can also be "harder to accommodate due to our social context and requirements to keep them small and lightweight."

The SEI Blog continues to attract an ever-increasing number of readers interested in learning more about our work in agile metrics, high-performance computing, malware analysis, testing, and other topics. As we reach the mid-year point, this blog posting highlights our 10 most popular posts and links to additional related resources you might find of interest. (Many of our posts cover related research areas, so we grouped them together for ease of reference.)

Before we take a deeper dive into the posts, let's take a look at the top 10 posts (ordered by number of visits, with #1 being the highest number of visits):

This is the second installment of two blog posts highlighting recommended practices for developing safety-critical systems that were originally published on the Cyber Security & Information Systems Information Analysis Center (CSIAC) website. The first post in the series, by Peter Feiler, Julien Delange, and Charles Weinstock, explored challenges to developing safety-critical systems and presented the first three practices:

  1. Use quality attribute scenarios and mission-thread analyses to identify safety-critical requirements.
  2. Specify safety-critical requirements, and prioritize them.
  3. Conduct hazard and static analyses to guide architectural and design decisions.

Software and acquisition professionals often have questions about recommended practices related to modern software development methods, techniques, and tools, such as how to apply agile methods in government acquisition frameworks, systematic verification and validation of safety-critical systems, and operational risk management. In the Department of Defense (DoD), these techniques are just a few of the options available to face the myriad challenges in producing large, secure software-reliant systems on schedule and within budget.

Using the Architecture Analysis & Design Language (AADL) modeling notation early in the development process not only helps the development team detect design errors before implementation, but also supports implementation efforts and produces high-quality code. Our recent blog posts and webinar have shown how AADL can identify potential design errors and help avoid propagating them through the development process, where remediation can require massive re-engineering, delay the schedule, and increase costs.

This post is the first in a series introducing our research into software and system complexity and its impact in avionics.

On July 6, 2013, an Asiana Airlines Boeing 777 airplane flying from Seoul, South Korea, crashed on final approach into San Francisco International airport. While 304 of the 307 passengers and crew members on board survived, almost 200 were injured (10 critically) and three young women died. The National Transportation Safety Board (NTSB) blamed the crash on the pilots, but also said "the complexity of the Boeing 777's auto throttle and auto flight director--two of the plane's key systems for controlling flight--contributed to the accident."

Software and acquisition professionals often have questions about recommended practices related to modern software development methods, techniques, and tools, such as how to apply agile methods in government acquisition frameworks, systematic verification and validation of safety-critical systems, and operational risk management. In the Department of Defense (DoD), these techniques are just a few of the options available to face the myriad challenges in producing large, secure software-reliant systems on schedule and within budget.

In 2010, the Office of Management and Budget (OMB) issued a 25-point plan to reform IT that called on federal agencies to employ "shorter delivery time frames, an approach consistent with Agile" when developing or acquiring IT. OMB data suggested Agile practices could help federal agencies and other organizations design and acquire software more effectively, but agencies needed to understand the risks involved in adopting these practices.

As part of an ongoing effort to keep you informed about our latest work, I would like to let you know about some recently published SEI technical reports and notes. These reports highlight the latest work of SEI technologists in governing operational resilience, model-driven engineering, software quality, Android app analysis, software architecture, and emerging technologies. This post includes a listing of each report, author(s), and links where the published reports can be accessed on the SEI website.

Software is a growing component of systems used by Department of Defense (DoD), government, and industry organizations. As organizations become more dependent on software, security-related risks to their organizational missions are also increasing. Despite this rise in security risk exposure, most organizations follow a familiar pattern when managing those risks.

Legacy systems represent a massive operations and maintenance (O&M) expense. According to a recent study, 75 percent of North American and European enterprise information technology (IT) budgets are expended on ongoing O&M, leaving a mere 25 percent for new investments. Another study found that nearly three quarters of the U.S. federal IT budget is spent supporting legacy systems. For decades, the Department of Defense (DoD) has been attempting to modernize about 2,200 business systems, which consume billions of dollars in annual expenditures intended to support business functions and operations.

This post was co-authored by Bill Nichols.


MITRE's Top 25 Most Dangerous Software Errors is a list that details quality problems as well as security problems. This list aims to help software developers "prevent the kinds of vulnerabilities that plague the software industry, by identifying and avoiding all-too-common mistakes that occur before software is even shipped." These vulnerabilities often result in software that does not function as intended, presenting an opportunity for attackers to compromise a system.
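As a minimal illustration of the kind of mistake the list targets, the sketch below contrasts the string-concatenation error behind SQL injection, a perennial Top 25 entry, with the parameterized query that avoids it. It uses Python's built-in sqlite3 module, and the table and queries are hypothetical examples.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user_unsafe(name):
    # Mistake: attacker-controlled input is spliced into the SQL text,
    # so a name like "x' OR '1'='1" changes the meaning of the query.
    return conn.execute(
        "SELECT * FROM users WHERE name = '" + name + "'").fetchall()

def find_user_safe(name):
    # Parameterized query: the input is bound as data, never parsed as SQL.
    return conn.execute(
        "SELECT * FROM users WHERE name = ?", (name,)).fetchall()

print(find_user_unsafe("x' OR '1'='1"))  # returns every row in the table
print(find_user_safe("x' OR '1'='1"))    # returns nothing
```

The same code is both a quality defect (the query does not do what the developer intended for unusual inputs) and a security defect (an attacker can exploit that gap), which is exactly the overlap the Top 25 list emphasizes.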

For two consecutive years, organizations reported that insider crimes caused comparable damage (34 percent) to external attacks (31 percent), according to a recent cybercrime report co-sponsored by the CERT Division at the Carnegie Mellon University Software Engineering Institute. Despite this near parity, media reports of attacks often focus on external attacks and their aftermath, yet an attack can be equally or even more devastating when carried out from within an organization. Insider threats are influenced by a combination of technical, behavioral, and organizational issues and must be addressed by policies, procedures, and technologies. Researchers at the CERT Insider Threat Center define insider threat as actions by an individual who meets the following criteria:

In 2014, approximately 1 billion records of personally identifiable information were compromised as a result of cybersecurity vulnerabilities. In the face of this onslaught of compromises, it is important to examine fundamental insecurities that CERT researchers have identified and that readers of the CERT/CC blog have found compelling. This post, the first in a series highlighting CERT resources available to the public, including blogs and vulnerability notes, focuses on the CERT/CC blog and highlights security vulnerability and network security resources to help organizations in government and industry protect against breaches that compromise data.

In Department of Defense (DoD) programs, cooperation among software and system components is critical. A system of systems (SoS) is used to accomplish missions in which cooperation among individual systems provides (new) capabilities that the systems could not provide on their own. SoS capabilities are a major driver in the architecture of the SoS and the selection of constituent systems for the SoS. There are additional critical drivers, however, that must be accounted for in the architecture because they significantly impact the behavior of the SoS capabilities, as well as the development and sustainment of the SoS and its constituent systems' architectures. These additional drivers are the quality attributes, such as performance, availability, scalability, security, usability, testability, safety, training, reusability, interoperability, and maintainability. This blog post, the first in a series, introduces the Mission Thread Workshop (MTW) and describes the role it plays in helping SoS programs elicit and refine end-to-end SoS mission threads augmented with quality attribute considerations.

One of the most important and widely discussed trends within the software testing community is shift left testing, which simply means beginning testing as early as practical in the lifecycle. What is less widely known, both inside and outside the testing community, is that testers can employ four fundamentally different approaches to shift testing to the left. Unfortunately, different people commonly use the generic term shift left to mean different approaches, which can lead to serious misunderstandings. This blog post explains the importance of shift left testing and defines each of these four approaches using variants of the classic V model to illustrate them.
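The following sketch illustrates the spirit of shifting testing left: a unit test written directly from the requirement, so a boundary defect surfaces at the unit level rather than at system or acceptance test. It uses Python's standard unittest module, and the discount function and its business rule are hypothetical examples, not drawn from the post.

```python
import unittest

def bulk_discount(quantity, unit_price):
    """Apply a 10 percent discount to orders of 100 units or more."""
    total = quantity * unit_price
    return total * 0.9 if quantity >= 100 else total

class BulkDiscountTest(unittest.TestCase):
    # Written from the requirement, so the boundary is checked up front,
    # long before integration or system testing begins.
    def test_discount_applies_at_exactly_100_units(self):
        self.assertAlmostEqual(bulk_discount(100, 2.0), 180.0)

    def test_no_discount_below_threshold(self):
        self.assertAlmostEqual(bulk_discount(99, 2.0), 198.0)

if __name__ == "__main__":
    unittest.main()
```

Which of the four shift-left approaches a team uses determines where in the lifecycle tests like this are authored and run, which is exactly the distinction the post draws.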

This blog post was co-authored by Will Klieber.

Each software application installed on a mobile smartphone, whether a new app or an update, can introduce new, unintentional vulnerabilities or malicious code. These problems can lead to security challenges for organizations whose staff uses mobile phones for work. In April 2014, we published a blog post highlighting DidFail (Droid Intent Data Flow Analysis for Information Leakage), which is a static analysis tool for Android app sets that addresses data privacy and security issues faced by both individual smartphone users and organizations. This post highlights enhancements made to DidFail in late 2014 and an enterprise-level approach for using the tool.

As recent news headlines about Shellshock, Sony, Anthem, and Target have demonstrated, software vulnerabilities are on the rise. The U.S. Government Accountability Office reported in 2013 that "operational vulnerabilities have increased 780 percent over the past six years." These vulnerabilities can be hard and expensive to eradicate, especially if introduced during the design phase. One issue is that design defects exist at a deeper architectural level and thus can be hard to find and address. Although coding-related vulnerabilities are preventable and detectable, until recently scant attention has been paid to vulnerabilities arising from requirements and design defects.

Mismatched assumptions about hardware, software, and their interactions often result in system problems detected too late in the development lifecycle, which is an expensive and potentially dangerous situation for developers and users of mission- and safety-critical technologies. To address this problem, the Society of Automotive Engineers (SAE) released the aerospace standard AS5506, named the Architecture Analysis & Design Language (AADL). The AADL standard defines a modeling notation based on a textual and graphic representation used by development organizations to conduct lightweight, rigorous--yet comparatively inexpensive--analyses of critical real-time factors, such as performance, dependability, security, and data integrity.

As part of an ongoing effort to keep you informed about our latest work, I would like to let you know about some recently published SEI technical reports and notes. These reports highlight the latest work of SEI technologists in resilience, metrics, sustainment, and software assurance. This post includes a listing of each report, author(s), and links where the published reports can be accessed on the SEI website.

The Department of Defense (DoD) and other government agencies increasingly rely on software and networked software systems. As one of over 40 federally funded research and development centers sponsored by the United States government, Carnegie Mellon University's Software Engineering Institute (SEI) is working to help the government acquire, design, produce, and evolve software-reliant systems in an affordable and secure manner. The quality, safety, reliability, and security of software and the cyberspace it creates are major concerns for both embedded systems and enterprise systems employed for information processing tasks in health care, homeland security, intelligence, logistics, etc. Cybersecurity risks, a primary focus area of the SEI's CERT Division, regularly appear in news media and have resulted in policy action at the highest levels of the US government (see Report to the President: Immediate Opportunities for Strengthening the Nation's Cybersecurity).

This blog post was co-authored by Eric Werner.

Graph algorithms are in wide use in Department of Defense (DoD) software applications, including intelligence analysis, autonomous systems, cyber intelligence and security, and logistics optimizations. In late 2013, several luminaries from the graph analytics community released a position paper calling for an open effort, now referred to as GraphBLAS, to define a standard for graph algorithms in terms of linear algebraic operations. BLAS stands for Basic Linear Algebra Subprograms and is a common library specification used in scientific computation. The authors of the position paper propose extending the National Institute of Standards and Technology's Sparse Basic Linear Algebra Subprograms (spBLAS) library to perform graph computations. The position paper served as the latest catalyst for the ongoing research by the SEI's Emerging Technology Center in the field of graph algorithms and heterogeneous high-performance computing (HHPC). This blog post, the second in our series, describes our efforts to create a software library of graph algorithms for heterogeneous architectures that will be released via open source.
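The core idea behind GraphBLAS is that graph traversals can be expressed as sparse matrix operations over an appropriate semiring. The sketch below illustrates that idea by performing a breadth-first search as repeated Boolean matrix-vector products; it is plain Python using dense lists for clarity, not the SEI library or the spBLAS extension described in the post.

```python
def bfs_levels(adjacency, source):
    """Breadth-first search expressed as Boolean matrix-vector products.

    adjacency[i][j] is True if there is an edge from vertex j to vertex i.
    Returns the BFS level of every vertex (-1 if unreachable).
    """
    n = len(adjacency)
    levels = [-1] * n
    frontier = [False] * n
    frontier[source] = True
    levels[source] = 0
    level = 0
    while any(frontier):
        level += 1
        # One matrix-vector product over the Boolean (OR, AND) semiring,
        # masked so that already-visited vertices are not revisited.
        next_frontier = [
            any(adjacency[i][j] and frontier[j] for j in range(n))
            and levels[i] == -1
            for i in range(n)
        ]
        for i, reached in enumerate(next_frontier):
            if reached:
                levels[i] = level
        frontier = next_frontier
    return levels

# Edges 0 -> 1 -> 2 and 0 -> 3; vertex 4 is unreachable.
A = [[False] * 5 for _ in range(5)]
for src, dst in [(0, 1), (1, 2), (0, 3)]:
    A[dst][src] = True
print(bfs_levels(A, 0))  # [0, 1, 2, 1, -1]
```

Expressing the traversal this way is what lets an optimized sparse linear-algebra kernel, rather than hand-tuned graph code, carry the computation onto heterogeneous hardware.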

As software continues to grow in size and complexity, software programmers continue to make mistakes during development. These mistakes can result in defects in software products and can cause severe damage when the software goes into production. Through the Personal Software Process (PSP), the Carnegie Mellon University Software Engineering Institute has long advocated incorporating discipline and quantitative measurement into the software engineer's initial development work to detect and eliminate defects before the product is delivered to users. This blog post presents an approach for incorporating formal methods with PSP, in particular, Verified Design by Contract, to reduce the number of defects earlier in the software development lifecycle while preserving or improving productivity.
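As a rough sketch of the Design by Contract style, the routine below states its precondition and postcondition alongside the code so that violations are caught as soon as the unit is exercised. This is a runtime-checked illustration in Python with a hypothetical binary-search example; the Verified Design by Contract approach discussed in the post pairs contracts with formal verification, which this sketch does not attempt.

```python
def binary_search(items, target):
    """Return an index of target in items, or -1 if absent.

    Precondition:  items is sorted in non-decreasing order.
    Postcondition: a returned index actually holds target;
                   -1 is returned only when target is absent.
    """
    assert all(items[i] <= items[i + 1] for i in range(len(items) - 1)), \
        "precondition violated: items must be sorted"

    low, high = 0, len(items) - 1
    result = -1
    while low <= high:
        mid = (low + high) // 2
        if items[mid] == target:
            result = mid
            break
        elif items[mid] < target:
            low = mid + 1
        else:
            high = mid - 1

    assert (result == -1 and target not in items) or items[result] == target, \
        "postcondition violated"
    return result

print(binary_search([1, 3, 5, 7, 9], 7))  # 3
print(binary_search([1, 3, 5, 7, 9], 4))  # -1
```

Writing the contract down at the same time the code is written is what pushes defect detection earlier, which is the same goal PSP pursues through disciplined measurement.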

As part of an ongoing effort to keep you informed about our latest work, I would like to let you know about some recently published SEI technical reports and notes. These reports highlight the latest work of SEI technologists in software assurance, social networking tools, insider threat, and the Security Engineering Risk Analysis Framework (SERA). This post includes a listing of each report, author(s), and links where the published reports can be accessed on the SEI website.

This blog post is the sixth in a series on Agile adoption in regulated settings, such as the Department of Defense, Internal Revenue Service, and Food and Drug Administration.

"Across the government, we've decreased the time it takes across our high-impact investments to deliver functionality by 20 days over the past year alone. That is a big indicator that agencies across the board are adopting agile or agile-like practices," Lisa Schlosser, acting federal chief information officer, said in a November 2014 interview with Federal News Radio. Schlosser based her remarks on data collected by the Office of Management and Budget (OMB) over the last year. In 2010, the OMB issued guidance calling on federal agencies to employ "shorter delivery time frames, an approach consistent with Agile" when developing or acquiring IT. As evidenced by the OMB data, Agile practices can help federal agencies and other organizations design and acquire software more effectively, but they need to understand the risks involved when contemplating the use of Agile. This ongoing series on Readiness & Fit Analysis (RFA) focuses on helping federal agencies and other organizations in regulated settings understand the risks involved when contemplating or embarking on a new approach to developing or acquiring software. Specifically, this blog post, the sixth in a series, explores issues related to system attributes organizations should consider when adopting Agile.

Attacks and disruptions to complex supply chains for information and communications technology (ICT) and services are increasingly gaining attention. Recent incidents, such as the Target breach, the HAVEX series of attacks on the energy infrastructure, and the recently disclosed series of intrusions affecting DoD TRANSCOM contractors, highlight supply chain risk management as a cross-cutting cybersecurity problem. This risk management problem goes by different names, for example, Supply Chain Risk Management (SCRM) or Risk Management for Third Party Relationships. The common challenge, however, is having confidence in the security practices and processes of entities on which an organization relies, when the relationship with those entities may be, at best, an arms-length agreement. This blog post highlights supply chain risks faced by the Department of Defense (DoD), federal civilian agencies, and industry; argues that these problems are more alike than different across these sectors; and introduces practices to help organizations better manage these risks.