Archive: 2014-01

Over the years, software architects and developers have designed many methods and metrics to evaluate software complexity and its impact on quality attributes such as maintainability and performance. Existing studies and experience have shown that highly complex systems are harder to understand, maintain, and upgrade. Managing software complexity is therefore worthwhile, especially for software that must be maintained for many years.
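
To make the idea of a complexity metric concrete, here is a small illustration (mine, not from the post) using cyclomatic complexity, one widely used measure: a function's complexity is its number of decision points plus one.

    #include <stdio.h>

    /* Illustrative only: cyclomatic complexity = decision points + 1.
     * This function has three decision points (if, for, if), so its
     * cyclomatic complexity is 4. Higher values correlate with code
     * that is harder to understand, test, and maintain. */
    int classify_evens(int x)
    {
        if (x < 0) {                      /* decision point 1 */
            return -1;
        }
        for (int i = 0; i < x; i++) {     /* decision point 2 */
            if (i % 2 == 0) {             /* decision point 3 */
                printf("%d is even\n", i);
            }
        }
        return 0;
    }

    int main(void)
    {
        return classify_evens(4);
    }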

Occasionally this blog will highlight different posts from the SEI blogosphere. Today we are highlighting a recent post by Will Dormann, a senior member of the technical staff in the SEI's CERT Division, from the CERT/CC Blog. This post describes a few of the more interesting cases that Dormann has encountered in his work investigating attack vectors for potential vulnerabilities. An attack vector is the method that malicious code uses to propagate itself or infect a computer to deliver a payload or harmful outcome by exploiting system vulnerabilities.

A zero-day vulnerability refers to a software security vulnerability that has been exploited before any patch is published. In the past, vulnerabilities were widely exploited even when a patch was available, meaning they were not zero-day exploits. Today, zero-day vulnerabilities are common. Notorious examples include the recent Stuxnet and Operation Aurora exploits. Vulnerabilities may arise from a variety of sources, but most are the result of simple coding errors. Consequently, developers need to understand the common traps and pitfalls in their programming language, libraries, and platform to produce code that is free of vulnerabilities.
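
As a concrete illustration of the kind of simple coding error the post refers to, consider a hypothetical off-by-one bounds error in C; the bug and its fix differ by a single character:

    #include <stdio.h>

    #define LEN 8

    /* BUG: "i <= LEN" touches indices 0..8, one element past the end
     * of an 8-element array. Errors this small are a common source of
     * exploitable vulnerabilities. */
    void fill_flawed(int *a)
    {
        for (int i = 0; i <= LEN; i++)
            a[i] = 0;
    }

    /* Fixed: "i < LEN" stays within bounds (indices 0..7). */
    void fill_fixed(int *a)
    {
        for (int i = 0; i < LEN; i++)
            a[i] = 0;
    }

    int main(void)
    {
        int a[LEN];
        fill_fixed(a);    /* calling fill_flawed(a) would corrupt memory */
        printf("%d\n", a[0]);
        return 0;
    }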

Software development teams often view software security as an afterthought, something that can be added on after the product is fully functional. Although this approach may have made some sense in the past, today it's largely seen as a mistake since it can lead to unanticipated vulnerabilities in released code. DevOps provides a mechanism for change and enforcement when it comes to security. DevOps practitioners should find it natural to integrate a security focus into development iterations by adding security tests to their continuous integration process. Continuous integration is the practice of merging all development versions of a code base several times a day. This practice provides the same level of automated enforcement for security attributes as for other functional and non-functional attributes, ultimately leading to more secure, robust software systems.

As part of an ongoing effort to keep you informed about our latest work, I would like to let you know about some recently published SEI technical reports and notes. These reports highlight the latest work of SEI technologists in malware analysis, acquisition strategies, network situational awareness, resilience management (with three reports from this research area), incident management, and future architectures. This post includes a listing of each report, author(s), and links where the published reports can be accessed on the SEI website.

Tension and disconnects between software and systems engineering functions are not new. Grady Campbell wrote in 2004 that "systems engineering and software engineering need to overcome a conceptual incompatibility (physical versus informational views of a system)" and that systems engineering decisions can create or contribute to software risk if they "prematurely over-constrain software engineering choices" or "inadequately communicate information, including unknowns and uncertainties, needed for effective software engineering." This tension holds true for Department of Defense (DoD) programs as well, which historically decompose systems from the system level down to subsystem behavior and break down work for the program based on this decomposition. Hardware-focused views are typically deemed not appropriate for software, and some systems engineers (and most systems engineering standards) have not yet adopted an integrated view of the two disciplines. An integrated view is necessary, however, because in complex software-reliant systems, software components often interact with multiple hardware components at different levels of the system architecture. In this blog post, I describe recently published research conducted by me and other members of the SEI's Client Technical Solutions Division highlighting interactions on DoD programs between Agile software-development teams and their systems engineering counterparts in the development of software-reliant systems.

What is DevOps?

In a previous post, we defined DevOps as ensuring collaboration and integration of operations and development teams through the shared goal of delivering business value. Typically, when we envision DevOps implemented in an organization, we imagine a well-oiled machine that automates

  • infrastructure provisioning
  • code testing
  • application deployment

Ultimately, these practices result from applying DevOps methods and tools. DevOps works at any scale, from a team of one to a large enterprise.

Earlier this month, the U.S. Postal Service reported that hackers broke into its computer systems and stole data records associated with 2.9 million customers and 750,000 employees and retirees. In the JP Morgan Chase cyber breach earlier this year, it was reported that hackers stole the personal data of 76 million households as well as information from approximately 8 million small businesses. This breach and other recent thefts of data from Adobe (152 million records), eBay (145 million records), and The Home Depot (56 million records) highlight a fundamental shift in the economic and operational environment, with data at the heart of today's information economy. In this new economy, it is vital for organizations to evolve the manner in which they manage and secure information. Ninety percent of the data that is processed, stored, disseminated, and consumed in the world today was created in the past two years. Organizations are increasingly creating, collecting, and analyzing data on everything (as exemplified in the growth of big data analytics). While this trend produces great benefits to businesses, it introduces new security, safety, and privacy challenges in protecting the data and controlling its appropriate use. In this blog post, I will discuss the challenges that organizations face in this new economy, define the concept of information resilience, and explore the body of knowledge associated with the CERT Resilience Management Model (CERT-RMM) as a means for helping organizations protect and sustain vital information.

Melvin Conway, an eminent computer scientist and programmer, formulated Conway's Law, which states: Organizations that design systems are constrained to produce designs which are copies of the communication structures of these organizations. Thus, a company with frontend, backend, and database teams might lean heavily towards three-tier architectures. The structure of the application developed will be determined, in large part, by the communication structure of the organization developing it. In short, form is a product of communication.

Soldiers in battle or emergency workers responding to a disaster often find themselves in environments with limited computing resources, rapidly changing mission requirements, high levels of stress, and limited connectivity, which are often referred to as "tactical edge environments." These types of scenarios make it hard to use mobile software applications that would be of value to a soldier or emergency personnel, including speech and image recognition, natural language processing, and situational awareness, since these computation-intensive tasks take a heavy toll on a mobile device's battery power and computing resources. As part of the Advanced Mobile Systems Initiative at the Carnegie Mellon University Software Engineering Institute (SEI), my research has focused on cyber foraging, which uses discoverable, forward-deployed servers to extend the capabilities of mobile devices by offloading expensive (battery-draining) computations to more powerful resources that can be accessed in the cloud, or to stage data particular to a mission. This blog post is the latest installment in a series on how my research uses tactical cloudlets as a strategy for providing infrastructure to support computation offload and data staging at the tactical edge.
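
To sketch the core idea of cyber foraging, the toy C program below makes an offload decision based on task cost and battery state. Every name in it (battery_fraction, cloudlet_reachable, offload_to_cloudlet, and the thresholds) is a hypothetical stand-in, not the tactical cloudlet API from the research:

    #include <stdbool.h>
    #include <stdio.h>

    /* Hypothetical stubs standing in for real sensors and discovery. */
    double battery_fraction(void)   { return 0.80; }   /* stubbed battery sensor */
    bool   cloudlet_reachable(void) { return true; }   /* stubbed discovery probe */

    void run_locally(const char *task)         { printf("local:    %s\n", task); }
    void offload_to_cloudlet(const char *task) { printf("cloudlet: %s\n", task); }

    /* Offload when the computation is expensive or battery is low, but
     * only if a forward-deployed cloudlet is discoverable; otherwise
     * fall back to local execution (disconnected operation). */
    void execute(const char *task, double relative_cost)
    {
        if ((relative_cost > 0.5 || battery_fraction() < 0.3)
                && cloudlet_reachable()) {
            offload_to_cloudlet(task);
        } else {
            run_locally(task);
        }
    }

    int main(void)
    {
        execute("image-recognition", 0.9);   /* heavy task: offloaded */
        execute("status-update", 0.05);      /* light task: runs locally */
        return 0;
    }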

A DevOps approach must be specifically tailored to an organization, team, and project to reflect the business needs of the organization and the goals of the project.

Software developers focus on topics such as programming, architecture, and implementation of product features. The operations team, conversely, focuses on hosting, deployment, and system sustainment. All professionals naturally consider their area of expertise first and foremost when discussing a topic. For example, when discussing a new feature a developer may first think "How can I implement that in the existing code base?" whereas an operations engineer may initially consider "How could that affect the load on our servers?"

DevOps is a software development approach that brings development and operations staff (IT) together. The approach unites previously siloed organizations that tend to cooperate only when their interests converge, resulting in an inefficient and expensive struggle to release a product. DevOps is exactly what the founders of the Agile Manifesto envisioned: a nimble, streamlined process for developing and deploying software while continuously integrating feedback and new requirements. Since 2011, the number of organizations adopting DevOps has increased by 26 percent. According to recent research, those organizations adopting DevOps ship code 30 times faster. Despite its obvious benefits, I still encounter many organizations that hesitate to embrace DevOps. In this blog post, I am introducing a new series that will offer weekly guidelines and practical advice to organizations seeking to adopt the DevOps approach.

In an era of sequestration and austerity, the federal government is seeking software reuse strategies that will allow it to move away from stove-piped development toward open, reusable architectures. The government is also motivated to explore reusable architectures for purposes beyond fiscal constraints: to leverage existing technology, curtail wasted effort, and increase capabilities rather than reinventing them. An open architecture in a software system adopts open standards that support a modular, loosely coupled, and highly cohesive system structure that includes the publication of key interfaces within the system and full design disclosure.

When we verify a software program, we increase our confidence in its trustworthiness. We can be confident that the program will behave as it should and meet the requirements it was designed to fulfill. Verification is an ongoing process because software continuously undergoes change. While software is being created, developers upgrade and patch it, add new features, and fix known bugs. When software is being compiled, it evolves from program language statements to executable code. Even during runtime, software is transformed by just-in-time compilation. Following every such transformation, we need assurance that the change has not altered program behavior in some unintended way and that important correctness and security properties are preserved. The need to re-verify a program after every change presents a major challenge to practitioners--one that is central to our research. This blog post describes solutions that we are exploring to address that challenge and to raise the level of trust that verification provides.

With the rise of multi-core processors, concurrency has become increasingly common. The broader use of concurrency, however, has been accompanied by new challenges for programmers, who struggle to avoid race conditions and other concurrent memory access hazards when writing multi-threaded programs. The problem with concurrency is that many programmers have been trained to think sequentially, so when multiple threads execute concurrently, they struggle to visualize those threads executing in parallel. When two threads attempt to access the same unprotected region of memory concurrently (one reading, one writing), logical inconsistencies can arise in the program, which can yield security concerns that are hard to detect.
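
A minimal pthreads sketch of the hazard described above (my illustration, not code from the post): two threads increment a shared counter, first without protection and then with a mutex:

    #include <pthread.h>
    #include <stdio.h>

    #define ITERATIONS 100000

    long counter = 0;
    pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    /* Data race: counter++ is a non-atomic read-modify-write, so
     * concurrent increments can be lost. */
    void *increment_unsafe(void *arg)
    {
        (void)arg;
        for (int i = 0; i < ITERATIONS; i++)
            counter++;
        return NULL;
    }

    /* Protected: the mutex serializes access to the shared counter. */
    void *increment_safe(void *arg)
    {
        (void)arg;
        for (int i = 0; i < ITERATIONS; i++) {
            pthread_mutex_lock(&lock);
            counter++;
            pthread_mutex_unlock(&lock);
        }
        return NULL;
    }

    static void run_pair(void *(*fn)(void *), const char *label)
    {
        pthread_t t1, t2;
        counter = 0;
        pthread_create(&t1, NULL, fn, NULL);
        pthread_create(&t2, NULL, fn, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("%s: counter = %ld (expected %d)\n", label, counter, 2 * ITERATIONS);
    }

    int main(void)
    {
        run_pair(increment_unsafe, "unsafe");  /* often prints < 200000 */
        run_pair(increment_safe,   "safe");    /* always prints 200000  */
        return 0;
    }

Built with a command like cc -pthread race.c, the unsafe pair typically loses updates while the safe pair never does, which is exactly the kind of intermittent inconsistency that makes these defects hard to detect.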

Given that up to 70 percent of system errors are introduced during the design phase, stakeholders need a modeling language that will ensure both requirements enforcement during the development process and the correct implementation of these requirements. Previous work demonstrates that using the Architecture Analysis & Design Language (AADL) early in the development process not only helps detect design errors before implementation, but also supports implementation efforts and produces high-quality code. Our latest blog posts and a recent webinar have shown how AADL can identify potential design errors and avoid propagating them through the development process. Verified specifications, however, are still implemented manually.

Insider threat is the threat to an organization's critical assets posed by trusted individuals--including employees, contractors, and business partners--authorized to use the organization's information technology systems. Insider threat programs within an organization help to manage the risks due to these threats through specific prevention, detection, and response practices and technologies. Proposed changes to the National Industrial Security Program Operating Manual (NISPOM), which provides baseline standards for the protection of classified information, would require contractors that process or access classified information for federal agencies to establish insider threat programs.

More and more, suppliers of software-reliant Department of Defense (DoD) systems are moving away from traditional waterfall development practices in favor of agile methods. As described in previous posts on this blog, agile methods are effective for shortening delivery cycles and managing costs. If the benefits of agile are to be realized effectively for the DoD, however, personnel responsible for overseeing software acquisitions must be fluent in metrics used to monitor these programs. This blog post highlights the results of an effort by researchers at the Carnegie Mellon University Software Engineering Institute to create a reference for personnel who oversee software development acquisition for major systems built by developers applying agile methods. This post also presents seven categories for tracking agile metrics.

As recent news attests, the rise of sociotechnical ecosystems (STEs)--which we define as software systems that engage a large and geographically distributed community in a shared pursuit--allows us to work in a mind space and a data space that extend beyond anything that we could have imagined 20 or 30 years ago. STEs present opportunities for tackling problems that could not even be approached previously because the needed experts and data are spread across multiple locations and distances.

Continuous delivery practices, popularized in Jez Humble's 2010 book Continuous Delivery, enable rapid and reliable software system deployment by emphasizing the need for automated testing and building, as well as closer cooperation between developers and delivery teams. As part of the Carnegie Mellon University Software Engineering Institute's (SEI) focus on Agile software development, we have been researching ways to incorporate quality attributes into the short iterations common to Agile development.

As part of an ongoing effort to keep you informed about our latest work, I would like to let you know about some recently published SEI technical reports and notes. These reports highlight the latest work of SEI technologists in assuring software reliability, future architectures, Agile software teams, insider threat, and HTML5. This post includes a listing of each report, author(s), and links where the published reports can be accessed on the SEI website.

According to a 2013 report examining 25 years of vulnerabilities (from 1998 to 2012), buffer overflow causes 14 percent of software security vulnerabilities and 35 percent of critical vulnerabilities, making it the leading cause of software security vulnerabilities overall. As of July 2014, the TIOBE index indicates that the C programming language, which is the language most commonly associated with buffer overflows, is the most popular language with 17.1 percent of the market. Embedded systems, network stacks, networked applications, and high-performance computing rely heavily upon C. Embedded systems can be especially vulnerable to buffer overflows because many of them lack hardware memory management units. This blog post describes my research on the Secure Coding Initiative in the CERT Division of the Carnegie Mellon University Software Engineering Institute to create automated buffer overflow prevention.
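
For readers unfamiliar with the bug class, here is a minimal illustration (mine, not the CERT research code) of a stack buffer overflow in C, alongside a bounded alternative:

    #include <stdio.h>
    #include <string.h>

    /* BUG: strcpy performs no bounds check, so any input longer than
     * 15 characters overflows buf and corrupts adjacent stack memory --
     * the classic exploitable buffer overflow. */
    void greet_vulnerable(const char *name)
    {
        char buf[16];
        strcpy(buf, name);
        printf("hello, %s\n", buf);
    }

    /* Safer: snprintf truncates to the buffer size and always
     * null-terminates. */
    void greet_safer(const char *name)
    {
        char buf[16];
        snprintf(buf, sizeof buf, "%s", name);
        printf("hello, %s\n", buf);
    }

    int main(void)
    {
        greet_safer("a name far longer than sixteen characters");
        greet_vulnerable("short");   /* safe here only because input is short */
        return 0;
    }

Compiler mitigations such as -fstack-protector can catch some overflows at runtime, but bounded APIs remove the hazard at its source, which is why automated prevention is attractive.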

In today's systems it's very hard to know where systems end and software begins. Software performs an integrating function in many systems, often serving as the glue interconnecting other system elements. We also find that many of the problems in software systems have their roots in systems engineering, which is an interdisciplinary field that focuses on how to design and manage complex systems over their life cycles. For that reason, staff at the Carnegie Mellon University Software Engineering Institute (SEI) often conduct research in the systems engineering realm. Process frameworks, architecture development and evaluation methods, and metrics developed for software are routinely adapted and applied to systems. Better systems engineering supports better software development, and both support better acquisition project performance. This blog post, the latest in a series on this research, analyzes project performance based on systems engineering activities in the defense and non-defense industries.

The term big data is a subject of much hype in both government and business today. Big data is variously the cause of all existing system problems and, simultaneously, the savior that will lead us to the innovative solutions and business insights of tomorrow. All this hype fuels predictions such as the one from IDC that the market for big data will reach $16.1 billion in 2014, growing six times faster than the overall information technology market, despite the fact that the "benefits of big data are not always clear today," according to IDC. From a software-engineering perspective, however, the challenges of big data are very clear, since they are driven by ever-increasing system scale and complexity. This blog post, a continuation of my last post on the four principles of building big data systems, describes how we must address one of these challenges, namely, you can't manage what you don't monitor.

Organizations are continually fending off cyberattacks in one form or another. The 2014 Verizon Data Breach Investigations Report, which included contributions from SEI researchers, tagged 2013 as "the year of the retailer breach." According to the report, 2013 also witnessed "a transition from geopolitical attacks to large-scale attacks on payment card systems." To illustrate the trend, the report outlines a 12-month chronology of attacks, including a January "watering hole" attack on the Council on Foreign Relations website followed in February by targeted cyber-espionage attacks against The New York Times and The Wall Street Journal. The well-documented Target breach brought 2013 to a close with the theft of more than 40 million debit and credit card numbers. This blog post highlights a recent research effort to create a taxonomy that provides organizations a common language and set of terminology they can use to discuss, document, and mitigate operational cybersecurity risks.

The role of software within systems has fundamentally changed over the past 50 years. Software's role has changed both on mission-critical DoD systems, such as fighter aircraft and surveillance equipment, and on commercial products, such as telephones and cars. Software has become not only the brain of most systems, but the backbone of their functionality. Acquisition processes must acknowledge this new reality and adapt. This blog posting, the second in a series about the relationship of software engineering (SwE) and systems engineering (SysE), shows how software technologies have come to dominate what formerly were hardware-based systems. This posting describes a case study: the story of software on satellites, whose lessons can be applied to many other kinds of software-reliant systems.

Many warfighters and first responders operate at what we call "the tactical edge," where users are constrained by limited communication connectivity, storage availability, processing power, and battery life. In these environments, onboard sensors are used to capture data on behalf of mobile applications to perform tasks such as face recognition, speech recognition, natural language translation, and situational awareness. These applications then rely on network interfaces to send the data to nearby servers or the cloud if local processing resources are inadequate. While software developers have traditionally used native mobile technologies to develop these applications, the approach has some drawbacks, such as limited portability. In contrast, HTML5 has been touted for its portability across mobile device platforms, as well as the ability to access functionality without having to download and install applications. This blog post describes research aimed at evaluating the feasibility of using HTML5 to develop applications that can meet tactical edge requirements.

In earlier posts on big data, I have written about how long-held design approaches for software systems simply don't work as we build larger, scalable big data systems. Examples of design factors that must be addressed for success at scale include the need to handle the ever-present failures that occur at scale, assure the necessary levels of availability and responsiveness, and devise optimizations that drive down costs. Of course, the required application functionality and engineering constraints, such as schedule and budgets, directly impact the manner in which these factors manifest themselves in any specific big data system. In this post, the latest in my ongoing series on big data, I step back from specifics and describe four general principles that hold for any scalable, big data system. These principles can help architects continually validate major design decisions across development iterations, and hence provide a guide through the complex collection of design trade-offs all big data systems require.

As part of an ongoing effort to keep you informed about our latest work, I would like to let you know about some recently published SEI technical reports and notes. These reports highlight the latest work of SEI technologists in secure coding, CERT Resilience Management Model, malicious-code reverse engineering, systems engineering, and incident management. This post includes a listing of each report, author(s), and links where the published reports can be accessed on the SEI website.

In the first half of this year, the SEI blog has experienced unprecedented growth, with visitors in record numbers learning more about our work in big data, secure coding for Android, malware analysis, Heartbleed, and V Models for Testing. In the first six months of 2014 (through June 20), the SEI blog has logged 60,240 visits, which approaches the entire 2013 total of 66,757 visits. As we reach the mid-year point, this blog posting takes a look back at our most popular areas of work (at least according to you, our readers) and highlights our most popular blog posts for the first half of 2014, as well as links to additional related resources that readers might find of interest.

Federal agencies depend on IT to support their missions and spent at least $76 billion on IT in fiscal year 2011, according to a report from the Government Accountability Office (GAO). The catalyst for the study was congressional concern over prior IT expenditures that produced disappointing results, including multimillion dollar cost overruns and schedule delays measured in years, with questionable mission-related achievements. The Office of Management and Budget (OMB) in 2010 issued guidance that advocates that federal agencies employ "shorter delivery time frames, an approach consistent with Agile." This ongoing series on the Readiness & Fit Analysis (RFA) approach focuses on helping federal agencies and other organizations understand the risks involved when contemplating or embarking on the adoption of new practices, such as Agile methods. This blog posting, the fifth in this series, explores the Practices category, which helps organizations understand which Agile practices are already in use to formulate a more effective adoption strategy.

Introducing new software languages, tools, and methods in industrial and production environments poses a number of challenges. Among other necessary changes, practices must be updated, and engineers must learn new methods and tools. These updates incur additional costs, so transitioning to a new technology must be carefully evaluated and discussed. Also, the impact and associated costs of introducing a new technology vary significantly by type of project, team size, engineers' backgrounds, and other factors, making it hard to estimate the real costs of adoption. A previous post in our ongoing series on the Architecture Analysis and Design Language (AADL) described the use of AADL in research projects (such as System Architectural Virtual Integration (SAVI)) in which experienced researchers explored the language capabilities to capture and analyze safety-critical systems from different perspectives. These successful projects have demonstrated the accuracy of AADL as a modeling notation. This blog post presents research conducted independently of the SEI that aims to evaluate the safety concerns of several unmanned aerial vehicle (UAV) systems using AADL and the SEI safety analysis tools implemented in OSATE.

The Wireless Emergency Alerts (WEA) service went online in April 2012, giving emergency management agencies such as the National Weather Service or a city's hazardous materials team a way to send messages to mobile phone users located in a geographic area in the event of an emergency. Since the launch of the WEA service, the newest addition to the Federal Emergency Management Agency (FEMA) Integrated Public Alert and Warning System (IPAWS), "trust" has emerged as a key issue for all involved. Alert originators at emergency management agencies must trust WEA to deliver alerts to the public in an accurate and timely manner. The public must also trust the WEA service before it will act on the alerts. Managing trust in WEA is a responsibility shared among the many stakeholders engaged with the service. This blog post, the first in a series, highlights recent research aimed at enhancing both the trust of alert originators in the WEA service and the public's trust in the alerts it receives.

To maintain a competitive edge, software organizations should be early adopters of innovation. To achieve this edge, organizations from Flickr and IBM to small tech startups are increasingly adopting an environment of deep collaboration between development and operations (DevOps) teams and technologies, which historically have been two disjointed groups responsible for information technology development. "The value of DevOps can be illustrated as an innovation and delivery lifecycle, with a continuous feedback loop to learn and respond to customer needs," Ashok Reddy writes in the technical white paper, DevOps: The IBM approach.

The Government Accountability Office (GAO) recently reported that acquisition program costs typically run 26 percent over budget, with development costs exceeding initial estimates by 40 percent. Moreover, many programs fail to deliver capabilities when promised, experiencing a 21-month delay on average. The report attributes the "optimistic assumptions about system requirements, technology, and design maturity [that] play a large part in these failures" to a lack of disciplined systems engineering analysis early in the program. What acquisition managers do not always realize is the importance of focusing on software engineering during the early systems engineering effort. Improving on this collaboration is difficult partly because both disciplines appear in a variety of roles and practices. This post, the first in a series, addresses the interaction between systems and software engineering by identifying the similarities and differences between the two disciplines and describing the benefits both could realize through a more collaborative approach.

The Heartbleed bug, a serious vulnerability in the OpenSSL cryptographic software library, enables attackers to steal information that, under normal conditions, is protected by the Secure Socket Layer/Transport Layer Security (SSL/TLS) encryption used to secure the internet. Heartbleed and its aftermath left many questions in its wake:

  • Would the vulnerability have been detected by static analysis tools?
  • If the vulnerability had been in the wild for two years, why did it take so long to come to public attention?
  • Who is ultimately responsible for open-source code reviews and testing?
  • Is there anything we can do to work around Heartbleed to provide security for web-based banking and email applications?
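
While the post examines those questions, the underlying flaw class is easy to sketch. The C below is a simplified illustration of trusting an attacker-supplied length field, which is the shape of the Heartbleed bug; it is not OpenSSL's actual heartbeat code:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Simplified stand-in for a heartbeat request. */
    typedef struct {
        size_t claimed_len;            /* length field from the request */
        size_t actual_len;             /* bytes actually received */
        unsigned char payload[64];
    } heartbeat;

    /* BUG: echoes back claimed_len bytes without checking how many bytes
     * really arrived, leaking adjacent memory when claimed_len exceeds
     * actual_len -- the Heartbleed pattern. */
    unsigned char *respond_vulnerable(const heartbeat *hb)
    {
        unsigned char *resp = malloc(hb->claimed_len);
        if (resp != NULL)
            memcpy(resp, hb->payload, hb->claimed_len);
        return resp;
    }

    /* Fixed: reject any request whose claimed length exceeds the data
     * actually received (and the buffer that holds it). */
    unsigned char *respond_fixed(const heartbeat *hb)
    {
        if (hb->claimed_len > hb->actual_len
                || hb->claimed_len > sizeof hb->payload)
            return NULL;
        unsigned char *resp = malloc(hb->claimed_len);
        if (resp != NULL)
            memcpy(resp, hb->payload, hb->claimed_len);
        return resp;
    }

    int main(void)
    {
        heartbeat hb = { .claimed_len = 4096, .actual_len = 4, .payload = "ping" };
        printf("fixed handler %s the oversized request\n",
               respond_fixed(&hb) == NULL ? "rejects" : "accepts");
        return 0;
    }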

Software developers produce more than 100 billion lines of code for commercial systems each year. Even with automated testing tools, errors still occur at a rate of one error for every 10,000 lines of code. While many coding standards address code style issues (i.e., style guides), CERT secure coding standards focus on identifying unsafe, unreliable, and insecure coding practices, such as those that resulted in the Heartbleed vulnerability. For more than 10 years, the CERT Secure Coding Initiative at the Carnegie Mellon University Software Engineering Institute has been working to develop guidance--most recently, The CERT C Secure Coding Standard: Second Edition--for developers and programmers through the development of coding standards by security researchers, language experts, and software developers using a wiki-based community process. This blog post explores the importance of a well-documented and enforceable coding standard in helping programmers circumvent pitfalls and avoid vulnerabilities.
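
To make the distinction between style guidance and security guidance concrete, the sketch below paraphrases the canonical bounded-input example associated with the CERT C string rules (consult the standard itself for the authoritative wording):

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        char name[32];

        /* Noncompliant (and removed from the language in C11):
         * gets() has no way to limit input length, so any long
         * line overflows name.
         *
         *     gets(name);
         */

        /* Compliant: fgets() bounds the read to the buffer size. */
        if (fgets(name, sizeof name, stdin) != NULL) {
            name[strcspn(name, "\n")] = '\0';   /* strip the newline */
            printf("hello, %s\n", name);
        }
        return 0;
    }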

This blog post is co-authored by Lori Flynn.

Although the Android Operating System continues to dominate the mobile device market (82 percent of worldwide market share in the third quarter of 2013), applications developed for Android have faced some challenging security issues. For example, applications developed for the Android platform continue to struggle with vulnerabilities such as activity hijacking, which occurs when a malicious app receives a message (in particular, an intent) that was intended for another app but not explicitly designated for it. The attack can result in leakage of sensitive data or loss of secure control of the affected apps. Another class of vulnerability arises when sensitive information flows from a sensitive source to a restricted sink. This blog post is the second in a series that details our work to develop techniques and tools for analyzing code for mobile computing platforms. (A previous blog post, Secure Coding for the Android Platform, describes our team's development of Android rules and guidelines.)

Every day, analysts at major anti-virus companies and research organizations are inundated with new malware samples. From Flame to lesser-known strains, figures indicate that the number of malware samples released each day continues to rise. In 2011, malware authors unleashed approximately 70,000 new strains per day, according to figures reported by Eugene Kaspersky. The following year, McAfee reported that 100,000 new strains of malware were unleashed each day. An article published in the October 2013 issue of IEEE Spectrum updated that figure to approximately 150,000 new malware strains per day. Not enough manpower exists to manually address the sheer volume of new malware samples that arrive daily in analysts' queues. In our work here at CERT, we felt that analysts needed an approach that would allow them to identify and focus first on the most destructive binary files. This blog post is a follow-up to my earlier post entitled Prioritizing Malware Analysis. In this post, we describe the results of the research I conducted with fellow researchers at the Carnegie Mellon University (CMU) Software Engineering Institute (SEI) and CMU's Robotics Institute, which demonstrated (with 98 percent accuracy) the validity of our approach for helping analysts distinguish malicious binary files from benign ones.

In October 2010, two packages from Yemen containing explosives were discovered on U.S.-bound cargo planes of two of the largest worldwide shipping companies, UPS and FedEx, according to reports by CNN and the Wall Street Journal. The discovery highlighted a long-standing problem--securing international cargo--and ushered in a new area of concern for such entities as the United States Postal Inspection Service (USPIS) and the Universal Postal Union (UPU), a specialized agency of the United Nations that regulates the postal services of 192 member countries. In early 2012, the UPU and several stakeholder organizations developed two standards to improve security in the transport of international mail and to improve the security of critical postal facilities. As with any new set of standards, however, a mechanism was needed to enable implementation of the standards and measure compliance to them. This blog post describes the method developed by researchers in the CERT Division at Carnegie Mellon University's Software Engineering Institute, in conjunction with the USPIS, to identify gaps in the security of international mail processing centers and similar shipping and transportation processing facilities.

The Architecture Analysis and Design Language (AADL) is a modeling language that, at its core, allows designers to specify the structure of a system (components and connections) and analyze its architecture. From a security point of view, for example, we can use AADL to verify that a high-security component does not communicate with a low-security component and, thus, ensure that one type of security leak is prevented by the architecture. The ability to capture the behavior of a component allows for even better use of the model. This blog post describes a tool developed to support the AADL Behavior Annex and allow architects to import behavior from Simulink (or potentially any other notation) into an architecture model.

Social engineering involves the manipulation of individuals to get them to unwittingly perform actions that cause harm or increase the probability of causing future harm, which we call "unintentional insider threat." This blog post highlights recent research that aims to add to the body of knowledge about the factors that lead to unintentional insider threat (UIT) and about how organizations in industry and government can protect themselves.

Although the CERT Secure Coding team has developed secure coding rules and guidelines for Java, prior to 2013 we had not developed a set of secure coding rules that were specific to Java's application in the Android platform. Android is an important area to focus on, given its mobile device market dominance (82 percent of worldwide market share in the third quarter of 2013) as well as the adoption of Android by the Department of Defense. This blog post, the first in a series, discusses the initial development of our Android rules and guidelines. This initial development included mapping our existing Java secure coding rules and guidelines to the Android platform and creating new Android-only rules for Java secure coding.

The Better Buying Power 2.0 initiative is a concerted effort by the United States Department of Defense to achieve greater efficiencies in the development, sustainment, and recompetition of major defense acquisition programs through cost control, elimination of unproductive processes and bureaucracy, and promotion of open competition. This SEI blog posting describes how the Navy is operationalizing Better Buying Power in the context of its Open Systems Architecture and Business Innovation initiatives.

According to a report issued by the Government Accountability Office (GAO) in February 2013, the number of cybersecurity incidents reported that could impact "federal and military operations; critical infrastructure; and the confidentiality, integrity, and availability of sensitive government, private sector, and personal information" has increased by 782 percent--from 5,503 in 2006 to 48,562 in 2012. In that report, GAO also stated that while there has been incremental progress in coordinating the federal response to cyber incidents, "challenges remain in sharing information among federal agencies and key private sector entities, including critical infrastructure owners."

In 2012, the White House released its federal digital strategy. What's noteworthy about this release is that the executive office distributed the strategy using Bootstrap, an open source software (OSS) tool developed by Twitter and made freely available to the public via the code hosting site GitHub. This is not the only evidence that we have seen of increased government interest in OSS adoption. Indeed, the 2013 report The Future of Open Source Software revealed that 34 percent of its respondents were government entities using OSS products.

Although software is increasingly important to the success of government programs, there is often little consideration given to its impact on early key program decisions. The Carnegie Mellon University Software Engineering Institute (SEI) is conducting a multi-phase research initiative aimed at answering the question: is the probability of a program's success improved through deliberately producing a program acquisition strategy and software architecture that are mutually constrained and aligned?

Code clones are implementation patterns transferred from program to program via copy mechanisms, including cut-and-paste, copy-and-paste, and code reuse. There has been significant debate about the value of code cloning as a software engineering practice. In its most basic form, code cloning may involve a codelet (a snippet of code) that undergoes various forms of evolution, such as slight modification in response to problems. Software reuse quickens the production cycle for augmented functions and data structures. So, if a programmer copies a codelet from one file into another with slight augmentations, a new clone has been created, stemming from a founder codelet. Events like these constitute the provenance, or historical record, of all events affecting a codelet object. This blog posting describes exploratory research that aims to understand the evolution of source and machine code and, eventually, create a model that can recover relationships between codelets, files, or executables where the provenance is not known.
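
A toy C illustration (hypothetical, not drawn from the research) of a founder codelet and a clone created by copy-paste with slight augmentation:

    /* Founder codelet: sums an integer array. */
    int sum(const int *a, int n)
    {
        int total = 0;
        for (int i = 0; i < n; i++)
            total += a[i];
        return total;
    }

    /* Clone: copied from sum() and slightly augmented with weights.
     * A provenance model would aim to recover the clone -> founder
     * relationship even when the copy event was never recorded. */
    int weighted_sum(const int *a, const int *w, int n)
    {
        int total = 0;
        for (int i = 0; i < n; i++)
            total += a[i] * w[i];   /* the augmentation */
        return total;
    }

    int main(void)
    {
        int a[] = {1, 2, 3};
        int w[] = {2, 2, 2};
        return sum(a, 3) + weighted_sum(a, w, 3) == 18 ? 0 : 1;
    }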

As part of an ongoing effort to keep you informed about our latest work, I would like to let you know about some recently published SEI technical reports and notes. These reports highlight the latest work of SEI technologists in systems of systems integration from an architectural perspective, unintentional insider threat that derives from social engineering, identifying physical security gaps in international mail processing centers and similar facilities, countermeasures used by cloud service providers, the Team Software Process (TSP), and key automation and analysis techniques. This post includes a listing of each report, author(s), and links where the published reports can be accessed on the SEI website.

The process of designing and analyzing software architectures is complex. Architectural design is a minimally constrained search through a vast multi-dimensional space of possibilities. The end result is that architects are seldom confident that they have done the job optimally, or even satisfactorily. Over the past two decades, practitioners and researchers have used architectural patterns to expedite sound software design. Architectural patterns are prepackaged chunks of design that provide proven structural solutions for achieving particular software system quality attributes, such as scalability or modifiability. While use of patterns has simplified the architectural design process somewhat, key challenges remain. This blog post explores these challenges and our solutions for achieving system security qualities through use of patterns.

Many types of software systems, including big data applications, lend themselves to highly incremental and iterative development approaches. In essence, system requirements are addressed in small batches, enabling the delivery of functional releases of the system at the end of every increment, typically once a month. The advantages of this approach are many and varied. Perhaps foremost is the fact that it constantly forces the validation of requirements and designs before too much progress is made in inappropriate directions. Ambiguity and change in requirements, as well as uncertainty in design approaches, can be rapidly explored through working software systems, not simply models and documents. Necessary modifications can be carried out efficiently and cost-effectively through refactoring before code becomes too 'baked' and complex to easily change. This posting, the second in a series addressing the software engineering challenges of big data, explores how the nature of building highly scalable, long-lived big data applications influences iterative and incremental design approaches.

Software used in safety-critical systems--such as automotive, avionics, and healthcare applications, where failures could result in serious harm or loss of life--must perform as prescribed. How can software developers and programmers offer assurance that the system will perform as needed and with what level of confidence? In the first post in this series, I introduced the concept of the assurance case as a means to justify safety, security, or reliability claims by relating evidence to the claim via an argument. In this post I will discuss Baconian probability and eliminative induction, concepts we use to explore how much confidence one can have that an assurance case adequately justifies its claim about the subject system.
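
As a rough sketch of the notation (my summary; see the SEI publications on eliminative induction for the formal treatment), Baconian probability counts eliminated doubts rather than forming a ratio:

    \documentclass{article}
    \begin{document}
    % Sketch of Baconian confidence; consult the SEI assurance-case
    % publications for the authoritative treatment.
    Suppose review of an assurance case identifies $n$ defeaters
    (reasons the claim might be false), of which $i$ have been
    eliminated by evidence or argument. Baconian confidence in the
    claim is written
    \[
      i \mid n, \qquad 0 \le i \le n,
    \]
    read ``$i$ out of $n$'' rather than as the fraction $i/n$:
    eliminating $7$ of $10$ defeaters ($7 \mid 10$) and $70$ of $100$
    ($70 \mid 100$) leave different residual doubt, because the second
    reflects a far more thorough search for ways the claim could fail.
    Confidence grows only by eliminating defeaters; $n \mid n$ denotes
    that every identified doubt has been removed.
    \end{document}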