Category: Testing

In a previous post, I addressed the testing challenges posed by non-deterministic systems and software, such as the fact that the same test can yield different results when repeated. While there is no single panacea for these challenges, this blog posting describes a number of measures that have proved useful when testing non-deterministic systems.

Many system and software developers and testers, especially those who have worked primarily in business information systems, assume that systems--even buggy systems--behave deterministically. In other words, they assume that a system or software application will always behave in exactly the same way when given identical inputs under identical conditions. This assumption, however, is not always true. Although it fails most often with cyber-physical systems, both new and older technologies have introduced various sources of non-determinism, with significant ramifications for testing. This blog post, the first in a series, explores the challenges of testing in a non-deterministic world.
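
One common source of non-deterministic test results is randomness or timing variation in the system under test. The sketch below is only an illustration (it is not drawn from the posts above) and uses a hypothetical sample_latency_ms function standing in for such a system. It shows two widely used mitigations: pinning the source of randomness so that failures become reproducible, and asserting against a statistical bound over many samples rather than a single exact value.

    import random
    import unittest

    def sample_latency_ms(rng: random.Random) -> float:
        """Hypothetical system under test: observed latency varies run to run."""
        return 20.0 + rng.gauss(0.0, 5.0)

    class LatencyTest(unittest.TestCase):
        def test_latency_with_fixed_seed(self):
            # Pin the seed so the otherwise non-deterministic behavior is
            # repeatable, making any failure reproducible and diagnosable.
            rng = random.Random(42)
            self.assertLess(sample_latency_ms(rng), 40.0)

        def test_latency_statistically(self):
            # When the seed cannot be controlled, assert over many samples
            # against a statistical bound rather than an exact value.
            rng = random.Random()
            samples = [sample_latency_ms(rng) for _ in range(1000)]
            self.assertLess(sum(samples) / len(samples), 30.0)

    if __name__ == "__main__":
        unittest.main()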

In 2015, the SEI blog launched a redesigned platform to make browsing easier and our content areas more accessible and easier to navigate. The SEI blog audience also continued to grow, with an ever-increasing number of visitors learning more about our research in technical debt, shift-left testing, graph analytics, DevOps, secure coding, and malware analysis. In 2015 (from January 1 through December 15), the SEI blog logged 159,604 visits and sessions (we also switched analytics platforms mid-year), a 26 percent increase in traffic from the previous year. This blog post highlights the top 10 posts published in 2015. As we did with our mid-year review, we will include links to additional related resources that readers might find of interest. We also will present the posts in descending order, beginning with the 10th most popular post of 2015 and counting down to number one.

10. Ten Recommended Practices for Achieving Agile at Scale
9. Open System Architectures: When and Where to Be Closed
8. A Taxonomy of Testing
7. Is Java More Secure Than C?
6. Managing Software Complexity in Models
5. The Pharos Framework: Binary Static Analysis of Object Oriented Code
4. Developing a Software Library for Graph Analytics
3. Four Types of Shift-Left Testing
2. DevOps Technologies: Fabric or Ansible
1. A Field Study of Technical Debt

By Donald Firesmith
Principal Engineer
Software Solutions Division

There are more than 200 different types of testing, and many stakeholders in testing--including the testers themselves and test managers--are often largely unaware of them or do not know how to perform them. Similarly, test planning frequently overlooks important types of testing. The primary goal of this series of blog posts is to raise awareness of the large number of test types, to verify adequate completeness of test planning, and to better guide the testing process. In the previous blog entry in this series, I introduced a taxonomy of testing in which 15 subtypes of testing were organized around how they addressed the classic 5W+2H questions: what, when, why, who, where, how, and how well. This and future postings in this series will cover each of these seven categories of testing, thereby providing structure to roughly 200 types of testing currently used to test software-reliant systems and software applications.

This blog entry covers the subtypes of testing that answer the following two questions:

  • What-based testing: What is being tested?
    • Object-Under-Test-based (OUT) testing
    • Domain-based testing
  • When-based testing: When is the testing being performed?
    • Order-based testing
    • Lifecycle-based testing
    • Phase-based testing
    • Built-in-Test (BIT) testing

After exploring the what-based and when-based categories of testing, this post presents a section on using the taxonomy, as well as opportunities for accessing it.
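
To make the test-planning completeness check mentioned above concrete, here is a minimal sketch of how the taxonomy's 5W+2H questions could be represented and compared against the test types named in a plan. The names and structure (TAXONOMY, uncovered_questions, and the subtype strings) are hypothetical illustrations, not artifacts of the published taxonomy.

    # Minimal sketch (hypothetical names): represent the taxonomy's 5W+2H
    # questions and flag those that a test plan does not address at all.
    TAXONOMY = {
        "what": ["object-under-test-based", "domain-based"],
        "when": ["order-based", "lifecycle-based", "phase-based", "built-in-test"],
        "why": [], "who": [], "where": [], "how": [], "how well": [],
    }

    def uncovered_questions(planned_types: set) -> list:
        """Return the top-level questions for which the plan names no test type."""
        return [question for question, subtypes in TAXONOMY.items()
                if not planned_types.intersection(subtypes)]

    # Example: a plan that names only domain-based and phase-based testing.
    print(uncovered_questions({"domain-based", "phase-based"}))
    # -> ['why', 'who', 'where', 'how', 'how well']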

By Donald Firesmith
Principal Engineer
Software Solutions Division

While evaluating the test programs of numerous defense contractors, we have often observed that they are quite incomplete. For example, they typically fail to address all the relevant types of testing that should be used to (1) uncover defects, (2) provide evidence concerning the quality and maturity of the system or software under test, and (3) demonstrate the readiness of the system or software for acceptance and being placed into operation. Instead, many test programs address only a relatively small subset of the potentially relevant types of testing, such as unit testing, integration testing, system testing, and acceptance testing. In some cases, the missing testing types are actually performed (to some extent) but not addressed in test-related planning documents, such as test strategies, system and software test plans (STPs), and the testing sections of systems engineering management plans (SEMPs) and software development plans (SDPs). In many cases, however, they are neither mentioned nor performed. This blog post, the first in a series on the many types of testing, examines the negative consequences of not addressing all relevant testing types and introduces a taxonomy of testing types to help testing stakeholders understand--rather than overlook--them.

The SEI Blog continues to attract an ever-increasing number of readers interested in learning more about our work in agile metrics, high-performance computing, malware analysis, testing, and other topics. As we reach the mid-year point, this blog posting highlights our 10 most popular posts and links to additional related resources you might find of interest. (Many of our posts cover related research areas, so we grouped them together for ease of reference.)

Before we take a deeper dive into the posts, let's take a look at the top 10 posts, ordered by number of visits, with #1 having the highest number of visits.

One of the most important and widely discussed trends within the software testing community is shift-left testing, which simply means beginning testing as early as practical in the lifecycle. What is less widely known, both inside and outside the testing community, is that testers can employ four fundamentally different approaches to shift testing to the left. Unfortunately, different people commonly use the generic term shift left to mean different approaches, which can lead to serious misunderstandings. This blog post explains the importance of shift-left testing and defines each of these four approaches, using variants of the classic V model to illustrate them.
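
As one concrete illustration (not taken from the post itself), whichever of the four approaches a team adopts, testing often moves left by writing unit tests before or alongside the code they verify, so defects surface during development rather than at system or acceptance test. The sketch below assumes a hypothetical parse_version function purely for illustration.

    import unittest

    def parse_version(text: str):
        """Hypothetical function under test: parse a 'major.minor.patch' string."""
        major, minor, patch = (int(part) for part in text.strip().split("."))
        return major, minor, patch

    class ParseVersionTest(unittest.TestCase):
        # Written before (or alongside) the implementation, so parsing defects
        # are caught during development rather than late in the lifecycle.
        def test_parses_three_fields(self):
            self.assertEqual(parse_version("2.10.3"), (2, 10, 3))

        def test_rejects_malformed_input(self):
            with self.assertRaises(ValueError):
                parse_version("2.x.3")

    if __name__ == "__main__":
        unittest.main()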

In the first half of this year, the SEI blog has experienced unprecedented growth, with visitors in record numbers learning more about our work in big data, secure coding for Android, malware analysis, Heartbleed, and V Models for Testing. In the first six months of 2014 (through June 20), the SEI blog has logged 60,240 visits, which is nearly comparable with the entire 2013 yearly total of 66,757 visits. As we reach the mid-year point, this blog posting takes a look back at our most popular areas of work (at least according to you, our readers) and highlights our most popular blog posts for the first half of 2014, as well as links to additional related resources that readers might find of interest.

The verification and validation of requirements are a critical part of systems and software engineering. The importance of verification and validation (especially testing) is a major reason that the traditional waterfall development cycle underwent a minor modification to create the V model, which links early development activities to their corresponding later testing activities. This blog post introduces three variants on the V model of system or software development that make it more useful to testers, quality engineers, and other stakeholders interested in the use of testing as a verification and validation method.

In the first blog entry of this two-part series on common testing problems, I addressed the fact that testing is less effective, less efficient, and more expensive than it should be. This second posting highlights results of an analysis that documents problems that commonly occur during testing. Specifically, this series of posts identifies and describes 77 testing problems organized into 14 categories; lists, for each, the potential symptoms by which it can be recognized, its potential negative consequences, and its potential causes; and makes recommendations for preventing these problems or mitigating their effects.

A widely cited study for the National Institute of Standards and Technology (NIST) reports that inadequate testing methods and tools annually cost the U.S. economy between $22.2 billion and $59.5 billion, with roughly half of these costs borne by software developers in the form of extra testing and half by software users in the form of failure avoidance and mitigation efforts. The same study notes that between 25 and 90 percent of software development budgets are often spent on testing. This posting, the first in a two-part series, highlights results of an analysis that documents problems that commonly occur during testing. Specifically, this series of posts identifies and describes 77 testing problems organized into 14 categories; lists the potential symptoms by which each can be recognized, its potential negative consequences, and its potential causes; and makes recommendations for preventing these problems or mitigating their effects.

Testing plays a critical role in the development of software-reliant systems. Even with the most diligent efforts of requirements engineers, designers, and programmers, faults inevitably occur. These faults are most commonly discovered and removed by testing the system and comparing what it does to what it is supposed to do. This blog posting summarizes a method that improves testing outcomes (including efficacy and cost) in a software-reliant system by using an architectural design approach, which describes a coherent set of architectural decisions taken by architects to help meet the behavioral and quality attribute requirements of systems being developed.