
What to Fix? Distinguishing Between Design and Non-design Rules in Automated Tools

Conference Paper
This paper describes an empirical study using a structured categorization approach to manually classify 466 software quality rules from three industry tools.
Publisher

IEEE

Abstract

This conference paper appears in the Proceedings of the IEEE International Conference on Software Architecture (ICSA 2017).

Design problems, frequently the result of optimizing for delivery speed, are a critical part of long-term software costs. Automatically detecting such design issues is a high priority for software practitioners. Software quality tools promise automatic detection of common software quality rule violations. However, since these tools bundle many rules together, including rules for code quality, it is hard for users to tell which rules identify design issues in particular. Research has focused on comparing these tools on open source projects, but these comparisons have not examined whether the rules are relevant to design. We conducted an empirical study using a structured categorization approach and manually classified 466 software quality rules from three industry tools—CAST, SonarQube, and NDepend. We found that most of these rules were easily labeled as either non-design (55%) or design (19%). The remainder (26%) resulted in disagreements among the labelers. Our results are a first step toward formalizing a definition of a design rule, to support automatic detection.
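To make the distinction concrete, here is a minimal, hypothetical Java sketch of the two kinds of rule violations the paper contrasts. The class names and the specific rules (a "magic number" check and a cyclic-dependency check) are illustrative assumptions, not necessarily rules from the study's three tools; they are the sort of checks such tools commonly ship.

```java
// Non-design violation: a "magic number" (a typical code-quality rule).
// Fixing it is a local, mechanical edit with no architectural impact.
class InvoiceFormatter {
    double applyTax(double amount) {
        return amount * 1.19; // flagged: unexplained numeric literal
    }
}

// Design violation: a cyclic dependency between two classes
// (a typical design rule). Fixing it requires restructuring,
// e.g., introducing an interface to break the cycle.
class OrderService {
    BillingService billing; // OrderService depends on BillingService...
}

class BillingService {
    OrderService orders;    // ...which depends back on OrderService.
}
```

The practical difference is scope: the first finding can be resolved line by line, while the second implicates the relationships between modules, which is why separating the two categories of rules matters for prioritizing fixes.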

Supplemental Materials

Design Rule Classification Rubric and Guidance: This document provides detailed guidance for how to use our classification scheme to identify design rules in static analysis tools.

Design Rules and Analysis Spreadsheet: This Excel document contains the lists of design rules from the static analysis tools analyzed in this study, as well as the final reconciliation results.