What is technical debt? Why identify technical debt? Shouldn't it simply be captured as defects and bugs? Concretely communicating technical debt and its consequences is of interest to both researchers and software engineers. Without validated tools and techniques to achieve this goal with repeatable results, developers resort to ad hoc practices, most commonly using issue trackers or backlog-management practices to capture and track technical debt. We examined 1,264 issues from four issue trackers used in open-source, industry, and government projects and identified 109 examples of technical debt. Our study, documented in the paper Got Technical Debt? Surfacing Elusive Technical Debt in Issue Trackers, revealed that technical debt has entered the vernacular of developers as they discuss development tasks through issue trackers. Even when developers did not explicitly label issues as technical debt, we could identify technical debt items in these issue trackers using a classification method we developed. We use our results to motivate an improved definition of technical debt and an approach to explicitly report it in issue trackers. In this blog post, we describe our classification method and some implications of tracking debt for both practice and research.
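To give a flavor of what such classification can look like in practice, here is a minimal, hypothetical sketch (not the classifier from our paper): flagging issue-tracker text that contains phrases developers commonly use when describing debt. The indicator list and issue texts are invented for illustration.

```python
# Hypothetical sketch of keyword-based screening for technical debt in
# issue-tracker text. The phrase list below is illustrative only; it is
# not the classification method described in the paper.
DEBT_INDICATORS = [
    "technical debt", "workaround", "hack", "refactor",
    "cleanup", "legacy", "temporary fix",
]

def may_describe_debt(issue_text: str) -> bool:
    """Return True if the issue text contains a debt indicator phrase."""
    text = issue_text.lower()
    return any(phrase in text for phrase in DEBT_INDICATORS)

# Example issues (invented): only the second mentions debt-like language.
issues = [
    "Crash when saving a file with a unicode name",
    "Remove temporary fix in the parser and refactor error handling",
]
flagged = [i for i in issues if may_describe_debt(i)]
```

A keyword screen like this only surfaces candidates; as our study found, many debt items are described without any such label, which is why a richer classification method is needed.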
This blog post was co-authored by Nancy Mead, SEI Fellow.
To ensure software will function as intended and is free of vulnerabilities (also known as software assurance), software engineers must consider security early in the lifecycle, when the system is being designed and architected. Recent research on vulnerabilities supports this claim: nearly half the weaknesses cataloged in the Common Weakness Enumeration (CWE) repository are classified as design weaknesses. These weaknesses are introduced early in the lifecycle and cannot simply be patched away in later phases. They result from poor or incomplete security requirements, system designs, and architecture choices in which security has not been given appropriate priority. Effective use of metrics and methods that systematically consider security risk can highlight gaps earlier in the lifecycle, before their impact is felt and while the cost of addressing them is lower. This blog post explores the connection between measurement, methods for software assurance, and security. Specifically, we focus on three early-lifecycle methods that have shown promise: the Software Assurance Framework (SAF), the Security Quality Requirements Engineering (SQUARE) Methodology, and the Security Engineering Risk Analysis (SERA) Framework.
The mix of program-scale Agile and technical baseline ownership drives cheaper, better, and faster deployment of software-intensive systems. Although these practices aren't new, the SEI has seen how their combination can have dramatic effects. The Air Force Distributed Common Ground System (AF DCGS)--the Air Force's primary weapon system for intelligence, surveillance, reconnaissance, planning, direction, collection, processing, exploitation, analysis, and dissemination--employs a global communications architecture that connects multiple intelligence platforms and sensors. The AF DCGS challenge is to bring new sensors and processing applications online quickly and efficiently. Other large government software-intensive systems face similar challenges. The SEI has found that Agile cultural transformation--along with strong technical baseline ownership--is critical for programs like DCGS to deliver new capability faster and with greater confidence. Together, these strategies can help programs adopt incremental and iterative approaches that deliver technical capability in more frequent, more manageable increments. In this blog post, I present the SEI's experiences in helping large Department of Defense (DoD) programs, such as AF DCGS, use concepts like owning the technical baseline and Agile software development techniques to deliver new capability on a regular basis.
In 2015, the National Vulnerability Database (NVD) recorded 6,488 new software vulnerabilities, and the NVD documents a total of 74,885 software vulnerabilities discovered between 1988 and 2016. Static analysis tools examine code for flaws, including those that could lead to software security vulnerabilities, and produce diagnostic messages ("alerts") indicating the location of the purported flaw in the source code, the nature of the flaw, and often additional contextual information. A human auditor then evaluates the validity of the purported code flaws. The effort required to manually audit all alerts and repair all confirmed code flaws is often too great for a project's budget and schedule. Auditors therefore need tools that allow them to triage alerts, strategically prioritizing the alerts for examination. This blog post describes research we are conducting that uses classification models to help analysts and coders prioritize which alerts to address.
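As a rough illustration of the idea (not the models from our research), the sketch below ranks alerts by a logistic score computed from hypothetical features with hand-picked weights. A real classifier would learn such weights from historical audit determinations; the feature names, weights, and alert data here are all invented.

```python
import math

# Illustrative sketch of alert triage: rank static-analysis alerts by an
# estimated probability of being a true positive. Features and weights
# are hypothetical; a trained model would fit them to audited data.
WEIGHTS = {"prior_true_rate": 3.0, "code_churn": 1.5, "call_depth": -0.5}
BIAS = -1.0

def true_positive_score(alert: dict) -> float:
    """Logistic score in (0, 1) from the alert's feature values."""
    z = BIAS + sum(WEIGHTS[name] * alert[name] for name in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))

alerts = [
    {"id": "A1", "prior_true_rate": 0.8, "code_churn": 0.9, "call_depth": 0.1},
    {"id": "A2", "prior_true_rate": 0.1, "code_churn": 0.2, "call_depth": 0.9},
]
# Audit the highest-scoring alerts first.
triaged = sorted(alerts, key=true_positive_score, reverse=True)
```

Prioritizing this way lets auditors spend their limited budget on the alerts most likely to be confirmed flaws, rather than working through the alert list in arbitrary order.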
As part of an ongoing effort to keep you informed about our latest work, I would like to let you know about some recently published SEI technical reports, white papers, webinars, and podcasts. These publications highlight the latest work of SEI technologists in military situational analysis, software architecture, insider threat, honeynets, and threat modeling. This post includes a listing of each publication, author(s), and links where they can be accessed on the SEI website.
This blog post was co-authored by Dan Klinedinst.
Automobiles are often referred to as "computers on wheels," with newer models containing more than 100 million lines of code. All this code provides features such as forward collision warning and automatic emergency braking to keep drivers safe, along with other benefits such as traffic detection, smartphone integration, and enhanced navigation. These features also introduce an increased risk of compromise, as researchers Chris Valasek and Charlie Miller (who work for Uber's Advanced Technology Center in Pittsburgh) demonstrated in a July 2015 Wired story in which they hacked into a Jeep Cherokee using a zero-day exploit. The Jeep Hack, as it has come to be known, highlighted an underlying issue for all vehicles: when automobiles are built, manufacturers work from a threat model that covers physical defects but omits the vulnerabilities that make a vehicle susceptible to intrusion and remote compromise. This blog post highlights the first phase of our research on making connected vehicles more secure by testing devices that connect into the vehicle itself.
Recent research has demonstrated that in large-scale software systems, bugs seldom exist in isolation. As detailed in a previous post in this series, bugs are often architecturally connected, and these architectural connections are design flaws. Static analysis tools cannot find many of these flaws, so the flaws are typically not addressed early in the software development lifecycle. If they are detected at all, they are found after the software has been in use, at which point they are far more costly and time-consuming to address.
In our first post in this series, we presented a tool that supports a new architecture model that can identify structures in the design (based on an analysis of a project's code base) that have a high likelihood of containing bugs. Typically, investment in refactoring to remove such design flaws has been difficult for architects to justify or quantify to their managers. The costs of refactoring are immediate and up-front. The benefits have, in the past, been vague and long-term. And so managers typically refuse to invest in refactoring.
In this post, which was excerpted from a recently published paper that I coauthored with Yuanfang Cai, Ran Mo, Qiong Feng, Lu Xiao, Serge Haziyev, Volodymyr Fedak, and Andriy Shapochka, we present a case study of our approach with SoftServe Inc., a leading software outsourcing company. In this case study we show how we can represent architectural technical debt (hereinafter, architectural debt) concretely, in terms of its cost and schedule implications (issues of importance that are easily understood by project managers). In doing so, we can create business cases to justify refactoring to remove root causes of the architectural debt.
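The cost and schedule framing can be illustrated with a back-of-the-envelope calculation: compare the one-time cost of a refactoring against the recurring "interest" paid on the architectural debt it removes. All numbers below are hypothetical and are not drawn from the SoftServe case study.

```python
# Illustrative business case for refactoring (hypothetical numbers):
# architectural debt is paid for continuously as extra bug-fix and
# maintenance effort; refactoring is a one-time cost that stops it.
refactor_cost_hours = 400          # one-time effort to remove the design flaw
interest_hours_per_release = 60    # extra maintenance effort per release
releases_per_year = 4

annual_interest = interest_hours_per_release * releases_per_year
breakeven_years = refactor_cost_hours / annual_interest
# With these numbers the refactoring pays for itself in under two years;
# every release after that is a net saving.
```

Framing refactoring in these terms, hours spent versus hours saved over a planning horizon, turns a vague long-term benefit into the kind of concrete cost and schedule argument that project managers can act on.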
In today's increasingly interconnected world, the information security community must be prepared to address vulnerabilities that may arise from new technologies. Understanding trends in emerging technologies can help information security professionals, leaders of organizations, and others interested in information security identify areas for further study. Researchers in the SEI's CERT Division recently examined the security of a large swath of technology domains being developed in industry and maturing over the next five years. Our team of analysts--Dan Klinedinst, Todd Lewellen, Garret Wassermann, and I--focused on identifying domains that affect not only cybersecurity but also finance, personal health, and safety. This blog post highlights the findings of our report, prepared for the Department of Homeland Security United States Computer Emergency Readiness Team (US-CERT), and provides a snapshot of our current understanding of future technologies.