Recent research has demonstrated that in large-scale software systems, bugs seldom exist in isolation. As detailed in a previous post in this series, bugs are often architecturally connected, and these architectural connections are design flaws. Because static analysis tools cannot find many of these flaws, they are typically not addressed early in the software development lifecycle. Such flaws, if they are detected at all, are found after the software has been in use, when they are far more costly and time-consuming to address.
In our first post in this series, we presented a tool that supports a new architecture model, one that can identify structures in the design, based on an analysis of a project's code base, that have a high likelihood of containing bugs. Investment in refactoring to remove such design flaws has typically been difficult for architects to justify or quantify to their managers: the costs of refactoring are immediate and up-front, while the benefits have, in the past, been vague and long-term. As a result, managers typically refuse to invest in refactoring.
In this post, which was excerpted from a recently published paper that I coauthored with Yuanfang Cai, Ran Mo, Qiong Feng, Lu Xiao, Serge Haziyev, Volodymyr Fedak, and Andriy Shapochka, we present a case study of our approach with SoftServe Inc., a leading software outsourcing company. In this case study we show how we can represent architectural technical debt (hereinafter, architectural debt) concretely, in terms of its cost and schedule implications (issues of importance that are easily understood by project managers). In doing so, we can create business cases to justify refactoring to remove root causes of the architectural debt.
In today's increasingly interconnected world, the information security community must be prepared to address vulnerabilities that may arise from new technologies. Understanding trends in emerging technologies can help information security professionals, leaders of organizations, and others interested in information security identify areas for further study. Researchers in the SEI's CERT Division recently examined the security of a large swath of technology domains being developed in industry and maturing over the next five years. Our team of analysts--Dan Klinedinst, Todd Lewellen, Garret Wassermann, and I--focused on identifying domains that impact not only cybersecurity but also finance, personal health, and safety. This blog post highlights the findings of our report prepared for the Department of Homeland Security United States Computer Emergency Readiness Team (US-CERT) and provides a snapshot of our current understanding of future technologies.
As part of an ongoing effort to keep you informed about our latest work, I would like to let you know about some recently published SEI technical reports, technical notes, and white papers. These reports highlight the latest work of SEI technologists in estimating program costs early in the development lifecycle, threat analysis mapping, risks and vulnerabilities in connected vehicles, emerging technologies, and cyber-foraging. This post includes a listing of each report, author(s), and links where the published reports can be accessed on the SEI website.
Organizations and federal agencies often struggle to adopt Agile because they do not understand the risks involved. This ongoing series on Readiness and Fit Analysis (RFA) focuses on helping federal agencies, such as the Department of Defense, the Internal Revenue Service, and the Food and Drug Administration, and other organizations in regulated settings, understand the risks involved when contemplating or embarking on a new approach to developing or acquiring software. This blog post, the seventh in the series, explores issues related to the technology environment that organizations should consider when adopting Agile approaches.
Dynamic Network Defense (or Moving Target Defense) is based on a simple premise: a moving target is harder to attack than a stationary target. In recent years, the government has invested substantially in moving target and adaptive cyber defense. This rapidly growing field has produced many new technologies--defenses that range from shuffling client-to-server assignments to protect against distributed denial-of-service (DDoS) attacks, to packet header rewriting, to rebooting servers. As researchers develop new technologies, they need a centralized reference platform where those technologies can be vetted to see where they complement each other and where they do not, as well as a standard against which future technologies can be evaluated. This blog post describes work led by researchers at the SEI's Emerging Technology Center (ETC) to develop a secure, easy-to-use, consistent development and deployment path for organizing dynamic defenses.
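To make the shuffling idea concrete, here is a minimal sketch of one moving-target tactic mentioned above: periodically remapping which server each client is assigned to, so that an attacker's map of the service goes stale between epochs. The keyed-hash scheme, function names, and parameters below are illustrative assumptions, not the design of any specific system described in the post.

```python
import hashlib

def assign_server(client_id: str, epoch: int, num_servers: int) -> int:
    """Deterministically map a client to a server for a given epoch.

    Illustrative assumption: a keyed hash over (epoch, client) gives every
    client a stable assignment within an epoch, but assignments can move
    when the epoch advances, invalidating an attacker's reconnaissance.
    """
    digest = hashlib.sha256(f"{epoch}:{client_id}".encode()).digest()
    return int.from_bytes(digest[:8], "big") % num_servers

# Within one epoch, assignments are stable; across epochs, they may shuffle.
for epoch in (0, 1):
    assignments = {c: assign_server(c, epoch, 4) for c in ("alice", "bob")}
    print(epoch, assignments)
```

Because the mapping depends only on the epoch counter and client identifier, all front-end nodes can compute it independently without coordinating state, which is one reason hash-based shuffles appear in DDoS-mitigation designs.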
Much of the malware that we analyze includes some type of remote access capability. Malware analysts broadly refer to this type of malware as a remote access tool (RAT). Many well-known malware families, such as DarkComet, possess RAT-like capabilities. As described in this series of posts, CERT researchers are exploring ways to automate common malware analysis activities. In a previous post, I discussed the Pharos Binary Analysis Framework and tools to support reverse engineering object-oriented code. In this post, I will explain how to statically characterize program behavior using application programming interface (API) calls and then discuss how we automated this reasoning with a malware analysis tool that we call ApiAnalyzer.
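The core idea of characterizing behavior from API calls can be sketched briefly: API names recovered from a binary are matched against signatures of higher-level capabilities. The capability names, API sets, and matching rule below are illustrative assumptions for exposition; they are not ApiAnalyzer's actual rules or signature format.

```python
# Hypothetical capability signatures: a behavior is flagged only when
# every API call in its required set appears in the binary. (These sets
# are illustrative, not ApiAnalyzer's real detection logic.)
CAPABILITY_SIGNATURES = {
    "keylogging": {"SetWindowsHookExA", "GetAsyncKeyState"},
    "remote_shell": {"CreatePipe", "CreateProcessA"},
    "network_beacon": {"InternetOpenA", "InternetConnectA", "HttpSendRequestA"},
}

def characterize(observed_calls):
    """Return the capabilities whose required API calls all appear."""
    observed = set(observed_calls)
    return {cap for cap, required in CAPABILITY_SIGNATURES.items()
            if required <= observed}

# Example: API names recovered statically from a sample's disassembly.
calls = ["CreatePipe", "CreateProcessA", "GetAsyncKeyState", "CloseHandle"]
print(sorted(characterize(calls)))  # prints ['remote_shell']
```

Real tools reason over more than name presence (argument values, call ordering, and control flow all matter), but this set-matching view conveys why statically recovered API usage is a useful behavioral fingerprint.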
In cyber systems, the identities of devices can easily be spoofed and are frequent targets of cyber attacks. Once an identity is fabricated, stolen, or spoofed, it may be used as a nexus to systems, forming the basis of a Sybil attack. To address these and other problems associated with identity deception, researchers at the Carnegie Mellon University Software Engineering Institute, New York University's Tandon School of Engineering and Courant Institute of Mathematical Sciences, and the University of Göttingen (Germany) collaborated to develop a deception-resistant identity management system inspired by a biological system: the ant colony. This blog post highlights our research contributions.
This is the third installment in a series of three blog posts highlighting seven recommended practices for acquiring intellectual property. This content was originally published on the Cyber Security & Information Analysis Center's online environment, SPRUCE (Systems and Software Producibility Collaboration Environment). The first post in the series explored the challenges to acquiring intellectual property. The second post presented the first four of the seven practices. This post presents the final three practices, as well as the conditions under which organizations will derive the most benefit from them.