
The Top 10 Blog Posts of 2019

Douglas C. Schmidt
• SEI Blog

Every January on the SEI Blog, we present the 10 most-visited posts of the previous year. This year's list of top 10 is presented in reverse order and features posts published between January 1, 2019, and December 31, 2019.

10. Evaluating Threat-Modeling Methods for Cyber-Physical Systems
9. Managing the Consequences of Technical Debt: 5 Stories from the Field
8. The Vectors of Code: On Machine Learning for Software
7. Business Email Compromise: Operation Wire Wire and New Attack Vectors
6. Six Free Tools for Creating a Cyber Simulator
5. Operation Cloud Hopper Case Study
4. Deep Learning and Satellite Imagery: DIUx Xview Challenge
3. Using OOAnalyzer to Reverse Engineer Object Oriented Code with Ghidra
2. The Promise of Deep Learning on Graphs
1. Why Software Architects Must Be Involved in the Earliest Systems Engineering Activities

10. Evaluating Threat-Modeling Methods for Cyber-Physical Systems
by Nataliya Shevchenko

Addressing cybersecurity for a complex system, especially for a cyber-physical system of systems (CPSoS), requires a strategic approach during the entire lifecycle of the system. Examples of CPSoS include rail transport systems, power plants, and integrated air-defense capability. All these systems consist of large physical, cyber-physical, and cyber-only subsystems with complex dynamics. In the first blog post in this series, I summarized 12 available threat-modeling methods (TMMs). In this post, I will identify criteria for choosing and evaluating a threat-modeling method (TMM) for a CPSoS.

A CPSoS is a system whose components operate and are managed independently. Its components must be able to function fully and independently even when the system of systems is disassembled. These components are typically acquired separately and integrated later, and they may have a physical, cyber, or mixed nature.

The definition of a CPSoS implies a diversity of potential threats that can compromise the integrity of the system, targeting different aspects ranging from purely cyber-related vulnerabilities to the safety of the system as a whole. The traditional approach used to identify threats is to employ one or more TMMs early in the development cycle. Choosing a TMM can be a challenging process by itself. The TMM you choose should be applicable to your system and to the needs of your organization.

CPSoSs are connected through one or more cyber networks and run by one or more human operators. The components of those systems are often distributed and are sometimes partially autonomous, with multi-level control and management. Because CPSoSs are safety- and/or life-critical, threat modeling for these kinds of systems should address the full spectrum of threats: kinetic, physical, cyber-physical, cyber-only, supply chain, and insider threats.

Read the post in its entirety.

9. Managing the Consequences of Technical Debt: 5 Stories from the Field
by Ipek Ozkaya and Rod Nord

If you participate in the development of software, the chances are good that you have experienced the consequences of technical debt: the additional cost and rework incurred over the software lifecycle when a short-term, easy solution is chosen instead of a better one. Understanding and managing technical debt is an important goal for many organizations. Proactively managing technical debt promises to give organizations the ability to control the cost of change in a way that integrates technical decision making and software economics seamlessly with software engineering delivery. In this post, we provide real-world examples that illustrate the consequences of technical debt for organizations. These examples are excerpted from Chapter 1 of a book we wrote with our colleague Philippe Kruchten, Managing Technical Debt: Reducing Friction in Software Development, which has just been published by Addison-Wesley as part of the SEI Series in Software Engineering.

Read the post in its entirety.

8. The Vectors of Code: On Machine Learning for Software
by Zach Kurtz

This blog post provides a light technical introduction to machine learning (ML) for problems involving computer code, such as detecting malicious executables or vulnerabilities in source code. Code vectors enable ML practitioners to tackle code problems that were previously approachable only with highly specialized software engineering knowledge. Conversely, code vectors can help software analysts leverage general, off-the-shelf ML tools without needing to become ML experts.

In this post, I introduce some use cases for ML for code. I also explain why code vectors are necessary and how to construct them. Finally, I touch on current and future challenges in code vector research at the SEI.
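To make the idea of a code vector concrete, here is a minimal sketch of one of the simplest possible schemes: mapping a snippet of source code to a fixed-length vector of token counts. This bag-of-tokens approach, the `VOCAB` list, and the crude tokenizer are illustrative assumptions, not the method described in the post; real code-vector techniques typically use learned embeddings over richer program representations.

```python
import re
from collections import Counter

# A hypothetical vocabulary of token kinds; real code-vector schemes
# are far richer than this illustration.
VOCAB = ["if", "for", "while", "return", "call", "ident", "number"]

def tokenize(code):
    """Split source text into crude tokens: words and integer literals."""
    return re.findall(r"[A-Za-z_]\w*|\d+", code)

def code_vector(code):
    """Map a code snippet to a fixed-length count vector over VOCAB.
    Unknown identifiers are bucketed as 'ident', numbers as 'number'."""
    counts = Counter()
    for tok in tokenize(code):
        if tok in VOCAB:
            counts[tok] += 1
        elif tok.isdigit():
            counts["number"] += 1
        else:
            counts["ident"] += 1
    return [counts[kind] for kind in VOCAB]

vec = code_vector("if x: return x * 2")
```

Once snippets are vectors of equal length, any off-the-shelf classifier can consume them, which is the point the post makes about lowering the barrier between software analysis and ML tooling.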

Read the post in its entirety.

7. Business Email Compromise: Operation Wire Wire and New Attack Vectors
by Anne Connell

In June 2018, federal authorities announced a significant coordinated effort to disrupt business email compromise (BEC) schemes that are designed to intercept and hijack wire transfers from businesses and individuals. Operation Wire Wire, a coordinated law enforcement effort by the U.S. Department of Justice, U.S. Department of Homeland Security, U.S. Department of the Treasury, and the U.S. Postal Inspection Service, was conducted over a six-month period and resulted in 74 arrests in the United States and overseas, including 29 in Nigeria, and three in Canada, Mauritius, and Poland. The operation also resulted in the seizure of nearly $2.4 million and the disruption and recovery of approximately $14 million in fraudulent wire transfers.

In this blog post, I will review the information that can be gleaned from a close examination of Operation Wire Wire and another attack involving a Texas energy company. This post will also offer guidance on how individuals and organizations can protect themselves from these sophisticated new modes of attack.

Read the post in its entirety.

6. Six Free Tools for Creating a Cyber Simulator
by Joseph Mayes

It can be hard for developers of cybersecurity training to create realistic simulations and training exercises when trainees are operating in closed (often classified) environments with no ability to connect to the Internet. To address this challenge, the CERT Workforce Development (CWD) Team recently released a suite of open-source and freely available tools for use in creating realistic Internet simulations for cybersecurity training and other purposes. The tools improve the realism, efficiency, and cost effectiveness of cybersecurity training. In this blog post, I will describe these tools and provide information about how to download, learn more about, and use them.

Since its inception more than 25 years ago, the SEI's CERT Division has been developing and delivering cybersecurity training and exercises on behalf of its sponsors, including the U.S. Department of Defense, the FBI, the National Security Agency (NSA), and other agencies. The purpose of this suite of tools is to aid in the creation of realistic simulations in such environments. In particular, these tools help developers of training scenarios and environments create realistic cyber simulations that can be used in closed environments. The tools also provide trainees with the realistic illusion of being on the Internet without running the risk, for example, of working with live malware that could escape onto the Internet during training.

Read the post in its entirety.

5. Operation Cloud Hopper Case Study
by Nathaniel Richmond

In December 2018, a grand jury indicted members of the APT10 group for a tactical campaign known as Operation Cloud Hopper, a global series of sustained attacks against managed service providers and, subsequently, their clients. These attacks aimed to gain access to sensitive intellectual and customer data. US-CERT noted that a defining characteristic of Operation Cloud Hopper was that, upon gaining access to a cloud service provider (CSP), the attackers used the cloud infrastructure to hop from one target to another, gaining access to sensitive data in a wide range of government and industrial entities in healthcare, manufacturing, finance, and biotech in at least a dozen countries. In this blog post, part of an ongoing series of posts on cloud security, I explore the tactics used in Operation Cloud Hopper and whether the attacks could have been prevented by applying the best practices that we outlined for helping organizations keep their data and applications secure in the cloud.

Last year, my colleague Tim Morrow and former colleague Don Faatz published 12 Risks, Threats, & Vulnerabilities in Moving to the Cloud and Best Practices for Cloud Security. This post will use Operation Cloud Hopper as a case study to examine what we got right in these posts, what we got wrong, and what we may have missed. We also hope to demonstrate how abstract concepts associated with cybersecurity can be translated into specific actions and lessons.

Read the post in its entirety.

4. Deep Learning and Satellite Imagery: DIUx Xview Challenge
by Ritwik Gupta

In 2017 and 2018, the United States witnessed milestone years of climate- and weather-related disasters, from droughts and wildfires to cyclones and hurricanes. Increasingly, satellites are playing an important role in helping emergency responders assess the damage of a weather event and find victims in its aftermath. Most recently, satellites have tracked the devastation wrought by the California wildfires from space. The United States military, which is often the first on the scene of a natural disaster, is increasingly interested in the use of deep learning to automate the identification of victims and structures in satellite imagery to assist with humanitarian assistance and disaster recovery (HADR) efforts.

To that end, the Defense Innovation Unit (DIU) recently launched the xView 2018 Detection Challenge, which was conducted by the Pentagon in partnership with the National Geospatial-Intelligence Agency, to seek out innovative uses of computer vision techniques to more accurately detect objects in satellite imagery. As described in this blog post, I worked with a team of researchers on the xView challenge, earning a fifth-place finish. This activity is our latest effort in using machine learning and artificial intelligence to assist the federal government with HADR efforts.

Read the post in its entirety.


3. Using OOAnalyzer to Reverse Engineer Object Oriented Code with Ghidra
by Jeff Gennari

Object-oriented programs continue to pose many challenges for reverse engineers and malware analysts. C++ classes tend to result in complex arrangements of assembly instructions and sophisticated data structures that are hard to analyze at the machine code level. We've long sought to simplify the process of reverse engineering object-oriented code by creating tools, such as OOAnalyzer, which automatically recovers C++-style classes from executables.

OOAnalyzer includes utilities to import OOAnalyzer results into other reverse engineering frameworks, such as the IDA Pro Disassembler. I'm pleased to announce that we've updated our Pharos Binary Analysis Framework on GitHub to include a new plugin to import OOAnalyzer analysis into the recently released Ghidra software reverse engineering (SRE) tool suite. In this post, I will explain how to use this new OOAnalyzer Ghidra Plugin to import C++ class information into Ghidra and interpret results in the Ghidra SRE framework.

The Ghidra SRE tool suite was publicly released by the National Security Agency. This framework provides many useful reverse engineering services, including disassembly, function partitioning, decompilation, and various other types of program analyses. Ghidra is open source and designed to be easily extendable via plugins. We have been exploring ways to enhance Ghidra analysis with the Pharos reverse engineering output, and the OOAnalyzer Ghidra Plugin is our first tool to work with Ghidra.

Read the post in its entirety.

2. The Promise of Deep Learning on Graphs
by Oren Wright

A growing number of Department of Defense (DoD) data problems are graph problems: the data from sources such as sensor feeds, web traffic, and supply chains are full of irregular relationships that require graphs to represent explicitly and mathematically. For example, modern test and evaluation produces massive, heterogeneous datasets, and analysts can use graphs to reveal otherwise hidden patterns in these data, affording the DoD a far more complete understanding of a system's effectiveness, survivability, and safety. But such datasets are growing increasingly large and increasingly complex, demanding new approaches for proper analysis. Machine learning seems to recommend itself to such datasets, but conventional machine learning approaches to graph problems are sharply limited.

Deep learning is the current ne plus ultra for big data problems, using brain-inspired algorithms to 'learn' from massive amounts of data and outperform conventional optimization and decision systems. Deep learning, however, hasn't been meaningfully extended beyond Euclidean data. Attempts to generalize deep learning to non-Euclidean domains have been dubbed geometric deep learning by the machine learning community.

Read the post in its entirety.

1. Why Software Architects Must Be Involved in the Earliest Systems Engineering Activities
by Sarah Sheard

Today's major defense systems rely heavily on software-enabled capabilities. However, many defense programs acquiring new systems first determine the physical items to develop, assuming the contractors for those items will provide all needed software for the capability. But software by its nature spans physical items: it provides the inter-system communications that have a direct influence on most capabilities, and thus must be architected intelligently, especially when pieces are built by different contractors. If this architecture step is not done properly, a software-reliant project can be set up to fail from the first architectural decision.

Example: The Global Positioning System (GPS) was divided into ground, user, and satellite segments, and the government issued different contracts for each segment. Many interaction and schedule problems resulted, partly because the segments had different schedules. For example, a satellite was due to launch with new software, but the existing ground-segment software was not designed to work with it, and the new ground-segment software was not complete.

If, instead, system acquirers ensure that systems engineers address software concerns at the same time as the physical solution is conceptualized, acquirers can opt for a slightly different physical system, whose software architecture is tuned to optimize the provided capabilities.

Read the post in its entirety.

Looking Ahead in 2020

In the coming months, look for posts highlighting our work in quantum computing, machine learning, and system resilience. A new post is published on the SEI Blog every Monday morning.

Additional Resources

Download the latest publications from SEI researchers at our digital library
http://resources.sei.cmu.edu/library/.
