
Bitcoin, Blockchain, Machine Learning, and Ransomware: The Top 10 Posts of 2017

Each year since the blog's inception, we have presented the year's 10 most-visited posts in descending order, ending with the most popular. This post counts down the 10 most popular posts published between January 1, 2017, and December 31, 2017.

10. Army Robotics in the Military
9. Automated Code Repair in the C Programming Language
8. What Is Bitcoin? What Is Blockchain?
7. Pharos Binary Static Analysis Tools Released on GitHub
6. Best Practices for Network Border Protection
5. Five Perspectives on Scaling Agile
4. Best Practices for NTP Services
3. Six Best Practices for Securing a Robust Domain Name System (DNS) Infrastructure
2. Machine Learning in Cybersecurity
1. Ransomware: Best Practices for Prevention and Response

Army Robotics in the Military
by Jonathan Chu on June 12, 2017

The future of autonomy in the military could include unmanned cargo delivery; micro-autonomous air/ground systems to enhance platoon, squad, and soldier situational awareness; and manned and unmanned teaming in both air and ground maneuvers, according to a 2016 presentation by Robert Sadowski, chief roboticist for the U.S. Army Tank Automotive Research Development and Engineering Center (TARDEC), which researches and develops advanced technologies for ground systems. One day, robot medics may even carry wounded soldiers out of battle. The system behind these feats is ROS-M, the militarized version of the Robot Operating System (ROS), an open-source set of software libraries and tools for building robot applications. This post describes the work of SEI researchers to create an environment within ROS-M for developing unmanned systems that spurs innovation and reduces development time.

ROS-M aims to add software and hardware simulation tools, cyber assurance checking, a code repository, and a training environment for warfighters to the upcoming ROS 2.0. Army robotics and autonomous systems are key themes in TARDEC's strategy and in the U.S. Army Operating Concept.

Read the complete post.

Automated Code Repair in the C Programming Language
by Will Klieber and Will Snavely on January 16, 2017

Finding violations of secure coding guidelines in source code is daunting, but fixing them is an even greater challenge. We are creating automated tools for source code transformation. Experience in examining software bugs reveals that many security-relevant bugs follow common patterns (which can be automatically detected) and that there are corresponding patterns for repair (which can be performed by automatic program transformation). For example, integer overflow in calculations related to array bounds or indices is almost always a bug. While static analysis tools can help, they typically produce an enormous number of warnings, and teams are able to eliminate only a small percentage of the vulnerabilities identified. As a result, code bases often contain an unknown number of security vulnerabilities. This blog post describes the SEI's research in automated code repair, which can eliminate security vulnerabilities much faster than the existing manual process and at a much lower cost. While this research focuses on the C programming language, it applies to other languages as well.

Based on our experience with the DoD source code analysis labs, we know that most software contains many vulnerabilities. Most are caused by simple coding errors. Static analysis tools, typically used late in the development process, produce a huge number of diagnostics. Even after excluding false positives, the volume of true positives can overwhelm the abilities of development teams to fix the code. Consequently, the team eliminates only a small percentage of the vulnerabilities. The existing installed codebases in the DoD now consist of billions of lines of C code that contain an unknown number of security vulnerabilities.
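To illustrate the detect-a-pattern, apply-a-transformation idea in miniature, consider a repair that rewrites calls to the notoriously unsafe C function gets into bounded fgets calls. This toy script is our own sketch of the concept, not the SEI's tool, and a real repair tool would use static analysis rather than a regular expression:

```python
import re

def repair_gets(c_source: str) -> str:
    """Toy pattern-based repair: rewrite gets(buf) as a bounded fgets call.

    Assumes buf is a stack array, so sizeof(buf) yields its capacity;
    a production tool would verify this before transforming the code.
    """
    pattern = re.compile(r"\bgets\(\s*(\w+)\s*\)")
    return pattern.sub(r"fgets(\1, sizeof(\1), stdin)", c_source)

before = "char line[128];\ngets(line);"
print(repair_gets(before))  # the gets call becomes fgets(line, sizeof(line), stdin);
```

The same shape (a detector paired with a mechanical rewrite) generalizes to richer patterns, such as the integer-overflow-in-bounds-calculation example above, once the detector is backed by real program analysis.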

Read the complete post.

What Is Bitcoin? What Is Blockchain?
by Eliezer Kanal on July 24, 2017

Blockchain technology was conceived a little over 10 years ago. In that short time, it went from being the foundation for a relatively unknown alternative currency to being the "next big thing" in computing, with industries from banking to insurance to defense to government investing billions of dollars in blockchain research and development. This blog post, the first of two posts about the SEI's exploration of DoD applications for blockchain, provides an introduction to this rapidly emerging technology.

At its most basic, a blockchain is simply a distributed ledger that tracks transactions among parties. What makes it interesting are its fundamental properties, which apply to every single transaction:

  • All parties agree that the transaction occurred.
  • All parties agree on the identities of the individuals participating in the transaction.
  • All parties agree on the time of the transaction.
  • The details of the transaction are easy to review and not subject to dispute.
  • Evidence of the transaction persists, immutably, over time.

This combination of properties results in a system that, by design, timestamps and records all transactions in a secure and permanent manner and is easily auditable in the future. In addition, because of its distributed nature, the system is highly resilient against downtime. Together, these properties make for an appealing infrastructure for a wide variety of applications and indeed explain much of the interest in blockchain technologies.
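The immutability property comes from chaining: each entry commits to the hash of its predecessor, so altering any past transaction invalidates every entry after it. A minimal sketch in Python (our own illustration, not any production blockchain; real systems add consensus, signatures, and proof of work):

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    # Serialize deterministically, then hash the block's contents.
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append_block(chain: list, transaction: str, timestamp: int) -> None:
    prev = chain[-1]["hash"] if chain else "0" * 64
    block = {"transaction": transaction, "timestamp": timestamp, "prev_hash": prev}
    block["hash"] = block_hash(block)  # commit to contents plus predecessor
    chain.append(block)

def chain_is_valid(chain: list) -> bool:
    # Recomputing each hash detects any after-the-fact edit.
    for i, block in enumerate(chain):
        body = {k: v for k, v in block.items() if k != "hash"}
        if block["hash"] != block_hash(body):
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

chain = []
append_block(chain, "alice pays bob 5", 1500000000)
append_block(chain, "bob pays carol 2", 1500000060)
print(chain_is_valid(chain))                       # True
chain[0]["transaction"] = "alice pays bob 500"
print(chain_is_valid(chain))                       # False: tampering is detected
```

Because the first block's hash no longer matches its (altered) contents, validation fails, which is exactly the tamper evidence that makes the ledger trustworthy without a central authority.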

Before describing blockchains in general terms, this post describes one of the simplest and best-known implementations in use today: the cryptocurrency Bitcoin. The blockchain in Bitcoin literally acts as a ledger; it keeps track of the balances for all users and updates them as cryptocurrency changes hands.

The Bitcoin application allows for two types of users, whom we will refer to as participants and miners. Participants are individuals who want to use Bitcoin as a currency, sending and receiving Bitcoins in exchange for goods and services. These users run the Bitcoin software to create a "wallet" from which they can send Bitcoins to, and receive Bitcoins from, other participants. The transactions--literally just messages broadcast to the Bitcoin network announcing that this user gave a specific number of Bitcoins to that user--indicate to users that the ledger should be updated.
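Conceptually, applying such a transaction to the ledger is just a guarded balance update. The toy ledger below is our own illustration of that idea (Bitcoin itself tracks unspent outputs rather than simple account balances); it rejects transactions the sender cannot fund:

```python
def apply_transaction(balances: dict, sender: str, receiver: str, amount: float) -> bool:
    """Update the ledger if the sender has sufficient funds; reject otherwise."""
    if amount <= 0 or balances.get(sender, 0) < amount:
        return False  # invalid or unfunded transaction: ledger unchanged
    balances[sender] -= amount
    balances[receiver] = balances.get(receiver, 0) + amount
    return True

ledger = {"alice": 10.0, "bob": 2.0}
print(apply_transaction(ledger, "alice", "bob", 3.0))    # True
print(ledger)                                            # {'alice': 7.0, 'bob': 5.0}
print(apply_transaction(ledger, "bob", "alice", 100.0))  # False: insufficient funds
```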

Read the complete post.

Pharos Binary Static Analysis Tools Released on GitHub
by Jeff Gennari on August 28, 2017

The Pharos Binary Analysis Framework is a CERT-created framework that supports reverse engineering of binaries. It builds upon the ROSE compiler infrastructure, developed by Lawrence Livermore National Laboratory, for disassembly, control flow analysis, instruction semantics, and more. Pharos uses these features to automate common reverse engineering tasks. We recently updated our framework on GitHub to include many new tools, improvements, and bug fixes. This post focuses on the tool-specific changes.

OOAnalyzer (based on one of the earliest Pharos analysis tools, ObjDigger) recovers C++-style classes from executables. In this release, we've changed the fundamental way that we analyze object-oriented data structures. The OOAnalyzer methodology for identifying classes and class features is now based on constraint solving with XSB Prolog. We shifted to this methodology because ObjDigger's non-declarative approach would often reach a conclusion about class relationships based on incomplete information. The crux of the problem was that early speculative decisions in ObjDigger's analysis affected subsequent choices, with no way to revisit or revise them. What we needed was a way to refine results as more information became available; Prolog meets this need. Rather than making decisions sequentially, OOAnalyzer accumulates context-free facts that are exported to Prolog for higher-level semantic analysis. When a line of reasoning doesn't work out, Prolog backtracks and searches for a different solution. We will cover this approach in more depth in a future post, including our reasons (such as performance) for selecting XSB Prolog.

Read the complete post.

Best Practices for Network Border Protection
by Rachel Kartch on May 15, 2017

When it comes to network traffic, it's important to establish a filtering process that identifies and blocks potential cyberattacks, such as worms spreading ransomware and intruders exploiting vulnerabilities, while permitting the flow of legitimate traffic. This post, the latest in a series on best practices for network security, explores best practices for network border protection at the Internet router and firewall.

The philosophy of many network administrators is "let your routers route." In a larger network with specialized devices, you want to let the Internet router do its primary job, which is to pass traffic. That said, there are exceptions where traffic or security issues will require some basic blocking on the Internet router.

Read the complete post.

Five Perspectives on Scaling Agile
by Will Hayes on February 20, 2017

The prevalence of Agile methods in the software industry today is obvious. All major defense contractors in the market can tell you about their approaches to implementing the values and principles found in the Agile Manifesto. Published frameworks and methodologies are rapidly maturing, and a wave of associated terminology is part of the modern lexicon. We are seeing consultants feuding on Internet forums as well, with each one claiming to have the "true" answer for what Agile is and how to make it work in your organization. The challenge now is to scale Agile to work in complex settings with larger teams, larger systems, longer timelines, diverse operating environments, and multiple engineering disciplines. We recently explored the issues surrounding scaling Agile within the Department of Defense (DoD). This blog post, an excerpt of our recently published technical note Scaling Agile Methods for Department of Defense Programs, presents five perspectives on scaling Agile from leading thinkers in the field including Scott Ambler, Steve Messenger, Craig Larman, Jeff Sutherland, and Dean Leffingwell.

Many people understand Agile concepts through the illustrations offered by widely adopted methods such as Scrum. These team-focused development processes embody patterns of Agile behavior and offer concrete implementation examples. If you want to achieve success with Agile methods in large-scale development efforts, you might be tempted to view the challenge as simply a matter of tailoring Scrum to work with larger groups of people. What we are learning from the experiences of major DoD programs is that this view oversimplifies the real work to do.

This blog post presents attributes that we have found significant in successfully applying Agile methods in DoD programs. These attributes deserve attention as organizations architect the way their programs will implement Agile processes.

Read the complete post.

Best Practices for NTP Services
by Timur Snoke on April 3, 2017

Computer scientist David L. Mills created NTP in the early 1980s to synchronize computer clocks to a standard time reference. Since the inception of NTP, a group of volunteers with the NTP pool project have maintained a large, publicly available "virtual cluster of timeservers providing reliable easy to use NTP service for millions of clients" around the world for many Linux distributions and network appliances.

NTP works in a hierarchical fashion, passing time from one stratum to another. For example, Stratum 0 is the reference-clock level and comprises the most accurate, highest precision time sources (e.g., atomic clocks, GPS clocks, and radio clocks). Stratum 1 servers take their time from Stratum 0 sources, and so on up to Stratum 15; Stratum 16 clocks are not synchronized to any source. The time on a client is established through an exchange of packets with one or more stratum servers. Each message carries timestamps, and the time taken to transmit the messages is a factor in the algorithm that establishes the consensus time for the client.
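Concretely, a standard NTP exchange records four timestamps--client send (t1), server receive (t2), server send (t3), and client receive (t4)--from which the client estimates its clock offset and the round-trip network delay. A sketch of that calculation in Python:

```python
def ntp_offset_delay(t1: float, t2: float, t3: float, t4: float):
    """t1: client send, t2: server receive, t3: server send, t4: client receive.

    Returns (offset, delay): the estimated offset of the client's clock
    relative to the server's, and the round-trip network delay.
    """
    offset = ((t2 - t1) + (t3 - t4)) / 2.0
    delay = (t4 - t1) - (t3 - t2)
    return offset, delay

# Example: the client clock runs 5 s slow, with 0.1 s of latency each way.
offset, delay = ntp_offset_delay(t1=100.0, t2=105.1, t3=105.2, t4=100.3)
print(round(offset, 6), round(delay, 6))  # 5.0 0.2
```

Averaging the two one-way differences cancels the network delay when it is symmetric, which is why asymmetric routes are a classic source of NTP error.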

NTP can provide an accurate time source through consensus with multiple input servers. It can also identify which available time servers are inaccurate. One challenge is that NTP was built during a time when the Internet community was friendlier. During NTP's inception, NTP servers weren't concerned with user verification. In the early 1980s, it was common for NTP servers to be publicly available as a resource developers could use to troubleshoot and confirm their own NTP solution.

Read the complete post.

Six Best Practices for Securing a Robust Domain Name System (DNS) Infrastructure
by Mark Langston on February 6, 2017

The Domain Name System (DNS) is an essential component of the Internet, a virtual phone book of names and numbers, but we rarely think about it until something goes wrong. As evidenced by the recent distributed denial of service (DDoS) attack against Internet performance management company Dyn, which temporarily wiped out access to websites including Amazon, Paypal, Reddit, and the New York Times for millions of users down the Eastern Seaboard and Europe, DNS serves as the foundation for the security and operation of internal and external network applications. DNS also serves as the backbone for other services critical to organizations, including email, external web access, file sharing, and voice over IP (VoIP). There are steps, however, that network administrators can take to ensure the security and resilience of their DNS infrastructure and avoid security pitfalls. This blog post outlines six best practices for designing a secure, reliable infrastructure and presents an example of a resilient organizational DNS.

Researchers on CERT's Network Situational Awareness Team (see the recent post Distributed Denial of Service Attacks: Four Best Practices for Prevention and Response) often work with DNS data as a means to enhance analysts' ability to rapidly respond to emerging threats and discover hidden dangers in their infrastructure.

As organizations migrate, in whole or in part, to cloud-based services, traditional on-site computing infrastructure is relocated to remote sites typically under control of a third party. Too often in the wake of an attack, the victim organization finds itself unable to reliably resolve the information needed to access that remote infrastructure, leaving it unable to conduct business.

Read the complete post.

Machine Learning in Cybersecurity
by Eliezer Kanal on June 5, 2017

The year 2016 witnessed advancements in artificial intelligence in self-driving cars, language translation, and big data. That same time period, however, also witnessed the rise of ransomware, botnets, and attack vectors as popular forms of malware attack, with cybercriminals continually expanding their methods of attack (e.g., attached scripts to phishing emails and randomization), according to Malwarebytes' State of Malware report. To complement the skills and capacities of human analysts, organizations are turning to machine learning (ML) in hopes of providing a more forceful deterrent. ABI Research forecasts that "machine learning in cybersecurity will boost big data, intelligence, and analytics spending to $96 billion by 2021." At the SEI, machine learning has played a critical role across several technologies and practices that we have developed to reduce the opportunity for and limit the damage of cyber attacks. This post--the first in a series highlighting the application of machine learning across several research projects--introduces the concept of machine learning, explains how machine learning is applied in practice, and touches on its application to cybersecurity.

Machine learning refers to systems that are able to automatically improve with experience. Traditionally, no matter how many times you use software to perform the same exact task, the software won't get any smarter. Always launch your browser and visit the same exact website? A traditional browser won't learn that it should probably just bring you there by itself when first launched. With ML, software can learn from previous observations to make inferences about future behavior and to anticipate what you want to do in new scenarios. From thermostats that optimize heating around your daily schedule, to autonomous vehicles that customize your ride to your location, to advertising agencies seeking to keep ads relevant to individual users, ML has found a niche in all aspects of our daily life.
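A toy example of "improving with experience" is a nearest-neighbor classifier, whose predictions are driven entirely by the observations it has accumulated. The feature names below are our own hypothetical illustration (not drawn from any SEI project), loosely suggesting how labeled network traffic could be classified:

```python
import math

class NearestNeighbor:
    """Toy 1-nearest-neighbor classifier: its 'experience' is stored examples."""

    def __init__(self):
        self.examples = []  # list of (feature_vector, label) pairs

    def observe(self, features, label):
        self.examples.append((features, label))

    def predict(self, features):
        # Classify by the most similar previously seen example.
        _, label = min(
            self.examples,
            key=lambda ex: math.dist(ex[0], features),
        )
        return label

# Hypothetical features: (requests per second, average payload size in KB)
model = NearestNeighbor()
model.observe((2.0, 1.1), "benign")
model.observe((95.0, 0.2), "malicious")

print(model.predict((3.5, 0.9)))   # benign
print(model.predict((80.0, 0.3)))  # malicious
```

Every new labeled observation sharpens future predictions, which is the essential contrast with traditional software that behaves identically no matter how often it runs.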

Read the complete post.

Ransomware: Best Practices for Prevention and Response
by Alexander Volynkin, Jose Morales, and Angela Horneman on May 31, 2017

On May 12, 2017, in the course of a day, the WannaCry ransomware attack infected nearly a quarter million computers. WannaCry is the latest in a growing number of ransomware attacks where, instead of stealing data, cyber criminals hold data hostage and demand a ransom payment. WannaCry was perhaps the largest ransomware attack to date, taking over a wide swath of global computers from FedEx in the United States to the systems that power Britain's healthcare system to systems across Asia, according to the New York Times. This post spells out several best practices for prevention and response to a ransomware attack.

Ransomware, in its most basic form, is self-explanatory. Data is captured, encrypted, and held for ransom until a fee is paid. The two most common forms of ransomware delivery are through email and websites.

Although ransomware has been around in some form or another for decades--the first known attack is believed to have occurred in 1989--it has more recently become the modus operandi of cyber criminals across the globe. Ransomware has been continuously evolving in the past decade, in part due to advances in cryptography. The wide availability of advanced encryption algorithms including RSA and AES ciphers made ransomware more robust. While estimates vary, the number of ransomware attacks continues to rise. The Verizon 2017 Data Breach Investigations Report estimates that (pre-WannaCry) ransomware attacks around the world grew by 50 percent in the last year. Symantec, in a separate report, estimated that the average amount paid by victims had risen to $1,077.

Read the complete post.

Additional Resources

Download the latest publications from SEI researchers at our digital library.
