Category: CERT

This post was co-authored by Nancy Mead.

Cyber threat modeling, the creation of an abstraction of a system to identify possible threats, is a required activity for DoD acquisition. Identifying potential threats to a system, cyber or otherwise, is increasingly important in today's environment. According to a 2015 Government Accountability Office report, the number of information security incidents reported by federal agencies to the U.S. Computer Emergency Readiness Team (US-CERT) increased by 1,121 percent, from 5,503 in fiscal year 2006 to 67,168 in fiscal year 2014. Yet our experience has been that threat modeling is often conducted informally, with few standards. Consequently, important threat scenarios are often overlooked.

Given the dynamic cyber threat environment in which DoD systems operate, we have embarked on research aimed at making cyber threat modeling more rigorous, routine, and automated. This blog post evaluates three popular methods of cyber threat modeling and discusses how this evaluation will help us develop a model that fuses the best qualities of each.

Network flow plays a vital role in the future of network security and analysis. With more devices connecting to the Internet, networks are larger and faster than ever before, so capturing and analyzing full packet capture (pcap) data on a large network is often prohibitively expensive. Cisco developed NetFlow 20 years ago to reduce the amount of information collected about a communication by aggregating packets that share the same IP addresses, transport ports, and protocol (also known as the 5-tuple) into a compact record. This blog post explains why NetFlow is still important in an age in which the common wisdom is that more data is always better. Moreover, NetFlow will become even more important in the next few years as communications become more opaque with the development of new protocols that encrypt payloads by default.
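
To make the 5-tuple aggregation concrete, the sketch below collapses individual packets into per-flow records keyed by source and destination IP address, source and destination port, and protocol. It is a minimal illustration of the idea, not Cisco's NetFlow implementation; the Packet and FlowRecord types are hypothetical stand-ins introduced for this example.

```python
# Minimal sketch of 5-tuple flow aggregation (illustrative only; not Cisco's
# NetFlow implementation). Packet and FlowRecord are hypothetical types
# introduced for this example.
from collections import namedtuple
from dataclasses import dataclass

Packet = namedtuple("Packet", "src_ip dst_ip src_port dst_port protocol size")

@dataclass
class FlowRecord:
    packets: int = 0
    bytes: int = 0

def aggregate(packets):
    """Collapse individual packets into one record per 5-tuple."""
    flows = {}
    for p in packets:
        key = (p.src_ip, p.dst_ip, p.src_port, p.dst_port, p.protocol)
        rec = flows.setdefault(key, FlowRecord())
        rec.packets += 1
        rec.bytes += p.size
    return flows

# Three packets from the same TCP conversation collapse into a single flow record.
traffic = [
    Packet("10.0.0.5", "192.0.2.10", 51234, 443, "TCP", 1500),
    Packet("10.0.0.5", "192.0.2.10", 51234, 443, "TCP", 1500),
    Packet("10.0.0.5", "192.0.2.10", 51234, 443, "TCP", 600),
]
for key, rec in aggregate(traffic).items():
    print(key, rec.packets, "packets,", rec.bytes, "bytes")
```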

By the close of 2016, "Annual global IP traffic will pass the zettabyte (ZB; 1,000 exabytes [EB]) threshold and will reach 2.3 ZB per year by 2020," according to Cisco's Visual Networking Index. The report further states that in the same time frame smartphone traffic will exceed PC traffic. While capturing and evaluating network traffic enables defenders of large-scale organizational networks to generate security alerts and identify intrusions, operators of even comparatively modest networks struggle to build a full, comprehensive view of network activity. To make wise security decisions, operators need to understand the mission activity on their network and the threats to that activity (referred to as network situational awareness). This blog post examines two different approaches for analyzing network security that use and go beyond network flow data to gain situational awareness and improve security.

Occasionally this blog will highlight different posts from the SEI blogosphere. Today we are highlighting a recent post by Will Dormann, a senior member of the technical staff in the SEI's CERT Division, from the CERT/CC Blog. This post describes a few of the more interesting cases that Dormann has encountered in his work investigating attack vectors for potential vulnerabilities. An attack vector is the method that malicious code uses to propagate itself or infect a computer to deliver a payload or harmful outcome by exploiting system vulnerabilities.

Social engineering involves the manipulation of individuals to get them to unwittingly perform actions that cause harm or increase the probability of causing future harm, which we call "unintentional insider threat." This blog post highlights recent research that aims to add to the body of knowledge about the factors that lead to unintentional insider threat (UIT) and about how organizations in industry and government can protect themselves.

Code clones are implementation patterns transferred from program to program via copy mechanisms, including cut-and-paste, copy-and-paste, and code reuse. As a software engineering practice, code cloning has been the subject of significant debate. In its most basic form, code cloning may involve a codelet (a snippet of code) that undergoes various forms of evolution, such as slight modification in response to problems. Software reuse quickens the production cycle for augmented functions and data structures. So, if a programmer copies a codelet from one file into another with slight augmentations, a new clone is created, stemming from a founder codelet. Events like these constitute the provenance, or historical record, of all events affecting a codelet object. This blog posting describes exploratory research that aims to understand the evolution of source and machine code and, eventually, create a model that can recover relationships between code, files, or executables when their provenance is not known.
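
As a rough illustration of how a founder codelet and a copy-and-paste clone with a slight augmentation remain detectably similar, the sketch below compares the two using Python's standard difflib module. It is only a toy similarity check, not the provenance-recovery model described in the research.

```python
# Illustrative sketch only: a "founder" codelet and a slightly modified clone
# remain detectably similar. difflib is Python's standard sequence-comparison
# module; this is not the provenance model described in the research.
import difflib

founder = """
def mean(values):
    total = 0
    for v in values:
        total += v
    return total / len(values)
"""

# A clone created by copy-and-paste with a slight augmentation
# (a guard for empty input).
clone = """
def mean(values):
    if not values:
        return 0.0
    total = 0
    for v in values:
        total += v
    return total / len(values)
"""

ratio = difflib.SequenceMatcher(None, founder, clone).ratio()
print(f"similarity between founder and clone: {ratio:.2f}")  # close to 1.0
```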

As part of our mission to advance the practice of software engineering and cybersecurity through research and technology transition, our work focuses on ensuring that software-reliant systems are developed and operated with predictable and improved quality, schedule, and cost. To achieve this mission, the SEI conducts research and development activities involving the Department of Defense (DoD), federal agencies, industry, and academia. As we look back on 2013, this blog posting highlights our many R&D accomplishments during the past year.

Hacking the CERT FOE

Occasionally this blog will highlight different posts from the SEI blogosphere. Today we are highlighting a recent post by Will Dormann, a senior member of the technical staff in the SEI's CERT Division, from the CERT/CC Blog. In this post, Dormann describes how he modifies the CERT Failure Observation Engine (FOE) when he encounters apps that "don't play well" with the FOE. The FOE is a software testing tool that finds defects in applications running on the Windows platform.

In early 2012, a backdoor Trojan malware named Flame was discovered in the wild. When fully deployed, Flame proved very hard for malware researchers to analyze. In December of that year, Wired magazine reported that before Flame had been unleashed, samples of the malware had been lurking, undiscovered, in repositories for at least two years. As Wired also reported, this was not an isolated event. Every day, major anti-virus companies and research organizations are inundated with new malware samples.

Analyzing Routing Tables

Occasionally this blog will highlight different posts from the SEI blogosphere. Today we are highlighting a post from the CERT/CC Blog by Timur Snoke, a member of the technical staff in the SEI's CERT Division. This post describes maps that Timur has developed using Border Gateway Protocol (BGP) routing tables to show the evolution of public-facing autonomous system numbers (ASNs). These maps help analysts inspect the BGP routing tables to reveal disruptions to an organization's infrastructure. They also help analysts glean geopolitical information about an organization, country, or city-state and identify how and when network traffic is subverted onto nefarious alternative paths that deliberately place communications at risk.

Risk inherent in any military, government, or industry network system cannot be completely eliminated, but it can be reduced by implementing certain network controls. These controls include administrative, management, technical, and legal methods. Decisions about which controls to implement often rely on computed-risk models that mathematically calculate the amount of risk inherent in a given network configuration. These computed-risk models, however, may not reflect the risk levels that human decision makers actually perceive.
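
As a point of reference for what a computed-risk model calculates, the sketch below uses the common textbook expected-loss formulation (risk as the sum of likelihood times impact over identified threats). The threat list and numbers are invented for illustration; the actual models and controls studied in the post may differ.

```python
# Generic expected-loss computation (likelihood x impact, summed over threats).
# This is a common textbook formulation shown only to illustrate what a
# "computed-risk model" might calculate; it is not the model studied in the post.
threats = [
    # (name, annual likelihood, impact in dollars if realized) -- invented values
    ("phishing-led credential theft", 0.30, 250_000),
    ("unpatched server exploited",    0.10, 500_000),
    ("lost or stolen laptop",         0.20,  50_000),
]

computed_risk = sum(likelihood * impact for _, likelihood, impact in threats)
print(f"computed annualized risk: ${computed_risk:,.0f}")
```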

Knowing what assets are on a network, particularly which assets are visible to outsiders, is an important step in achieving network situational awareness. This awareness is particularly important for large, enterprise-class networks, such as those of telephone, mobile, and internet providers. These providers find it hard to track hosts, servers, data sets, and other vulnerable assets in the network.

In previous blog posts, I have written about applying similarity measures to malicious code to identify related files and reduce analysis expense. Another way to observe similarity in malicious code is to leverage analyst insights by identifying files that possess some property in common with a particular file of interest. One way to do this is by using YARA, an open-source project that helps researchers identify and classify malware. YARA has gained enormous popularity in recent years as a way for malware researchers and network defenders to communicate their knowledge about malicious files, from identifiers for specific families to signatures capturing common tools, techniques, and procedures (TTPs). This blog post provides guidelines for using YARA effectively, focusing on selection of objective criteria derived from malware, the type of criteria most useful in identifying related malware (including strings, resources, and functions), and guidelines for creating YARA signatures using these criteria.
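
As a small example of putting a YARA rule to work, the sketch below compiles a toy rule and scans an in-memory buffer, assuming the open-source yara-python bindings are installed. The rule's strings and the sample buffer are invented for illustration and are not real signatures or malware.

```python
# Sketch of applying a simple YARA rule with the open-source yara-python
# bindings (assumed installed, e.g., via `pip install yara-python`). The rule
# below is a toy example keyed on a string and a PDB path, not a real signature.
import yara

RULE = r'''
rule ExampleFamily
{
    strings:
        $s1 = "connect_to_c2" ascii
        $s2 = "C:\\build\\dropper\\release\\dropper.pdb" nocase
    condition:
        any of them
}
'''

rules = yara.compile(source=RULE)

# Scan an in-memory buffer (a file path could be passed via filepath=... instead).
sample = b"...binary content...connect_to_c2...more content..."
for match in rules.match(data=sample):
    print("matched rule:", match.rule)
```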

By analyzing vulnerability reports for the C, C++, Perl, and Java programming languages, the CERT Secure Coding Team observed that a relatively small number of programming errors leads to most vulnerabilities. Our research focuses on identifying insecure coding practices and developing secure alternatives that software programmers can use to reduce or eliminate vulnerabilities before software is deployed. In a previous post, I described our work to identify vulnerabilities that informed the revision of the International Organization for Standardization (ISO) and International Electrotechnical Commission (IEC) standard for the C programming language. The CERT Secure Coding Team has also been working on the CERT C Secure Coding Standard, which contains a set of rules and guidelines to help developers code securely. This posting describes our latest set of rules and recommendations, which aims to help developers avoid undefined and/or unexpected behavior in deployed code.

Since 2001, researchers at the CERT Insider Threat Center have documented malicious insider activity by examining media reports and court transcripts and conducting interviews with the United States Secret Service, victims' organizations, and convicted felons. Among the more than 700 insider threat cases that we've documented, our analysis has identified more than 100 categories of weaknesses in systems, processes, people, or technologies that allowed insider threats to occur. One aspect of our research has focused on identifying enterprise architecture patterns that protect an organization's systems from malicious insider threats.

As part of an ongoing effort to keep you informed about our latest work, I'd like to let you know about some recently published SEI technical reports and notes. These reports highlight the latest work of SEI technologists in workforce competency and readiness, cyber forensics, exploratory research, acquisition, and software-reliant systems. This post includes a listing of each report, author(s), and links where the published reports can be accessed on the SEI website.

As security specialists, we are often asked to audit software and provide expertise on secure coding practices. Our research and efforts have produced several coding standards specifically dealing with security in popular programming languages, such as C, Java, and C++. This posting describes our work on the CERT Perl Secure Coding Standard, which provides a core of well-documented and enforceable coding rules and recommendations for Perl, a popular scripting language.

Happy Memorial Day. As part of an ongoing effort to keep you informed about our latest work, I'd like to let you know about some recently published SEI technical reports and notes. These reports highlight the latest work of SEI technologists in architecture analysis, patterns for insider threat monitoring, source code analysis, and an insider threat security reference architecture. This post includes a listing of each report, author(s), and links where the published reports can be accessed on the SEI website.

According to the 2011 CyberSecurity Watch Survey, approximately 21 percent of cyber crimes against organizations are committed by insiders. Of the 607 organizations participating in the survey, 46 percent stated that the damage caused by insiders was more significant than the damage caused by outsiders. Over the past 11 years, CERT Insider Threat researchers have collected incidents of malicious insider activity from a number of sources, including media reports, the courts, the United States Secret Service, victim organizations, and interviews with convicted felons.

Malware, which is short for "malicious software," is a growing problem for government and commercial organizations because it disrupts or denies important operations, gathers private information without consent, gains unauthorized access to system resources, and exhibits other inappropriate behaviors. A previous blog post described the use of "fuzzy hashing" to determine whether two files suspected of being malware are similar, which helps analysts potentially save time by identifying opportunities to leverage previous analysis of malware when confronted with a new attack. This posting continues our coverage of fuzzy hashing by discussing the types of malware against which similarity measures of any kind (including fuzzy hashing) may be applied.

The SEI has devoted extensive time and effort to defining meaningful metrics and measures for software quality, software security, information security, and continuity of operations. The ability of organizations to measure and track the impact of changes, as well as changes in trends over time, is an important tool for effectively managing operational resilience, which is the measure of an organization's ability to perform its mission in the presence of operational stress and disruption. For any organization, whether a Department of Defense (DoD) organization, a federal civilian agency, or an industry firm, the ability to protect and sustain essential assets and services is critical and can help ensure a return to normalcy when the disruption or stress is eliminated. This blog posting describes our research to help organizational leaders manage critical services in the presence of disruption by presenting objectives and strategic measures for operational resilience, as well as tools to help them select and define those measures.

Malware, generically defined as software designed to access a computer system without the owner's informed consent, is a growing problem for government and commercial organizations. In recent years, research into malware has focused on similarity metrics to decide whether two suspected malicious files are similar to one another. Analysts use these metrics to determine whether a suspected malicious file bears any resemblance to already verified malicious files. Using these metrics allows analysts to potentially save time by identifying opportunities to leverage previous analysis. This post describes our efforts to develop a technique (known as fuzzy hashing) to help analysts determine whether two pieces of suspected malware are similar.
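
To convey the intuition behind similarity scoring, the sketch below computes a graded resemblance score between two byte sequences using Jaccard similarity over byte n-grams. This is a deliberately simplified stand-in, not the fuzzy hashing technique (e.g., context-triggered piecewise hashing) discussed in the post; the byte strings are invented for illustration.

```python
# A much-simplified illustration of similarity scoring between two byte
# sequences. Real fuzzy hashing (e.g., ssdeep-style context-triggered
# piecewise hashing) is more sophisticated; this Jaccard-over-n-grams sketch
# only conveys the idea of a graded resemblance score rather than an
# exact-match hash.
def ngrams(data: bytes, n: int = 4) -> set:
    """Return the set of n-byte substrings of data."""
    return {data[i:i + n] for i in range(len(data) - n + 1)}

def similarity(a: bytes, b: bytes) -> float:
    """Jaccard similarity of the two files' n-gram sets, in [0, 1]."""
    ga, gb = ngrams(a), ngrams(b)
    if not ga and not gb:
        return 1.0
    return len(ga & gb) / len(ga | gb)

# A verified sample and a suspected variant that differs only slightly score
# much higher than two unrelated byte sequences.
known     = b"MZ...unpack_payload...contact 203.0.113.7 on port 8080..."
variant   = b"MZ...unpack_payload...contact 203.0.113.9 on port 8080..."
unrelated = b"%PDF-1.4 completely different file contents here........."

print(f"known vs. variant:   {similarity(known, variant):.2f}")
print(f"known vs. unrelated: {similarity(known, unrelated):.2f}")
```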