
Network Traffic Analysis with SiLK: Profiling and Investigating Cyber Threats


Tim Shimeall and Nancy Ott co-authored this post.

Cyber threats are on the rise, making it vitally important to understand what's happening on our computer networks. But the massive amount of network traffic makes this job hard. How can we find evidence of unusual, potentially hostile activity in this deluge of network data?

One way is to use SiLK (System for Internet Level Knowledge), a highly scalable tool suite for capturing and analyzing network flow data. SiLK's data retrieval and analysis tools enable us to spot trends and anomalies that could indicate unfriendly activity. The publication Network Traffic Analysis with SiLK (also known as the Analyst's Handbook) describes analysis workflows and methodologies for investigating network behavior with SiLK.

This post, the latest in a series of posts detailing updates to Network Traffic Analysis with SiLK, highlights recent updates to both the SiLK analysis suite and the Analyst's Handbook, including a new chapter on improving the query and analysis performance of SiLK scripts. As my colleague Geoff Sanders wrote in a previous post, a 2018 revision shifted the focus from individual tools in the SiLK tool suite to the perspective of the network traffic analyst.

The 2019 edition of Network Traffic Analysis with SiLK continues this emphasis on the tradecraft of network traffic analysis. It presents the SiLK tools in a supporting role, executing the analysis, and describes three ways to characterize netflow analysis: Profiling, Reacting, and Exploring.

  • Profiling paints a big-picture view of a network's behavior. From displaying the types and volume of traffic on a network to describing what a particular server receives and transmits, netflow provides the data that profiling processes into information.

  • Reacting is often done during cybersecurity incident response. After an incident is reported, examining the network traffic associated with it provides valuable context that the incident responder can use in resolving the issue.

  • Exploring is more free-form, since the goal is often less well defined. Profiling and Reacting typically use well-known analytics to gather and process network data, resulting in a structured report. Exploring, in contrast, involves looking for the unknown. The analyst may start with a single indicator from an event and use it as a starting point to search for additional information, blazing a trail through the jungle of billions of netflow records to reach the goal.

Network Traffic Analysis with SiLK is organized according to the workflow that we recommend analysts follow to investigate network activity and anomalies. As the rest of this post will show, these updates are focused on the tradecraft of network traffic analysts with the tool set playing the supporting role.

Tuning SiLK for Improved Performance

SiLK users often ask for advice on how to make their analytics run faster. This latest edition of Network Traffic Analysis with SiLK adds a much-requested chapter on improving the performance of SiLK queries. Chapter 9, Tuning SiLK for Improved Performance, discusses techniques to improve the efficiency and performance of SiLK when working with very large datasets. It gives examples of strategies that analysts can use to speed up the execution of their SiLK analytics and measures the resulting performance improvements. This capability supports more responsive and agile analyses of network events.
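
Before applying any of these strategies, it helps to establish a baseline so that improvements can be measured. The fragment below is a minimal, hypothetical example rather than a script from the handbook; the dates, types, and file names are placeholders, and it assumes a configured SiLK repository.

    # Time a baseline data pull before any tuning is applied.
    time rwfilter --start-date=2015/06/01 --end-date=2015/06/07 \
        --type=in,inweb --protocol=6 --pass=baseline.rw

    # Inspect the result; the record count is useful for confirming that a
    # tuned variant of the query returns the same data.
    rwfileinfo baseline.rw

Comparing the wall-clock time and record counts of the baseline and tuned versions confirms that a speedup did not silently change the results.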

We describe the following strategies for improving SiLK performance. All were tested on the sample dataset used in Network Traffic Analysis with SiLK; your results may vary.

  • Spreading the load across processors. SiLK data pulls with the rwfilter command can be parallelized by processor. Instead of making one call to the rwfilter command, make parallel rwfilter calls to concurrently pull flow records by data type or time interval; the operating system allocates each concurrent call to a different processor. Use the Linux wait command to ensure that all parallel calls complete before processing proceeds (see the sketch after this list). Dropping unneeded data types and parallelizing by type sped up execution by approximately 68 percent on our query of the sample dataset, while parallelizing by time interval sped up execution by approximately 50 to 70 percent.
  • Parallelizing processes via threads. The rwfilter command can run in lightweight, parallel threads with the --threads parameter. This parameter parallelizes the input, parsing, and filtering of flow records. Speed improvements for our rwfilter query of the sample dataset ranged from approximately 50 percent to 75 percent (depending on the number of threads), but performance gains flattened out as the number of concurrent processes grew.
  • Using coarse parallelization. Analytics that process very large flow record files can take a long time to run. To speed up execution, first split these large files into pieces with the rwsplit command, then concurrently run rwuniq or other SiLK commands on the split files. For our query of the sample dataset, this strategy boosted performance by about 28 percent.
  • Combining results from concurrent or parallelized data pulls. The SiLK rwuniq, rwstats, and rwcount commands accept input from multiple source files. Use these commands after making concurrent or parallelized rwfilter calls to combine the resulting data files for analysis. Running rwuniq on previously retrieved flow record files from the sample dataset was 86 percent faster than retrieving them from the repository.
  • Forming more efficient queries. Analysts can apply their knowledge of network traffic characteristics to make SiLK data pulls more precise, and therefore more efficient. To reduce the amount of data retrieved from the SiLK repository, build rwfilter calls that only retrieve network flow records with the desired traffic types, network protocols, start times and durations, IP addresses, and other flow attributes. If IPv6 addresses are not needed, limit the rwfilter command to IPv4 addresses. These techniques result in smaller files that only contain records that are necessary for the analysis, speeding up both query and analysis times. In our query of the sample dataset, avoiding multiple rwfilter calls improved speed by 64 to 68 percent, while more precisely defining queries improved speed by roughly 60 percent.
  • Pipelining. Unix pipes can store intermediate SiLK results in memory, avoiding the time-consuming process of writing them to disk. Use pipes to form a tool chain where the output of one SiLK call is piped directly as input to the next SiLK call in the analytic. Our sample pipelined tool chain saw a performance gain of approximately 69 percent over one that wrote intermediate files to disk at every step.
  • Using local temporary files. SiLK commands like rwsort, rwuniq, and rwstats create temporary files while they are executing. Set this temporary file location (generally /tmp, the system temporary partition) to a local directory to avoid the overhead of saving these files to a remote network location. Performance gains will vary depending on your network configuration.
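
To make a few of these strategies concrete, here is a minimal sketch, not the handbook's scripts: the dates, types, fields, and file names are placeholders. It runs two rwfilter pulls in parallel, waits for both, and combines the results in one rwuniq pass, followed by a pipelined variant that avoids intermediate files.

    # Pull inbound non-web and web traffic concurrently; the operating system
    # schedules each background rwfilter call on its own processor.
    rwfilter --start-date=2015/06/01 --end-date=2015/06/07 \
        --type=in --protocol=6 --pass=in.rw &
    rwfilter --start-date=2015/06/01 --end-date=2015/06/07 \
        --type=inweb --protocol=6 --pass=inweb.rw &

    # Wait for both background pulls to finish before processing continues.
    wait

    # rwuniq accepts multiple input files, so the partial results are combined
    # in a single pass instead of being merged by hand.
    rwuniq --fields=sIP --values=records,bytes in.rw inweb.rw > top-talkers.txt

    # Pipelined alternative: stream the records straight into rwuniq and skip
    # writing intermediate files to disk. The --threads parameter (in releases
    # that support it) additionally parallelizes the single rwfilter call.
    rwfilter --threads=4 --start-date=2015/06/01 --end-date=2015/06/07 \
        --type=in,inweb --protocol=6 --pass=stdout \
      | rwuniq --fields=sIP --values=records,bytes > top-talkers.txt

Whether the file-based or pipelined form wins depends on the data volume and on whether later steps reuse the intermediate files.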

Information Exposure and Data Exfiltration

We listened to our user community and realized the need for additional case studies about common cyber threats. A new case study in Chapter 7 of our revised Analyst's Handbook discusses how to perform an exploratory analysis that investigates information exposure by examining use of the Internet Control Message Protocol (ICMP). Specifically, it shows how to identify both uncommon ICMP types and common ICMP types that are being used in an unusual way. This capability helps network security analysts determine whether network data is being inadvertently exposed.

Each stage of an exploratory analysis case is posed as a question. The case studies build upon the answers to these questions to investigate unusual network traffic and reveal changes in network behavior.

The steps presented in this example follow a logical progression from conceptual to final product.

  1. Prepare a model of the network. Here we have the advantage of working with a dataset and a map of the network. In real life, the analyst may have to build the map, and then the model, from whatever information is provided.
  2. Build a Port-Protocol Prefix Map. Here the Prefix Map (pmap) is used very effectively to reduce coding and improve documentation of results.
  3. Pull the necessary flow records. Here we demonstrate the techniques for efficiently reading the records we want from the repository (a minimal sketch of this and the following step appears after this list).
  4. Find anomalies. Here we use the pmap to display the ICMP type and code textual description. From this we can identify anomalies.
  5. Explore Attributes. Here we use the history of ICMP standards to separate out valid, obsolete, uncommon and unregistered ICMP type/code combinations. From this we can find traffic that merely violates policy (e.g., who can respond to a ping) as well as traffic that might be exfiltrating data through an ICMP covert channel.
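
A minimal sketch of steps 3 and 4 might look like the following; it is illustrative rather than the handbook's analytic. The dates are placeholders, and the iType and iCode field names assume a reasonably recent SiLK release (older releases encode ICMP type and code in the destination port field).

    # Step 3: pull only ICMP records (protocol 1) for the period of interest.
    rwfilter --start-date=2015/06/17 --end-date=2015/06/17 \
        --type=in,out --protocol=1 --pass=icmp.rw

    # Step 4: count flows and bytes by ICMP type and code; rare combinations
    # and unexpectedly large byte counts become the candidates for step 5.
    rwuniq --fields=iType,iCode --values=records,bytes icmp.rw

The handbook's version goes further, using the prefix map from step 2 to label each type/code combination with its textual description.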

Aggregate Bags

Aggregate bags (also known as aggbags), an extension of SiLK bags, are the most complex in a progression of SiLK-related data structures. Like bags, aggregate bags store key-value pairs. However, a bag stores only simple pairs in which the key is a single field and the value is a single count. Aggregate bags store composite key-value pairs: both the key and the value can be composed of one or more fields.

An aggregate bag stores the results of complex operations in a single binary file, as opposed to multiple bag files. This structure makes SiLK analytics simpler and more efficient. For example, you can store a composite of IP address and protocol as the key in a single aggregate bag; the value could be counts of records and bytes for each IP/protocol combination found in the provided flow records. Without aggregate bags, you would need to create multiple bag files to store these composite values, making your analytic more complicated to implement. A short sketch of this example appears after the command list below.

Network Traffic Analysis with SiLK now describes how to use the following aggregate bag commands:

  • rwaggbag. Reads SiLK flow records and builds a binary Aggregate Bag containing composite key-count pairs.
  • rwaggbagbuild. Creates a binary Aggregate Bag file from text input.
  • rwaggbagcat. Prints binary Aggregate Bag files as text.
  • rwaggbagtool. Performs operations (e.g., addition, subtraction) on binary Aggregate Bag files and produces a new Aggregate Bag file.

These commands are supported in SiLK version 3.15 and later.
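
As a rough illustration of the IP-and-protocol example above, the fragment below builds and prints an aggregate bag. The key and counter names shown are assumptions based on common usage; consult the rwaggbag manual page for the exact names your release accepts, and treat the dates and file names as placeholders.

    # Build an aggregate bag keyed on source address and protocol, counting
    # records and bytes for each combination, all in one binary file.
    rwfilter --start-date=2015/06/17 --end-date=2015/06/17 \
        --type=all --protocol=0-255 --pass=stdout \
      | rwaggbag --keys=sIPv4,proto --counters=records,sum-bytes \
            --output-path=addr-proto.aggbag

    # Print the aggregate bag as text for inspection.
    rwaggbagcat addr-proto.aggbag

Without aggregate bags, the same result would require separate bag files for the record counts and the byte counts.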

Wrapping Up and Looking Ahead

Our aim with this work is to help analysts better understand behavior on their networks and identify anomalies. Updates to Network Traffic Analysis with SiLK, including the decision to present the information from an analyst's perspective, are primarily the result of our continued engagement with the user community at recent FloCon conferences and other venues. We will also be looking for feedback at the 2020 FloCon conference, and users can send suggestions for updates anytime to netsa-mentor@cert.org.

Additional Resources

The latest Open Source version of SiLK and selected previous releases are available from http://tools.netsa.cert.org/silk/download.html.

Other tools developed by CERT's NetSA group include the following:

  • Yet Another Flow Sensor (YAF) processes packet data into bidirectional flow records that can be used as input to an IPFIX Collecting Process. YAF's output can be used with super_mediator, Pipeline 5, and the SiLK tools.
  • Analysis Pipeline 5.8 is a streaming analysis tool that can process more than just the SiLK flows handled by version 4.x: it can now also process YAF records and raw IPFIX records. It supports all of the analyses available in version 4.x, and a notable enhancement is expanded DNS record processing, including fast flux detection and domain name watchlisting.
  • super_mediator is an IPFIX mediator for use with the YAF and SiLK tools. It collects YAF output data and filters it to various IPFIX collecting processes and/or CSV files. super_mediator can be configured to perform de-duplication of DNS resource records as exported by YAF.

SiLK Tool Suite Quick Reference Guide (QRG)

The SiLK tools have been pre-built and packaged for various versions of Fedora and CentOS/RHEL. These packages are available from the CERT Linux Forensics Tools Repository (LiFTeR) at http://www.cert.org/forensics/repository. Consult http://www.cert.org/forensics/repository/ByPackage/silk.html for the latest version of SiLK provided in LiFTeR.

Read Tim Shimeall's post, Traffic Analysis for Network Security: Two Approaches for Going Beyond Network Flow Data.

Read the SEI Blog Post Best Practices in Network Traffic Analysis: Three Perspectives by Angela Hornemann, Tim Shimeall, and Timur Snoke.
