Archive: 2018-01

In a previous post, I discussed the Pharos Binary Analysis Framework and tools to support reverse engineering of binaries with a focus on malicious code analysis. Recall that Pharos is a framework created by our CERT team that builds upon the ROSE compiler infrastructure developed by Lawrence Livermore National Laboratory. ROSE provides a number of facilities for binary analysis including disassembly, control flow analysis, instruction semantics, and more. Pharos uses these features to automate common reverse engineering tasks.

Since our last post, we have developed new techniques and tools in the Pharos framework to solve a problem that may be of interest to reverse engineers and malware analysts. Specifically, this post describes techniques and tools we have developed to identify the specific program inputs and environments needed to reach an execution of interest to an analyst, a problem we call path finding.
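To give a flavor of the idea (this is an illustrative sketch, not the Pharos implementation), path finding can be framed as constraint solving: encode the branch conditions that guard the execution of interest, then ask an SMT solver for a concrete input that satisfies them. The snippet below uses the Z3 Python bindings with a hypothetical branch condition:

```python
# Illustrative sketch only -- not the Pharos implementation. It shows the
# core idea of path finding: encode the branch conditions guarding an
# execution of interest as constraints, then ask an SMT solver (here, Z3)
# for a concrete input that satisfies them.
from z3 import BitVec, Solver, sat

# Suppose analysis of a binary shows the guarded code is reached only when:
#   if ((((x * 3) ^ 0x5A) & 0xFF) == 0x24) { target(); }   // hypothetical
x = BitVec("x", 32)          # symbolic stand-in for the program input
s = Solver()
s.add((((x * 3) ^ 0x5A) & 0xFF) == 0x24)   # the path condition

if s.check() == sat:
    print("Input reaching the target:", s.model()[x])
else:
    print("Target is unreachable under this path condition")
```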

Almost all software systems today face a variety of threats, and the number of threats grows as technology changes. Malware that exploits software vulnerabilities grew 151 percent in the second quarter of 2018, and cyber-crime damage costs are estimated to reach $6 trillion annually by 2021. Threats can come from outside or within organizations, and they can have devastating consequences. Attacks can disable systems entirely or lead to the leaking of sensitive information, which would diminish consumer trust in the system provider. To prevent threats from taking advantage of system flaws, administrators can use threat-modeling methods to inform defensive measures. In this blog post, I summarize 12 available threat-modeling methods.

Today, organizations build applications on top of existing platforms, frameworks, components, and tools; no one constructs software from scratch. Hence today's software development paradigm challenges developers to build trusted systems that include increasing numbers of largely untrusted components. Bad decisions are easy to make and have significant long-term consequences. For example, decisions based on outdated knowledge or documentation, or skewed toward one criterion (such as performance), may lead to substantial quality problems, security risks, and technical debt over the life of the project. But there is typically a tradeoff between decision-making speed and confidence. Confidence increases with more experience, testing, analysis, and prototyping, but these are all costly and time consuming. In this blog post, I describe research that aims to increase both speed and confidence by applying existing automated-analysis techniques and tools (e.g., code and software-project repository analyses) and mapping the extracted information to common quality indicators from DoD projects.

Statistics and machine learning often use different terminology for similar concepts. I recently confronted this when I began reading about maximum causal entropy as part of a project on inverse reinforcement learning. Many of the terms were unfamiliar to me, but as I read more closely, I realized that the concepts had close relationships with statistics concepts. This blog post presents a table of connections between terms that are standard in statistics and their related counterparts in machine learning.

Earlier this year, a team of researchers from the SEI CERT Division's Network Situational Awareness Team (CERT NetSA) released an update (3.17.0) to the System for Internet-Level Knowledge (SiLK) traffic analysis suite. SiLK supports the efficient collection, storage, and analysis of network flow data, enabling network security analysts to query large historical traffic data sets rapidly and scalably. As this post describes, our team also recently updated the Network Traffic Analysis with SiLK handbook to make it more analyst-focused and to teach not only the toolset but also the tradecraft around using it.
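For readers new to the suite, a typical SiLK analysis is a pipeline of command-line tools. The sketch below drives one such pipeline from Python; rwfilter and rwcut are standard SiLK tools, but treat the exact date range, protocol, and field list as illustrative placeholders:

```python
# A sketch of a typical SiLK pipeline, driven from Python. rwfilter selects
# flow records and rwcut prints chosen fields; the query parameters here
# are illustrative placeholders.
import subprocess

rwfilter = subprocess.Popen(
    ["rwfilter", "--start-date=2018/01/01", "--end-date=2018/01/01",
     "--proto=6", "--pass=stdout"],          # TCP flow records for one day
    stdout=subprocess.PIPE)
rwcut = subprocess.run(
    ["rwcut", "--fields=sIP,dIP,sPort,dPort,bytes"],
    stdin=rwfilter.stdout, capture_output=True, text=True)
rwfilter.stdout.close()
rwfilter.wait()
print(rwcut.stdout)
```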

Software developers are increasingly pressured to rapidly deliver cutting-edge software at an affordable cost. An increasingly important software attribute is security, meaning that the software must be resistant to malicious attacks. Software becomes vulnerable when one or more weaknesses can be exploited by an attacker to modify or access data, interrupt proper execution, or perform incorrect actions.

This post was co-authored by Robert Nord.

Technical debt communicates the tradeoff between the short-term benefits of rapid delivery and the long-term value of developing a software system that is easy to evolve, modify, repair, and sustain. Like financial debt, technical debt can be a burden or an investment. It can be a burden when it is taken on unintentionally without a solid plan to manage it; it can also be part of an intentional investment strategy that speeds up development, as long as there is a plan to pay back the debt before the interest swamps the principal.
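The interest-versus-principal framing can be made concrete with a toy calculation (all numbers hypothetical): if the debt costs a fixed amount of extra effort per release, there is a release after which deferring repayment has cost more than repaying up front.

```python
# A toy illustration of the interest-vs-principal framing, with made-up
# numbers. "Principal" is the one-time cost to pay the debt down now;
# "interest" is the recurring extra effort each release incurs while the
# debt remains in place.
principal = 40            # hours to refactor today (hypothetical)
interest_per_release = 6  # extra hours of friction per release (hypothetical)

paid = 0
for release in range(1, 13):
    paid += interest_per_release
    if paid > principal:
        print(f"By release {release}, cumulative interest ({paid}h) exceeds "
              f"the principal ({principal}h): paying the debt earlier was cheaper.")
        break
```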

If you're considering migrating to IPv6, you may be asking, "Am I ready?" That's a good question to ask, but you also have to ask, "Is my ISP ready?" If your Internet service provider (ISP) isn't ready for an IPv6 migration, you may have external web sites that won't load, problems receiving email, and many other issues. This post is the latest in a series examining issues, challenges, and best practices when transitioning from IPv4 to IPv6, whether at the enterprise level, the organizational level, or the home-user level. In this post, I present some points to help you know if your ISP can support your IPv6 ambitions.
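Before digging into your ISP's plans, a quick self-check is useful. The Python sketch below (an illustrative test, with an example hostname) asks whether this host can resolve and reach a well-known site over IPv6; failure suggests your ISP or local network is not yet ready:

```python
# A quick, hedged self-check: can this host resolve and reach a well-known
# IPv6-enabled site over IPv6? Failure suggests your ISP (or local network)
# is not ready. The hostname is just an example endpoint.
import socket

def ipv6_reachable(host="www.google.com", port=443, timeout=5):
    if not socket.has_ipv6:
        return False
    try:
        infos = socket.getaddrinfo(host, port, socket.AF_INET6, socket.SOCK_STREAM)
        family, socktype, proto, _, sockaddr = infos[0]
        with socket.socket(family, socktype, proto) as s:
            s.settimeout(timeout)
            s.connect(sockaddr)
        return True
    except OSError:
        return False

print("IPv6 looks usable" if ipv6_reachable() else "IPv6 not reachable from here")
```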

This post is also co-authored by Douglas C. Schmidt and William Scherlis.

In its effort to increase the capability of the warfighter, the Department of Defense (DoD) has made incremental changes in its acquisition practices for building and deploying military capacity. This capacity can be viewed as "platforms" (tanks, ships, aircraft, etc.) and the mission system "payloads" (sensors, command and control, weapons, etc.) that are populated onto those platforms to deliver the desired capability. This blog post, the first in a series excerpted from a recently published paper, explores opportunities in modularity and open systems architectures with the aim of helping the DoD deliver higher quality software to the warfighter with far greater innovation in less time.

This post is also authored by Tim Shimeall and Timur Snoke.

In July of this year, a major overseas shipping company had its U.S. operations disrupted by a ransomware attack, one of the latest attacks to disrupt the daily operation of a major, multi-national organization. Computer networks are complex, often tightly coupled systems; operators of such systems need to maintain awareness of the system status or disruptions will occur. In today's operational climate, threats and attacks against network infrastructures have become far too common. At the SEI's CERT Division Situational Awareness team, we work with organizations and large enterprises, many of whom analyze their network traffic data for ongoing status, attacks, or potential attacks. Through this work we have observed both challenges and best practices as these network traffic analysts analyze incoming contacts to the network, including packets, traces, or flows. In this post, the latest in our series highlighting best practices in network security, we present common questions and answers that we have encountered about the challenges and best practices in network traffic analysis.

A software product line is a collection of related products, with shared software artifacts and engineering services, developed by a single organization and intended to serve different missions and different customers. In industry, product lines provide both customer benefits (such as functionality, quality, and cost) and development organization benefits (such as time to market and price-margin). Moreover, these benefits last through multiple generations of products. This blog is the first in a series of three posts on sustaining product lines in terms of required decisions and potential benefits of proposed approaches. In this post, I identify the potential benefits of a product line and discuss contracting issues in the context of Department of Defense (DoD) programs.

Experience shows that most software contains code flaws that can lead to vulnerabilities. Static analysis tools used to identify potential vulnerabilities in source code produce a large number of alerts with high false-positive rates, and an engineer must painstakingly examine each alert to find legitimate flaws. As described in this blog post, we in the SEI's CERT Division have developed the Source Code Analysis Laboratory (SCALe) tool while researching and prototyping methods to help analysts audit static analysis alerts more efficiently and effectively. In August 2018 we released a version of SCALe to the public (open source via GitHub).

As Soon as Possible

In the first post in this series, I introduced the concept of the Minimum Viable Capability (MVC). While the intent of the Minimum Viable Product (MVP) strategy is to focus on rapidly developing and validating only essential product features, MVC adapts this strategy to systems that are too large, too complex, or too critical for MVP.

MVC is a scalable approach to validating a system of capabilities, each at the earliest possible time. Capability scope is limited (minimum) so that it may be produced as soon as possible. For MVP, as soon as possible is often just a few weeks. But what does as soon as possible mean for an MVC? This post explores how technical dependencies and testability determine that, and what this implies for a system roadmap. Let's start with the pattern of MVC activities to produce a major release.
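To preview how technical dependencies bound "as soon as possible," consider the sketch below (an illustration, not an official MVC algorithm): if capabilities and their dependencies form a directed acyclic graph, a topological sort yields the earliest feasible order in which each capability can be built and validated. The capability names and edges are hypothetical.

```python
# Illustrative sketch: treat capabilities and their technical dependencies
# as a directed acyclic graph; a topological sort then yields the earliest
# feasible order for building and validating each capability.
from graphlib import TopologicalSorter  # Python 3.9+

deps = {                                # capability -> capabilities it needs first
    "navigation": set(),
    "sensor-fusion": {"navigation"},
    "collision-avoidance": {"sensor-fusion"},
    "mission-planning": {"navigation"},
}

order = list(TopologicalSorter(deps).static_order())
print("Earliest feasible build-and-validate order:", " -> ".join(order))
```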

At the 2018 World Economic Forum, global leaders voiced concerns about the growing trend of cyberattacks targeting critical infrastructure and strategic industrial sectors, citing fears of a worst-case scenario that could lead to a breakdown of the systems that keep societies functioning. A painful example was the May 2017 WannaCry ransomware attack in which a worm rapidly spread through a number of computer networks, affecting more than 150 countries and more than 400,000 endpoints.

One of the largest victims of the WannaCry attack was the National Health Service in England and Scotland, where up to 70,000 computers, MRI scanners, and blood-storage refrigerators may have been affected. In this global threat environment, the need for Computer Security Incident Response Teams (CSIRTs) has become ever more critical. CSIRTs are expert teams that use their specialized knowledge and skills to detect and respond to computer security incidents. In the broader internet community, these teams form a "global network" from a diverse group of organizations and sectors, such as critical infrastructure, government, industry, and academia. In this blog post, the first in a series on CSIRTs, I talk about the work of CSIRTs and their importance in the global threat landscape.

Billions of dollars in venture capital, industry investments, and government investments are going into the technology known as blockchain. It is being investigated in domains as diverse as finance, healthcare, defense, and communications. As blockchain technology has become more popular, programming-language security issues have emerged that pose a risk to the adoption of cryptocurrencies and other blockchain applications. In this post, I describe a new programming language, Obsidian, which we at the SEI are developing in partnership with Carnegie Mellon University (CMU) for writing secure smart contracts on blockchain platforms.

This post was co-authored by Cecilia Albert and Harry Levinson.

At the SEI we have been involved in many programs where the intent is to increase the capability of software systems currently in sustainment. We have assisted government agencies that have implemented some innovative contracting and development strategies that provide benefits to those programs. The intent of this post is to explain three approaches that could help others in the DoD or other federal agencies who are trying to add capability to systems currently in sustainment. Software sustainment activities can include correcting known flaws, adding new capabilities, updating existing software to run on new hardware (often due to obsolescence issues), and updating the software infrastructure to make software maintenance easier.

In the first post in this series, I presented 10 types of application security testing (AST) tools and discussed when and how to use them. In this post, I will delve into the decision-making factors to consider when selecting an AST tool and present guidance in the form of lists that can easily be referenced as checklists by those responsible for application security testing.

IPv6 deployment is on the rise. Google reported that as of July 14, 2018, 23.94 percent of users accessed its site via IPv6, up 6.16 percent from the same date in 2017. Drafted in 1998 and an Internet Standard as of July 2017, Internet Protocol version 6 (IPv6) is intended to replace IPv4 in assigning devices on the internet a unique identity. Plans for IPv6 got underway after it was realized that IPv4's cap of 4.3 billion addresses would not be sufficient to cover the number of devices accessing the internet. This blog post is the first in a series aimed at encouraging IPv6 adoption, whether at the enterprise-wide level, the organizational level, or the individual, home-user level.

It's common for large-scale cyber-physical systems (CPS) projects to burn huge amounts of time and money with little to show for it. Because the minimum viable product (MVP) strategy of fast and focused stands in sharp contrast to the inflexible and ponderous product planning that has contributed to those fiascos, MVP has been touted as a useful corrective, and the strategy has become fixed in the constellation of Agile jargon and practices. However, in trying to work out how to scale MVP for large and critical CPS, I found more gaps than fit. This is the first of three blog posts highlighting an alternative strategy that I created, the Minimum Viable Capability (MVC), which scales the essential logic of MVP for CPS. MVC adapts the intent of the MVP strategy--to focus on rapidly developing and validating only essential features--to systems that are too large, too complex, or too critical for MVP.

In recent days, the VPNFilter malware has attracted attention, much of it in the wake of a May 25 public service announcement from the FBI, as well as a number of announcements from vendors and security companies. In this blog post, I examine the VPNFilter malware attack by analyzing the vulnerabilities at play, how they were exploited, and the impact on the Internet. I also outline recommendations for the next generation of small Internet of Things (IoT) device manufacturers, including home routers, which were the target of VPNFilter malware. Because this post also emphasizes the prioritization of vulnerabilities that have significant or large-scale impact, I will recap recommendations made in the March 2017 blog post on the Mirai botnet.

DoD programs continue to experience cost overruns; the inadequacies of cost estimation were cited by the Government Accountability Office (GAO) as one of the top problem areas. A recent SEI blog post by my fellow researcher Robert Stoddard, Why Does Software Cost So Much?, explored SEI work that is aimed at improving estimation and management of the costs of software-intensive systems. In this post, I provide an example of how causal learning might be used to identify specific causal factors that are most responsible for escalating costs.

Runtime assurance (RA) has become a promising technique for ensuring the safe behavior of autonomous systems (such as drones or self-driving vehicles) whose behavior cannot be fully determined at design time. The Department of Defense (DoD) is increasingly focusing on the use of complex, non-deterministic systems to address rising software complexity and the use of machine learning techniques. In this environment, assuring software correctness has become a major challenge, especially in uncertain and contested environments. This post highlights work by a team of SEI researchers to create tools and techniques that will ensure the safety of distributed cyber-physical systems.
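As a minimal illustration of the RA pattern (a Simplex-style sketch, not the SEI tooling): a monitor checks a safety condition at each step and falls back from the complex, unverified controller to a simple, verified one whenever the proposed action would leave the safety envelope. The controllers and bounds below are hypothetical.

```python
# Minimal sketch of the runtime-assurance pattern: a monitor checks a safety
# condition each step and switches from the complex controller to a simple
# safe fallback when the proposed action would violate it. All controllers
# and the safety envelope here are hypothetical.
def complex_controller(state):
    return state + 2.0               # aggressive, unverified behavior

def safe_controller(state):
    return max(state - 1.0, 0.0)     # conservative, verified behavior

def safe(state):
    return state < 10.0              # the safety envelope

state = 0.0
for step in range(10):
    proposed = complex_controller(state)
    state = proposed if safe(proposed) else safe_controller(state)
    print(f"step {step}: state = {state}")
```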

Bugs and weaknesses in software are common: 84 percent of software breaches exploit vulnerabilities at the application layer. The prevalence of software-related problems is a key motivation for using application security testing (AST) tools. With a growing number of application security testing tools available, it can be confusing for information technology (IT) leaders, developers, and engineers to know which tools address which issues. This blog post, the first in a series on application security testing tools, will help to navigate the sea of offerings by categorizing the different types of AST tools available and providing guidance on how and when to use each class of tool.

See the second post in this series, Decision-Making Factors for Selecting Application Security Testing Tools.

As part of an ongoing effort to keep you informed about our latest work, this blog post summarizes some recently published SEI reports, podcasts, and presentations highlighting our work in deep learning, cyber intelligence, interruption costs, digital footprints on social networks, managing privacy and security, and network traffic analysis. These publications highlight the latest work of SEI technologists in these areas. This post includes a listing of each publication, author(s), and links where they can be accessed on the SEI website.

When the rate of change inside an institution becomes slower than the rate of change outside, the end is in sight. - Jack Welch

In a world of agile everything, agile concepts are being applied in areas well beyond software development. At the NDIA Agile in Government Summit held in Washington, D.C. in June, Dr. George Duchak, the Deputy Assistant Secretary of Defense for Cyber, Command & Control, Communications & Networks, and Business Systems, spoke about the importance of agility to organizational success in a volatile, uncertain, complex, and ambiguous world. Dr. Duchak told the crowd that agile software development can't stand alone, but must be woven into the fabric of an organization and become a part of the way an organization's people, processes, systems and data interact to deliver value. The business architecture must be constructed for agility.

I first wrote about agile strategic planning in my March 2012 blog post, Toward Agile Strategic Planning. In this post, I want to expand that discussion to look closer at agile strategy, or short-cycle strategy development and execution, describe what it looks like when implemented, and examine how it supports organizational performance.

Part one of this series of blog posts on the collection and analysis of malware and storage of malware-related data in enterprise systems reviewed practices for collecting malware, storing it, and storing data about it. This second post in the series discusses practices for preparing malware data for analysis and issues related to messaging between big data framework components.
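As a concrete (and hedged) example of the messaging step, the sketch below publishes a malware-metadata record to a topic using the kafka-python client, one common choice for connecting big-data framework components. The broker address, topic name, and record fields are placeholders:

```python
# Hedged sketch of handing malware metadata to downstream analysis
# components via Kafka, using the kafka-python client. Broker address,
# topic name, and record fields are all hypothetical.
import json
from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"))

record = {
    "sha256": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
    "first_seen": "2018-01-15T10:32:00Z",
    "source": "honeypot-7",
}
producer.send("malware-metadata", record)  # hand off to downstream consumers
producer.flush()
```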

Citing the need to provide a technical advantage to the warfighter, the Department of Defense (DoD) has recently made the adoption of cloud computing technologies a priority. Infrastructure as code (IaC), the process and technology of managing and provisioning computers and networks (physical and/or virtual) through scripts, is a key enabler for efficient migration of legacy systems to the cloud. This blog post details research aimed at developing technology that helps software sustainment organizations automatically recover a deployment model from a running system and create the needed IaC artifacts from that model, with minimal manual intervention and no specialized knowledge about the design of the deployed system.
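To make the goal concrete, the sketch below illustrates the final step under assumed formats: given a recovered deployment model (here, a hand-written stand-in for what automated discovery would produce), emit an Ansible-style playbook fragment. The model schema and output shape are illustrative assumptions, not the project's design:

```python
# Illustrative sketch: turn a recovered deployment model (hand-written
# stand-in for discovery output) into an Ansible-style playbook fragment.
# The model schema and output format are assumptions for illustration.
import yaml  # PyYAML

recovered_model = {
    "web-01": {"packages": ["nginx"], "ports": [80, 443]},
    "db-01": {"packages": ["postgresql"], "ports": [5432]},
}

playbook = [{
    "hosts": host,
    "tasks": [{"name": f"install {pkg}",
               "package": {"name": pkg, "state": "present"}}
              for pkg in spec["packages"]],
} for host, spec in recovered_model.items()]

print(yaml.safe_dump(playbook, sort_keys=False))
```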

The growth of big data has affected many fields, including malware analysis. Increased computational power and storage capacities have made it possible for big-data processing systems to handle the increased volume of data being collected. In addition to collecting the malware, new ways of analyzing and visualizing malware have been developed. In this blog post--the first in a series on using a big-data framework for malware collection and analysis--I will review various options and tradeoffs for dealing with malware collection and storage at scale.

As part of an ongoing effort to keep you informed about our latest work, this blog post summarizes some recently published SEI reports, podcasts, and presentations highlighting our work in virtual integration, blockchain programming, Agile DevOps, software innovations, cybersecurity engineering and software assurance, threat modeling, and blacklist ecosystem analysis. These publications highlight the latest work of SEI technologists in these areas. This post includes a listing of each publication, author(s), and links where they can be accessed on the SEI website.

Well-known asymmetries pit cyber criminals with access to cheap, easy-to-use tools against government and industry organizations that must spend more and more to keep information and assets safe. To help reverse this imbalance, the SEI is conducting a study sponsored by the U.S. Office of the Director of National Intelligence to understand cyber intelligence best practices, common challenges, and future technologies; we will publish the findings at the conclusion of the project. Through interviews with U.S.-based organizations from a variety of sectors, we are identifying tools, practices, and resources that help those organizations make informed decisions that protect their information and assets. This blog post describes preliminary findings from the interviews we have conducted so far. Our final report, which will include an anonymized look at the cyber intelligence practices of all the organizations we interviewed, will be released after the conclusion of the study in 2019.

For many DoD missions, our ability to collect information has outpaced our ability to analyze that information. Graph algorithms and large-scale machine learning algorithms are key to analyzing the information agencies collect. They are also an increasingly important component of intelligence analysis, autonomous systems, cyber intelligence and security, logistics optimization, and more. In this blog post, we describe research to develop automated code generation for future-compatible graph libraries: building blocks of high-performance code that can be automatically generated for any future platform.
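One common building-block formulation that such libraries target is expressing graph traversal in linear algebra (the idea behind GraphBLAS-style APIs), since sparse matrix kernels are exactly what gets tuned or generated per platform. The sketch below runs breadth-first search as repeated sparse matrix-vector products over a tiny hypothetical graph:

```python
# Illustrative sketch: BFS as repeated sparse matrix-vector products over an
# adjacency matrix -- one common "building block" view of graph algorithms.
# The tiny graph here is hypothetical.
import numpy as np
from scipy.sparse import csr_matrix

# Directed edges: 0->1, 0->2, 1->3, 2->3
A = csr_matrix(([1, 1, 1, 1], ([0, 0, 1, 2], [1, 2, 3, 3])), shape=(4, 4))

frontier = np.array([1, 0, 0, 0])        # BFS starts at vertex 0
visited = frontier.astype(bool)
level = 0
while frontier.any():
    print(f"level {level}:", np.flatnonzero(frontier).tolist())
    reached = (A.T @ frontier) > 0        # vertices reachable from the frontier
    new = reached & ~visited              # keep only unvisited vertices
    visited |= new
    frontier = new.astype(int)
    level += 1
```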

The size of aerospace software, as measured in source lines of code (SLOC), has grown rapidly. Airbus and Boeing data show that SLOC have doubled every four years. The current generation of aircraft software exceeds 25 million SLOC (MSLOC). These systems must satisfy safety-critical, embedded, real-time, and security requirements. Consequently, they cost significantly more than general-purpose systems. Their design is more complex, due to quality attribute requirements, high connectivity among subsystems, and sensor dependencies--each of which affects all system development phases but especially design, integration, and verification and validation.

Numerous tools exist to help detect flaws in code. Some of these are called flaw-finding static analysis (FFSA) tools because they identify flaws by analyzing code without running it. Typical output of an FFSA tool includes a list of alerts for specific lines of code with suspected flaws. This blog post presents our initial work on applying static analysis test suites in a novel way: automatically generating a large amount of labeled data for a wide variety of code flaws to jump-start static analysis alert classifiers (SAACs). SAACs are designed to automatically estimate the likelihood that any given alert indicates a genuine flaw.
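The classifier side of this pipeline can be sketched as follows (with synthetic features and labels, not the project's feature set): train a model on labeled alerts, such as those generated from static analysis test suites, then have it score new alerts with a likelihood of being a genuine flaw.

```python
# Hedged sketch of the classifier side: train on labeled alerts (such as
# those auto-generated from static analysis test suites), then estimate the
# likelihood that a new alert is a genuine flaw. Features and labels here
# are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# Each row: hypothetical features, e.g. [tool confidence, call depth, file churn]
X_train = rng.random((500, 3))
y_train = (X_train[:, 0] + 0.3 * X_train[:, 2] > 0.8).astype(int)  # synthetic labels

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

new_alert = np.array([[0.9, 0.2, 0.7]])
print("P(genuine flaw) =", clf.predict_proba(new_alert)[0, 1])
```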

This blog post is also authored by Forrest Shull.

Modern software systems are constantly exposed to attacks from adversaries that, if successful, could prevent a system from functioning as intended or could result in exposure of confidential information. Accounts of credit card theft and other types of security breaches concerning a broad range of cyber-physical systems, transportation systems, self-driving cars, and so on, appear almost daily in the news. Building any public-facing system clearly demands a systematic approach for analyzing security needs and documenting mitigating requirements. In this blog post, which was excerpted from a recently published technical report, we present the Hybrid Threat Modeling Method that our team of researchers developed after examining popular methods.

Cost estimation was cited by the Government Accountability Office (GAO) as one of the top two reasons why DoD programs continue to have cost overruns. How can we better estimate and manage the cost of systems that are increasingly software intensive? To contain costs, it is essential to understand the factors that drive costs and which ones can be controlled. Although we understand the relationships between certain factors, we do not yet separate the causal influences from non-causal statistical correlations. In this blog post, we explore how the use of an approach known as causal learning can help the DoD identify factors that actually cause software costs to soar and therefore provide more reliable guidance as to how to intervene to better control costs.
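A toy numeric example shows why this distinction matters (all numbers synthetic): when a hidden factor drives both a candidate cost driver and cost itself, the naive correlation overstates the causal effect, and adjusting for the confounder recovers it.

```python
# Toy illustration: correlation is not causal influence. Team size appears
# to drive cost until we adjust for a confounder (requirements volatility)
# that drives both. All data are synthetic.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
volatility = rng.normal(size=n)                  # hidden common cause
team_size = 0.9 * volatility + rng.normal(size=n)
cost = 2.0 * volatility + 0.1 * team_size + rng.normal(size=n)

# Naive regression of cost on team size alone picks up the confounding:
naive = np.polyfit(team_size, cost, 1)[0]
# Adjusted: regress cost on both team size and the confounder.
X = np.column_stack([team_size, volatility, np.ones(n)])
adjusted = np.linalg.lstsq(X, cost, rcond=None)[0][0]

print(f"naive slope: {naive:.2f} (inflated); adjusted slope: {adjusted:.2f} (~0.1)")
```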

When considering best practices in egress filtering, it is important to remember that egress filtering is not focused on protecting your network, but rather on protecting other organizations' networks. For example, the May 2017 WannaCry ransomware attack is believed to have exploited an exposed vulnerability in the server message block (SMB) protocol and was rapidly spread via communications over port 445. Egress and ingress filtering of port 445 would have helped limit the spread of WannaCry. In this post--a companion piece to Best Practices for Network Border Protection, which highlighted best practices for filtering inbound traffic--I explore best practices and considerations for egress filtering.
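As a toy model of the decision an egress filter makes (illustrative policy only; real filtering belongs on your border devices, not in application code), consider blocking outbound connections to service ports such as 445 unless the destination is internal:

```python
# Toy model of an egress-filtering decision (illustrative policy only;
# real filtering belongs on border devices, not in application code).
BLOCKED_EGRESS_PORTS = {135, 137, 138, 139, 445}  # e.g., SMB/NetBIOS

def allow_egress(dst_port: int, dst_is_internal: bool) -> bool:
    """Permit outbound traffic unless it targets a blocked service port
    on an external host."""
    if dst_is_internal:
        return True
    return dst_port not in BLOCKED_EGRESS_PORTS

print(allow_egress(445, dst_is_internal=False))  # False: would have slowed WannaCry
print(allow_egress(443, dst_is_internal=False))  # True: normal HTTPS egress
```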

Almost 30 years ago, the SEI's CERT Coordination Center established a program that enabled security researchers in the field to report vulnerabilities they found in an organization's software or systems. But this capability did not always include vulnerabilities found on Department of Defense (DoD) sites. In 2017, the SEI helped expand vulnerability reporting to the DoD by establishing the DoD Vulnerability Disclosure program. This blog post, which was adapted from an article in the recently published 2017 Year in Review, highlights our work on this program.

This post is co-authored by Thomas Scanlon.

When a private key in a public-key infrastructure (PKI) environment is lost or stolen, the compromised end-entity certificate can be used to impersonate the principal (a singular and identifiable logical or physical entity, person, machine, server, or device) that is associated with it. An end-entity certificate is one that does not have certification authority to authorize other certificates. Consequently, the scope of a compromise or loss of an end-entity private key is limited to only those certificates whose keys were lost.

Since it is the certificate that provides the identity used for authorization, authentication of a compromised certificate can lead to critical consequences, such as loss of proprietary data or exposure of sensitive information. Compromised certificates can be used as client-authentication certificates in SSL to authenticate principals associated with the certificate (e.g., a principal mapped in Active Directory, LDAP, or another database) or they may be accepted as is, depending on the service. This blog post describes strategies for how to recover and minimize consequences from the loss or compromise of an end-entity private key.
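One recovery-side check can be sketched in Python with the cryptography package (the revoked serial numbers and file path below are placeholders, and the in-memory set stands in for proper CRL/OCSP checking): reject a presented end-entity certificate if it has been revoked or has expired.

```python
# Hedged sketch of a recovery-side check using the `cryptography` package:
# reject a presented end-entity certificate if its serial number appears on
# a revocation list (a stand-in here for proper CRL/OCSP checking) or if it
# has expired. The PEM path and revoked serials are placeholders.
import datetime
from cryptography import x509

REVOKED_SERIALS = {0x1A2B3C}  # serials of known-compromised certificates

def accept_certificate(pem_bytes: bytes) -> bool:
    cert = x509.load_pem_x509_certificate(pem_bytes)
    if cert.serial_number in REVOKED_SERIALS:
        return False  # key compromise: certificate has been revoked
    if cert.not_valid_after < datetime.datetime.utcnow():
        return False  # expired
    return True

with open("client-cert.pem", "rb") as f:
    print("accept" if accept_certificate(f.read()) else "reject")
```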

As detailed in last week's post, SEI researchers recently identified a collection of vulnerabilities and risks faced by organizations moving data and applications to the cloud. In this blog post, we outline best practices that organizations should use to address the vulnerabilities and risks in moving applications and data to cloud services.

These practices are geared toward small and medium-sized organizations; however, all organizations, independent of size, can use these practices to improve the security of their cloud usage. It is important to note that these best practices are not complete and should be complemented with practices provided by cloud service providers, general best cybersecurity practices, regulatory compliance requirements, and practices defined by cloud trade associations, such as the Cloud Security Alliance.

As we stressed in our previous post, organizations must perform due diligence before moving data or applications to the cloud. Cloud service providers (CSPs) use a shared responsibility model for security. The CSP accepts responsibility for some aspects of security. Other aspects of security are shared between the CSP and the consumer or remain the sole responsibility of the consumer.

This post details four important practices, and specific actions, that organizations can use to feel secure in the cloud.

Organizations continue to develop new applications in or migrate existing applications to cloud-based services. The federal government recently made cloud adoption a central tenet of its IT modernization strategy. An organization that adopts cloud technologies and/or chooses cloud service providers (CSPs) and services or applications without becoming fully informed of the risks involved exposes itself to a myriad of commercial, financial, technical, legal, and compliance risks. In this blog post, we outline 12 risks, threats, and vulnerabilities that organizations face when moving applications or data to the cloud. In our follow-up post, Best Practices for Cloud Security, we explore a series of best practices aimed at helping organizations securely move data and applications to the cloud.

As part of an ongoing effort to keep you informed about our latest work, this blog post summarizes some recently published SEI reports, podcasts, and presentations highlighting our work in cyber risk and resilience management, Agile/DevOps and risk management, best practices in insider threat, and dynamic design analysis. This post also includes a link to our recently published 2017 SEI Year in Review. These publications highlight the latest work of SEI technologists in these areas. This post includes a listing of each publication, author(s), and links where they can be accessed on the SEI website.

As U.S. Department of Defense (DoD) mission-critical and safety-critical systems become increasingly connected, exposure from security infractions is likewise increasing. In the past, system developers worked on the assumption that, because their systems were not connected and did not interact with other systems, they did not have to worry about security. "Closed" system assumptions, however, are no longer valid, and security threats affect the safe operation of systems.

To address exponential growth in the cost of system development due to the increased complexity of interactions and mismatched assumptions in embedded software systems, the safety-critical system community has embraced virtual system integration and analysis of embedded systems. In this blog post, I describe our efforts to demonstrate how virtual system integration can be extended to address security concerns at the architecture level and complement code-level security analysis.

In a previous blog post, we addressed how machine learning is becoming ever more useful in cybersecurity and introduced some basic terms, techniques, and workflows that are essential for those who work in machine learning. Although traditional machine learning methods are already successful for many problems, their success often depends on choosing and extracting the right features from a dataset, which can be hard for complex data. For instance, what kinds of features might be useful, or possible to extract, in all the photographs on Google Images, all the tweets on Twitter, all the sounds of a spoken language, or all the positions in the board game Go? This post introduces deep learning, a popular and quickly-growing subfield of machine learning that has had great success on problems about these datasets, and on many other problems where picking the right features for the job is hard or impossible.
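A minimal numpy sketch captures the core idea: rather than hand-picking features, a network's hidden layer learns them. The two-layer net below learns XOR, a function that no single hand-chosen linear feature separates; the layer sizes, iteration count, and learning rate are arbitrary choices for illustration.

```python
# Minimal numpy sketch of feature learning: the hidden layer of this tiny
# two-layer network learns its own features to solve XOR. Sizes, iterations,
# and learning rate are arbitrary illustration choices.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)          # XOR labels

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):
    h = np.tanh(X @ W1 + b1)            # hidden layer: the learned features
    out = sigmoid(h @ W2 + b2)
    grad_out = out - y                   # cross-entropy gradient at the output
    grad_h = (grad_out @ W2.T) * (1 - h ** 2)
    W2 -= 0.1 * (h.T @ grad_out)
    b2 -= 0.1 * grad_out.sum(axis=0)
    W1 -= 0.1 * (X.T @ grad_h)
    b1 -= 0.1 * grad_h.sum(axis=0)

print(out.round(3).ravel())              # approaches [0, 1, 1, 0]
```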

DevOps is a set of development practices that emphasizes collaboration, communication, and automation throughout the application lifecycle. In DevOps, all stakeholders--including IT operations staff, testers, developers, customers, and security personnel--are embedded from the inception of the project to its end. This blog post describes SEI research and customer engagements aimed at applying DevOps practices that are typically used at the end of the lifecycle to automate governance at the beginning of the development timeline.

In the SEI's examination of the software sustainment phase of the Department of Defense (DoD) acquisition lifecycle, we have noted that the best descriptor for sustainment efforts for software is "continuous engineering." Typically, during this phase, the hardware elements are repaired or have some structural modifications to carry new weapons or sensors. Software, on the other hand, continues to evolve in response to new security threats, new safety approaches, or new functionality provided within the system of systems. In this blog post, I will examine the intersection of three themes--product line practices, software sustainment, and public-private partnerships--that emerged during our work with one government program. I will also highlight some issues we have uncovered that deserve further discussion and research.

As the use of unmanned aircraft systems (UASs) increases, the volume of potentially useful video data that UASs capture on their missions is straining the resources of the U.S. military that are needed to process and use this data. This publicly released video is an example of footage captured by a UAS in Iraq. The video shows ISIS fighters herding civilians into a building. U.S. forces did not fire on the building because of the presence of civilians. Note that this video footage was likely processed by U.S. Central Command (CENTCOM) prior to release to the public to highlight important activities within the video, such as ISIS fighters carrying weapons, civilians being herded into the building to serve as human shields, and muzzle flashes emanating from the building.

Micro-expressions--involuntary, fleeting facial movements that reveal true emotions--hold valuable information for scenarios ranging from security interviews and interrogations to media analysis. They occur on various regions of the face, last only a fraction of a second, and are universal across cultures. In contrast to macro-expressions like big smiles and frowns, micro-expressions are extremely subtle and nearly impossible to suppress or fake. Because micro-expressions can reveal emotions people may be trying to hide, recognizing micro-expressions can aid DoD forensics and intelligence mission capabilities by providing clues to predict and intercept dangerous situations. This blog post, the latest highlighting research from the SEI Emerging Technology Center in machine emotional intelligence, describes our work on developing a prototype software tool to recognize micro-expressions in near real-time.
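One simple temporal cue can be sketched directly (an illustration, not our prototype): because micro-expressions last only a fraction of a second, short bursts of facial motion can be separated from slower macro-expressions by duration alone. Here, synthetic per-frame motion magnitudes stand in for real landmark or optical-flow features.

```python
# Illustrative duration filter (not our prototype): flag short bursts of
# facial motion while ignoring longer macro-expressions. Synthetic per-frame
# motion magnitudes stand in for real landmark/optical-flow features.
import numpy as np

fps = 200                       # assumed high-speed capture rate
motion = np.zeros(600)
motion[100:160] = 1.0           # macro-expression: ~300 ms, sustained
motion[400:408] = 1.0           # micro-expression: ~40 ms burst

def micro_bursts(signal, fps, max_ms=200, threshold=0.5):
    """Return (start, end) frame spans of motion lasting at most max_ms."""
    active = signal > threshold
    edges = np.flatnonzero(np.diff(active.astype(int)))
    spans = zip(edges[::2] + 1, edges[1::2] + 1)
    return [(int(s), int(e)) for s, e in spans if (e - s) / fps * 1000 <= max_ms]

print(micro_bursts(motion, fps))  # flags only the ~40 ms burst
```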