Category: Artificial Intelligence

In 2017 and 2018, the United States witnessed milestone years of climate- and weather-related disasters, from droughts and wildfires to cyclones and hurricanes. Increasingly, satellites are playing an important role in helping emergency responders assess the damage of a weather event and find victims in its aftermath. Most recently, satellites have tracked from space the devastation wrought by the California wildfires. The United States military, which is often first on the scene of a natural disaster, is increasingly interested in using deep learning to automate the identification of victims and structures in satellite imagery to support humanitarian assistance and disaster relief (HADR) efforts.

Statistics and machine learning often use different terminology for similar concepts. I confronted this recently when I began reading about maximum causal entropy as part of a project on inverse reinforcement learning. Many of the terms were unfamiliar to me, but as I read more closely, I realized that they mapped closely onto concepts I knew from statistics. This blog post presents a table of connections between terms that are standard in statistics and their counterparts in machine learning.
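
To give a flavor of the mapping, here are a few representative pairs (illustrative examples, not the post's full table):

    Statistics                          Machine learning
    ----------------------------------  ----------------------------------
    covariates / independent variables  features / inputs
    response / dependent variable       label / target
    estimation / model fitting          learning / training
    regression, classification          supervised learning
    density estimation, clustering      unsupervised learning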

This post was co-authored by Robert Nord.

The concept of technical debt communicates the tradeoff between the short-term benefits of rapid delivery and the long-term value of developing a software system that is easy to evolve, modify, repair, and sustain. Like financial debt, technical debt can be a burden or an investment. It is a burden when taken on unintentionally, without a solid plan to manage it; it can also be part of an intentional investment strategy that speeds up development, as long as there is a plan to pay back the debt before the interest swamps the principal.
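
To make the interest-versus-principal tradeoff concrete, here is a minimal sketch with made-up numbers (the post itself does not quantify the analogy):

    # Illustrative numbers only: a shortcut saves effort now (the principal)
    # but adds a little rework to every later sprint (the interest).
    principal_days = 10.0             # effort saved by taking the shortcut
    interest_days_per_sprint = 0.5    # extra rework each sprint it stays unpaid

    # The shortcut is a net win only while cumulative interest < principal.
    break_even = principal_days / interest_days_per_sprint
    print(f"interest swamps principal after {break_even:.0f} sprints")

After 20 sprints in this toy example, carrying the debt has cost more than it saved, which is why the plan to pay it back matters.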

For many DoD missions, our ability to collect information has outpaced our ability to analyze it. Graph algorithms and large-scale machine learning algorithms are key to analyzing the information agencies collect. They are also an increasingly important component of intelligence analysis, autonomous systems, cyber intelligence and security, logistics optimization, and more. In this blog post, we describe research to develop automated code generation for future-compatible graph libraries: building blocks of high-performance code that can be automatically generated for any future platform.
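
The post itself contains no code; as a hedged illustration of the kind of building block such libraries standardize, here is breadth-first search written as repeated sparse matrix-vector products (the linear-algebra formulation used by GraphBLAS-style libraries), using SciPy:

    import numpy as np
    from scipy.sparse import csr_matrix

    # Adjacency matrix of a small directed graph (illustrative data):
    # edges 0->1, 0->2, 1->3, 2->3.
    A = csr_matrix(np.array([[0, 1, 1, 0],
                             [0, 0, 0, 1],
                             [0, 0, 0, 1],
                             [0, 0, 0, 0]]))

    def bfs_levels(A, source):
        """Label each vertex with its BFS level from source."""
        n = A.shape[0]
        levels = -np.ones(n, dtype=int)       # -1 means unvisited
        frontier = np.zeros(n, dtype=bool)
        frontier[source] = True
        level = 0
        while frontier.any():
            levels[frontier] = level
            # One BFS step as a matrix-vector product: the neighbors of
            # the current frontier, minus already-visited vertices.
            frontier = (A.T @ frontier).astype(bool) & (levels == -1)
            level += 1
        return levels

    print(bfs_levels(A, 0))   # [0 1 1 2]

One appeal of this formulation is that retargeting the same algorithm to new hardware largely reduces to regenerating the underlying sparse-matrix kernels.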

Cost estimation was cited by the Government Accountability Office (GAO) as one of the top two reasons that DoD programs continue to incur cost overruns. How can we better estimate and manage the cost of systems that are increasingly software intensive? To contain costs, it is essential to understand which factors drive costs and which ones can be controlled. Although we understand the relationships among certain factors, we cannot yet separate causal influences from non-causal statistical correlations. In this blog post, we explore how an approach known as causal learning can help the DoD identify the factors that actually cause software costs to soar and thereby provide more reliable guidance on how to intervene to control them.
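
The distinction matters because a correlated factor need not be a controllable one. A minimal simulation (made-up variables, not the post's data) shows how a confounder produces a strong correlation with no causal effect:

    import numpy as np
    rng = np.random.default_rng(0)

    # Hypothetical confounder: system complexity drives both a candidate
    # "cost factor" X and the actual cost Y; X never causes Y.
    n = 10_000
    complexity = rng.normal(size=n)
    x = complexity + rng.normal(scale=0.5, size=n)
    y = 2 * complexity + rng.normal(scale=0.5, size=n)
    print(np.corrcoef(x, y)[0, 1])       # ~0.87: strong correlation

    # Intervening on X -- do(X): set it independently of complexity --
    # leaves Y untouched, so controlling X would not control cost.
    x_do = rng.normal(size=n)
    print(np.corrcoef(x_do, y)[0, 1])    # ~0: no causal effect

Causal learning algorithms aim to distinguish these two situations from observational data, before anyone spends money intervening on the wrong factor.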

In a previous blog post, we addressed how machine learning is becoming ever more useful in cybersecurity and introduced some basic terms, techniques, and workflows that are essential for those who work in machine learning. Although traditional machine learning methods are already successful for many problems, their success often depends on choosing and extracting the right features from a dataset, which can be hard for complex data. For instance, what kinds of features might be useful, or even possible to extract, from all the photographs on Google Images, all the tweets on Twitter, all the sounds of a spoken language, or all the positions in the board game Go? This post introduces deep learning, a popular and quickly growing subfield of machine learning that has had great success on these kinds of data, and on many other problems where picking the right features for the job is hard or impossible.
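
A classic miniature of the feature problem is XOR: no raw input is informative on its own, so some intermediate feature must be constructed or, as below, learned. This tiny from-scratch network (an illustrative sketch, not from the post) learns the needed hidden features itself:

    import numpy as np
    rng = np.random.default_rng(1)

    # XOR: the label depends on a nonlinear combination of the inputs.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # hidden layer weights
    W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # output layer weights
    sigmoid = lambda z: 1 / (1 + np.exp(-z))

    for _ in range(5000):
        h = np.tanh(X @ W1 + b1)       # hidden activations: learned features
        p = sigmoid(h @ W2 + b2)       # predicted probability of class 1
        dz = p - y                     # cross-entropy gradient wrt logits
        dh = (dz @ W2.T) * (1 - h**2)  # backpropagate through tanh
        for param, grad in ((W2, h.T @ dz), (b2, dz.sum(0)),
                            (W1, X.T @ dh), (b1, dh.sum(0))):
            param -= 0.1 * grad        # gradient-descent update

    print(p.round(2).ravel())          # typically approaches [0 1 1 0]

No one told the network which intermediate quantities to compute; the hidden layer discovered them during training, which is the essence of what deep learning does at scale.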

As the use of unmanned aircraft systems (UASs) increases, the volume of potentially useful video data they capture on their missions is straining the U.S. military resources needed to process and exploit it. This publicly released video is an example of footage captured by a UAS in Iraq. The video shows ISIS fighters herding civilians into a building. U.S. forces did not fire on the building because of the presence of civilians. Note that this footage was likely processed by U.S. Central Command (CENTCOM) prior to public release to highlight important activities within the video, such as ISIS fighters carrying weapons, civilians being herded into the building to serve as human shields, and muzzle flashes emanating from the building.

Micro-expressions--involuntary, fleeting facial movements that reveal true emotions--hold valuable information for scenarios ranging from security interviews and interrogations to media analysis. They occur in various regions of the face, last only a fraction of a second, and are universal across cultures. In contrast to macro-expressions like big smiles and frowns, micro-expressions are extremely subtle and nearly impossible to suppress or fake. Because they can reveal emotions people may be trying to hide, recognizing micro-expressions can aid DoD forensics and intelligence mission capabilities by providing clues that help predict and intercept dangerous situations. This blog post, the latest highlighting research from the SEI Emerging Technology Center in machine emotional intelligence, describes our work on developing a prototype software tool to recognize micro-expressions in near real time.
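
The post does not include code; purely as a hedged sketch of one ingredient such a tool needs (this is not the SEI prototype), the snippet below flags brief bursts of facial motion with dense optical flow, using the fraction-of-a-second duration to separate micro- from macro-expressions. The file name face.mp4 is a placeholder:

    import cv2
    import numpy as np

    cap = cv2.VideoCapture("face.mp4")            # hypothetical face video
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
    ok, prev = cap.read()
    prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

    energy = []                                   # mean motion per frame
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        flow = cv2.calcOpticalFlowFarneback(prev, gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        energy.append(np.linalg.norm(flow, axis=2).mean())
        prev = gray

    # Keep only short motion bursts: micro-expressions last < ~0.5 s,
    # so longer bursts are treated as ordinary macro-expressions.
    energy = np.array(energy)
    burst = (energy > energy.mean() + 2 * energy.std()).astype(int)
    edges = np.flatnonzero(np.diff(np.r_[0, burst, 0]))
    for start, end in zip(edges[::2], edges[1::2]):
        if (end - start) / fps < 0.5:
            print(f"candidate micro-expression at {start / fps:.2f}s")

A real system would also detect and track the face, analyze motion per facial region, and classify the expression, but the duration gate above is one core cue that distinguishes micro-expressions.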

As part of an ongoing effort to keep you informed about our latest work, this blog post summarizes some recently published SEI reports, podcasts, and presentations highlighting our work in cyber warfare, emerging technologies and their risks, domain name system blocking to disrupt malware, best practices in network border protection, robotics, technical debt, and insider threat and workplace violence. These publications highlight the latest work of SEI technologists in these areas. This post includes a listing of each publication, author(s), and links where they can be accessed on the SEI website.

As organizations' critical assets have become digitized and access to information has increased, the nature and severity of threats have changed. Organizations' own personnel--insiders--now have greater ability than ever before to misuse their access to critical organizational assets. Insiders know where critical assets are, what is important, and what is valuable. Their organizations have given them authorized access to these assets and, with it, the means to compromise the confidentiality, availability, or integrity of data. As organizations rely on cyber systems to support critical missions, a malicious insider who is trying to harm an organization can do so by, for example, sabotaging a critical IT system or stealing intellectual property to benefit a new employer or a competitor. Government and industry organizations are responding to this change in the threat landscape and are increasingly aware of the escalating risks. CERT has been a widely acknowledged leader in insider threat research since it began investigating the problem in 2001. The CERT Guide to Insider Threat was inducted into the Palo Alto Networks Cybersecurity Canon in 2016, illustrating its value in helping organizations understand the risks that their own employees pose to critical assets. This blog post describes the challenge of insider threats, approaches to detection, and how machine learning-enabled software helps protect against this risk.
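
As a hedged sketch of the detection idea (not CERT's tooling), an anomaly detector can score each user-day of activity against the bulk of normal behavior:

    import numpy as np
    from sklearn.ensemble import IsolationForest

    # Synthetic user-day features (all numbers made up):
    # [after-hours logins, files copied to removable media, failed accesses]
    rng = np.random.default_rng(0)
    normal_days = rng.poisson(lam=[1.0, 5.0, 0.5], size=(500, 3)).astype(float)
    suspicious_day = np.array([[9.0, 60.0, 7.0]])   # bulk copying, odd hours

    model = IsolationForest(random_state=0).fit(normal_days)
    print(model.decision_function(suspicious_day))  # negative => anomalous

In practice the hard parts are choosing features that capture misuse of authorized access and keeping the false-positive rate low enough for analysts to act on the alerts.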

The Department of Defense is increasingly relying on biometric data, such as iris scans, gait recognition, and heart-rate monitoring, to protect against both cyber and physical attacks. "Military planners, like their civilian infrastructure and homeland security counterparts, use video-linked 'behavioral recognition analytics,' leveraging base protection and counter-IED operations," according to an article in Defense Systems. Current state-of-the-art approaches do not make it possible to gather biometric data in real-world settings, such as border and airport security checkpoints, where people are in motion. This blog post presents the results of exploratory research conducted by the SEI's Emerging Technology Center to design algorithms that extract heart rate from video of non-stationary subjects in real time.
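
The core signal-processing idea behind video heart-rate extraction (remote photoplethysmography) can be sketched in a few lines. This is an illustrative toy on a synthetic signal, not the ETC's algorithm, which additionally has to cope with a moving subject; in real footage, green_means would be the per-frame average green intensity over the detected skin region:

    import numpy as np

    fps = 30.0
    t = np.arange(0, 10, 1 / fps)          # 10 s of "video" at 30 fps
    pulse_hz = 1.2                         # 72 bpm ground truth (made up)
    rng = np.random.default_rng(0)
    green_means = (0.05 * np.sin(2 * np.pi * pulse_hz * t)
                   + rng.normal(scale=0.05, size=t.size))

    # The blood-volume pulse slightly modulates skin color, so look for
    # the dominant frequency in the physiologically plausible band.
    sig = green_means - green_means.mean()
    freqs = np.fft.rfftfreq(sig.size, d=1 / fps)
    power = np.abs(np.fft.rfft(sig)) ** 2
    band = (freqs > 0.7) & (freqs < 4.0)   # 42-240 bpm
    bpm = 60 * freqs[band][power[band].argmax()]
    print(f"estimated heart rate: {bpm:.0f} bpm")   # ~72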

The year 2016 witnessed advancements in artificial intelligence in self-driving cars, language translation, and big data. The same period, however, also witnessed the rise of ransomware and botnets as popular forms of malware attack, with cybercriminals continually expanding their attack vectors (e.g., scripts attached to phishing emails and randomization), according to Malwarebytes' State of Malware report. To complement the skills and capacities of human analysts, organizations are turning to machine learning (ML) in hopes of providing a more forceful deterrent. ABI Research forecasts that "machine learning in cybersecurity will boost big data, intelligence, and analytics spending to $96 billion by 2021." At the SEI, machine learning has played a critical role across several technologies and practices that we have developed to reduce the opportunity for, and limit the damage of, cyber attacks. In this post--the first in a series highlighting the application of machine learning across several research projects--I introduce the concept of machine learning, explain how it is applied in practice, and touch on its application to cybersecurity throughout.
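
To ground the workflow such posts describe (data, features, model, prediction), here is a deliberately tiny supervised-learning sketch on made-up URLs; the data and labels are entirely synthetic and the example is illustrative only:

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Toy labeled data: 1 = phishing, 0 = benign (all synthetic).
    urls = ["login-paypa1-secure.example.ru/verify",
            "github.com/python/cpython",
            "account-update.example.cn/bank/confirm",
            "docs.python.org/3/library",
            "secure-appleid.example.tk/signin",
            "en.wikipedia.org/wiki/Machine_learning"]
    labels = [1, 0, 1, 0, 1, 0]

    # Features: character trigrams, so the model sees substrings like
    # "log", "ogi", ... rather than hand-picked indicators.
    model = make_pipeline(
        CountVectorizer(analyzer="char", ngram_range=(3, 3)),
        LogisticRegression())
    model.fit(urls, labels)
    print(model.predict(["update-account.example.ru/secure/login"]))

With six examples this is a caricature, but the pipeline shape--raw data to features to fitted model to prediction on unseen input--is the workflow that scales up in real ML-for-security systems.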

Since its debut on Jeopardy in 2011, IBM's Watson has generated a lot of interest in potential applications across many industries. I recently led a research team investigating whether the Department of Defense (DoD) could use Watson to improve software assurance and help acquisition professionals assemble and review relevant evidence from documents. As this blog post describes, our work examined whether typical developers could build an IBM Watson application to support an assurance review.

This post was co-authored by Sagar Chaki.

In 2011, the U.S. Government maintained a fleet of approximately 8,000 unmanned aerial systems (UASs), commonly referred to as "drones," a number that continues to grow. "No weapon system has had a more profound impact on the United States' ability to provide persistence on the battlefield than the UAVs," according to a 2012 Defense Science Board report. Making sure that government and privately owned drones share international airspace safely and effectively is a top priority for government officials. Distributed Adaptive Real-Time (DART) systems are key to many areas of Department of Defense (DoD) capability, including the safe execution of autonomous multi-UAS missions with civilian benefits. DART systems promise to revolutionize several such areas of mutual civilian-DoD interest, including robotics, transportation, energy, and health care. To fully realize their potential, however, the software controlling them must be engineered for high assurance and certified to operate safely and effectively. In short, these systems must provide guaranteed satisfaction of highly critical safety requirements (e.g., collision avoidance) while adapting smartly to achieve application requirements, such as protection coverage, in dynamic and uncertain environments. This blog post describes our architecture and approach to engineering high-assurance software for DART systems.
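
One common way to combine a guaranteed safety requirement with smart adaptation is a runtime safety monitor that vets every proposed action. The sketch below is illustrative only--not the SEI architecture--and all names and numbers are hypothetical:

    # A simple runtime monitor: the adaptive controller proposes a move,
    # and a small, easily verified check vetoes any proposal that would
    # violate the minimum-separation safety requirement.
    MIN_SEPARATION = 50.0   # meters (made-up requirement)

    def is_safe(positions, uav_id, proposal):
        """Would moving uav_id to proposal keep all pairwise separations safe?"""
        trial = dict(positions, **{uav_id: proposal})
        ids = list(trial)
        return all(abs(trial[a] - trial[b]) >= MIN_SEPARATION
                   for i, a in enumerate(ids) for b in ids[i + 1:])

    def step(positions, uav_id, adaptive_move, safe_fallback):
        proposal = adaptive_move(positions[uav_id])
        # The safety requirement is guaranteed; adaptation happens only when safe.
        return (proposal if is_safe(positions, uav_id, proposal)
                else safe_fallback(positions[uav_id]))

    positions = {"uav1": 0.0, "uav2": 120.0}   # 1-D positions for simplicity
    # The adaptive controller wants +100 m (too close to uav2); monitor holds.
    print(step(positions, "uav1", lambda p: p + 100.0, lambda p: p))   # 0.0

One appeal of this pattern is that only the small monitor, rather than the complex adaptive logic, has to be formally verified to certify the safety property.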

An autonomous system is a computational system that performs a desired task, often without human guidance. We use varying degrees of autonomy in robotic systems for manufacturing, exploration of planets and space debris, water treatment, ambient sensing, and even floor cleaning. This blog post discusses practical autonomous systems that we are actively developing at the SEI. Specifically, it focuses on a new SEI research effort, Self-governing Mobile Adhocs with Sensors and Handhelds (SMASH), which is forging collaborations with researchers, professors, and students toward the goal of making search-and-rescue crews more effective.