SEI Blog

The Latest Research in Software Engineering and Cybersecurity

Blockchain technology was conceived a little over ten years ago. In that short time, it went from being the foundation for a relatively unknown alternative currency to being the "next big thing" in computing, with industries from banking to insurance to defense to government investing billions of dollars in blockchain research and development. This blog post, the first of two posts about the SEI's exploration of DoD applications for blockchain, provides an introduction to this rapidly emerging technology.
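
To make the core idea concrete before diving deeper, the sketch below is a minimal, hypothetical illustration (not any production blockchain) of the essential data structure: a chain of blocks in which each block commits to the cryptographic hash of its predecessor, so tampering with an earlier block invalidates everything that follows. The block fields and example transactions are made up for illustration.

```python
import hashlib
import json
import time

def block_hash(block):
    """Compute a SHA-256 hash over the block's serialized contents."""
    payload = json.dumps(block, sort_keys=True).encode("utf-8")
    return hashlib.sha256(payload).hexdigest()

def make_block(data, previous_hash):
    """Create a block recording some data plus the hash of its predecessor."""
    return {"timestamp": time.time(), "data": data, "previous_hash": previous_hash}

# Build a tiny chain: each block commits to the hash of the one before it.
genesis = make_block("genesis", previous_hash="0" * 64)
chain = [genesis]
for record in ["A pays B 5", "B pays C 2"]:
    chain.append(make_block(record, previous_hash=block_hash(chain[-1])))

# Verify the chain by recomputing each link; altering an earlier block breaks it.
for prev, curr in zip(chain, chain[1:]):
    assert curr["previous_hash"] == block_hash(prev)
print("verified chain of", len(chain), "blocks")
```

Real blockchains layer consensus mechanisms and peer-to-peer replication on top of this basic structure.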

When I was pursuing my master's degree in information security, two of the required classes were in cognitive psychology and human factors: one about how we think and learn, and one about how we interact with our world. Students were often less interested in these courses and preferred to focus their studies on more technical topics, but I personally found them to be two of the most beneficial. In the years since I took those classes, I've worked with people in many organizations in roles where thinking is the job: security operations center (SOC) analysts, researchers, software developers, and decision makers. Many of these people are highly technical, intelligent, and creative. In my interactions with these groups, however, the discussion rarely turns to how to think about thinking. For people whose jobs entail pulling together and interpreting data to answer a question or solve a problem (i.e., analyze), ignoring human factors and how we and others perceive, think, and remember can lead to poor outcomes. In this blog post, I will explore the importance of thinking like an analyst and introduce a framework to help guide security operations center staff and other network analysts.

The crop of Top 10 SEI Blog posts in the first half of 2017 (judged by the number of visits by our readers) represents the best of what we do here at the SEI: transitioning our knowledge to those who need it. Several of this year's Top 10 posts come from a series on best practices for network security that we launched in November 2016 in the wake of the Dyn attack. In this post, we will list the Top 10 posts, each with an excerpt and a link where readers can go for more information about the topic it covers.

As part of an ongoing effort to keep you informed about our latest work, this blog post summarizes some recently published SEI technical reports, white papers, podcasts, and webinars on supply chain risk management, process improvement, network situational awareness, software architecture, and network time protocol, as well as a podcast interview with SEI Fellow Peter Feiler. These publications highlight the latest work of SEI technologists in these areas. This post includes a listing of each publication, its author(s), and links to where it can be accessed on the SEI website.

Electronic Countermeasures

During the wars in Iraq and Afghanistan, insurgents' use of improvised explosive devices (IEDs) proliferated. The United States ramped up its development of counter-IED equipment to improve standoff detection of explosives and explosive precursor components and to defeat IEDs themselves as part of a broader defense capability. One effective strategy for countering radio-controlled IEDs (RCIEDs) was jamming, or interrupting, radio frequency (RF) communications. This approach disrupts critical parts of the RF link so that the activation signal never reaches the RCIED, saving both warfighter and civilian lives and property. For some time now, the cyber world has also been under attack by a diffuse set of enemies who improvise their own tools in many varieties and hide them where they can do the most damage. The analogy has its limitations; here, however, I want to explore the idea of disrupting the communications of malicious code, such as ransomware used to lock up your digital assets or data-exfiltration software used to steal your digital data.
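
As a concrete, if highly simplified, illustration of that idea, the sketch below refuses outbound connections to domains on a blocklist of known command-and-control (C2) servers, much as RF jamming cuts an RCIED off from its trigger signal. This is a hypothetical example, not an SEI tool; the domain names, blocklist, and helper functions are invented for illustration.

```python
# Minimal illustration of "jamming" malware communications by refusing to
# resolve or route traffic to known command-and-control (C2) domains.
# The blocklist contents and helper names here are hypothetical examples.

KNOWN_C2_DOMAINS = {
    "bad-c2.example",
    "exfil-dropbox.example",
}

def is_blocked(domain: str, blocklist: set[str] = KNOWN_C2_DOMAINS) -> bool:
    """Return True if the domain, or any parent domain, is on the blocklist."""
    labels = domain.lower().rstrip(".").split(".")
    # Check "c2.bad-c2.example" against "bad-c2.example", "example", etc.
    return any(".".join(labels[i:]) in blocklist for i in range(len(labels)))

def handle_outbound_request(domain: str) -> str:
    """Decide whether to pass or disrupt an outbound connection attempt."""
    if is_blocked(domain):
        # In a real deployment this decision would sinkhole the DNS answer or
        # drop the connection at the firewall, cutting the malware off from
        # its operator.
        return "BLOCKED"
    return "ALLOWED"

if __name__ == "__main__":
    for name in ["updates.vendor.example", "bad-c2.example", "x.exfil-dropbox.example"]:
        print(f"{name}: {handle_outbound_request(name)}")
```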

Many organizations want to share data sets across the enterprise, but taking the first steps can be challenging. These challenges range from purely technical issues, such as data formats and APIs, to organizational cultures in which managers resist sharing data they feel they own. Data governance is a set of practices that enables data to create value within an enterprise. When launching a data governance initiative, many organizations choose to apply best practices, such as those collected in the Data Management Association's Body of Knowledge (DAMA-BOK). While these practices define a desirable end state, our experience is that attempting to apply them broadly across the enterprise as a first step can be disruptive, expensive, and slow to deliver value. In our work with several industry and government organizations, SEI researchers have developed an incremental approach to launching data governance that delivers immediate payback. This post highlights our approach, which is based on six principles.

The future of autonomy in the military could include unmanned cargo delivery; micro-autonomous air/ground systems to enhance platoon, squad, and soldier situational awareness; and manned and unmanned teaming in both air and ground maneuvers, according to a 2016 presentation by Robert Sadowski, chief roboticist for the U.S. Army Tank Automotive Research Development and Engineering Center (TARDEC), which researches and develops advanced technologies for ground systems. One day, robot medics may even carry wounded soldiers out of battle. The system behind these feats is ROS-M, the militarized version of the Robot Operating System (ROS), an open-source set of software libraries and tools for building robot applications. In this post, I will describe the work of SEI researchers to create an environment within ROS-M for developing unmanned systems, one that spurs innovation and reduces development time.
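
For readers unfamiliar with ROS, the sketch below shows the kind of minimal node the standard ROS (ROS 1) Python tutorials begin with: a publisher that sends string messages on a topic at a fixed rate. Nothing here is specific to ROS-M; the topic name and message contents are generic examples.

```python
#!/usr/bin/env python
# A minimal ROS 1 node, in the style of the standard rospy tutorials:
# it publishes a string message on a topic at a fixed rate.
import rospy
from std_msgs.msg import String

def talker():
    pub = rospy.Publisher("chatter", String, queue_size=10)  # advertise a topic
    rospy.init_node("talker", anonymous=True)                # register with the ROS master
    rate = rospy.Rate(1)                                     # publish at 1 Hz
    while not rospy.is_shutdown():
        pub.publish("status update at %s" % rospy.get_time())
        rate.sleep()

if __name__ == "__main__":
    try:
        talker()
    except rospy.ROSInterruptException:
        pass
```

Full robot applications compose many such nodes that communicate over topics and services.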

The year 2016 witnessed advancements in artificial intelligence in self-driving cars, language translation, and big data. That same time period, however, also witnessed the rise of ransomware, botnets, and new attack vectors as popular forms of malware attack, with cybercriminals continually expanding their methods of attack (e.g., scripts attached to phishing emails and randomization), according to Malwarebytes' State of Malware report. To complement the skills and capacities of human analysts, organizations are turning to machine learning (ML) in hopes of providing a more forceful deterrent. ABI Research forecasts that "machine learning in cybersecurity will boost big data, intelligence, and analytics spending to $96 billion by 2021." At the SEI, machine learning has played a critical role across several technologies and practices that we have developed to reduce the opportunity for and limit the damage of cyber attacks. In this post, the first in a series highlighting the application of machine learning across several research projects, I introduce the concept of machine learning, explain how it is applied in practice, and touch on its application to cybersecurity.
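
As a minimal illustration of the kind of supervised learning involved, and not a description of any SEI model, the sketch below trains a classifier on synthetic, made-up "network flow" features to separate benign from suspicious activity; every feature, data point, and number here is fabricated for demonstration.

```python
# Illustrative only: train a classifier on synthetic "network flow" features
# (bytes sent, connection duration, distinct ports contacted) to separate
# benign from suspicious activity. All data here is randomly generated.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)

# Synthetic benign flows: modest traffic, few ports contacted.
benign = np.column_stack([
    rng.normal(5e4, 2e4, 500),   # bytes sent
    rng.normal(30, 10, 500),     # duration (seconds)
    rng.integers(1, 5, 500),     # distinct ports
])
# Synthetic suspicious flows: heavy outbound traffic, many ports contacted.
suspicious = np.column_stack([
    rng.normal(5e6, 1e6, 500),
    rng.normal(300, 60, 500),
    rng.integers(20, 200, 500),
])

X = np.vstack([benign, suspicious])
y = np.array([0] * 500 + [1] * 500)  # 0 = benign, 1 = suspicious

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test), target_names=["benign", "suspicious"]))
```

In practice, the hard parts are obtaining labeled data, engineering meaningful features, and keeping models current as attacker behavior shifts.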