Supply Chain Risk Management, Network Situational Awareness, Software Architecture, and Network Time Protocol: The Latest Work from the SEI
Tags: Autonomy and Counter-Autonomy, Cyber Missions, Mission Assurance, Software and Information Assurance, Systems Verification and Validation
As part of an ongoing effort to keep you informed about our latest work, this blog post summarizes some recently published SEI technical reports, white papers, podcasts, and webinars on supply chain risk management, process improvement, network situational awareness, software architecture, and network time protocol, as well as a podcast interview with SEI Fellow Peter Feiler. These publications highlight the latest work of SEI technologists in these areas. This post includes a listing of each publication, author(s), and links where they can be accessed on the SEI website.
Assessing DoD System Acquisition Supply Chain Risk Management By John Haller, Charles M. Wallen, Carol Woody, PhD, Christopher J. Alberts
Defense capabilities are supported by complex supply chains. This is true for weapons systems and large "systems of systems" that enable force projection -- for example, a weapons system like the F-35 Fighter. It is also true for service supply chains -- for example, the array of private logistics firms that the Department of Defense (DoD) relies upon to transport personnel and equipment around the world. Important requirements for both capabilities (force projection and transportation) now depend on the cybersecurity and related assurance level of third parties. While supplier, vendor, and contractor relationships provide cost savings and flexibility to the DoD, they also come with risks.
Read the Article.
NTP Best Practices By Timur D. Snoke
The network time protocol (NTP) synchronizes the time of a computer client or server to another server or reference time source to within a few milliseconds of Coordinated Universal Time. NTP servers, long considered a foundational service of the Internet, have more recently been used to amplify large-scale distributed denial of service (DDoS) attacks. While 2016 did not see a noticeable uptick in the frequency of DDoS attacks, the last 12 months have witnessed some of the largest DDoS attacks, according to Akamai's State of the Internet/Security report. One issue that attackers have exploited is abusable NTP servers. In 2014, there were more than seven million abusable NTP servers. As a result of software upgrades, repaired configuration files, or the simple fact that ISPs and IXPs have decided to block NTP traffic, the number of abusable servers dropped by almost 99 percent in a matter of months, according to a January 2015 article in ACM Queue. But there is still work to be done. It only takes 5,000 abusable NTP servers to generate a DDoS attack in the range of 50-400 Gbps. In this podcast, Timur Snoke explores the challenges of NTP and prescribes some best practices for securing accurate time with this protocol.
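A back-of-envelope model shows why so few reflectors suffice. The sketch below is illustrative only: the request size, response size, and per-server query rate are assumptions for the sake of the arithmetic, not figures from the podcast.

```python
# Back-of-envelope model of NTP reflection/amplification.
# REQUEST_BYTES and RESPONSE_BYTES are assumed values for a "monlist"-style
# query and its multi-packet response; they are not measurements.

REQUEST_BYTES = 234            # one spoofed query sent to the server
RESPONSE_BYTES = 100 * 468     # up to 100 response packets of 468 bytes each

# Amplification factor: bytes reflected at the victim per byte sent.
amplification = RESPONSE_BYTES / REQUEST_BYTES   # 200x under these assumptions

def attack_gbps(servers, queries_per_sec):
    """Aggregate traffic reflected at the victim, in Gbps."""
    bits_per_sec = servers * queries_per_sec * RESPONSE_BYTES * 8
    return bits_per_sec / 1e9

# 5,000 abusable servers, each answering 150 spoofed queries per second,
# lands squarely in the 50-400 Gbps range cited above.
print(attack_gbps(5000, 150))
```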
View the podcast.
IEEE Computer Society/Software Engineering Institute Watts S. Humphrey Software Process Achievement Award 2016: Raytheon Integrated Defense Systems
By Neal Mackertich (Raytheon), Peter Kraus (Raytheon), Kurt Mittelstaedt (Raytheon), Brian Foley (Raytheon), Dan Bardsley (Raytheon), Kelli Grimes (Raytheon), Mike Nolan (Raytheon)
Design for Six Sigma (DFSS) is an industry recognized methodology used by Raytheon's Integrated Product Development System to predict, manage, and improve software-intensive system performance, producibility, and affordability. The Raytheon Integrated Defense Systems DFSS team has developed and implemented numerous leading-edge improvement and optimization methodologies resulting in significant business results. These achievements have been recognized by the Software Engineering Institute and IEEE with the 2016 Watts Humphrey Software Process Achievement Award. Best practice approaches used by the team are shared in this report, including the generation of highly effective and efficient test cases using Design of Experiments, process performance modeling and analysis, and cost and schedule risk analysis using Monte Carlo simulation.
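One of the practices named above, cost and schedule risk analysis via Monte Carlo simulation, can be sketched briefly. The task durations, triangular distributions, and percentile choice below are illustrative assumptions, not details from the Raytheon report.

```python
# Minimal Monte Carlo schedule-risk sketch (assumed inputs, not the
# report's actual model): each task gets an (optimistic, most likely,
# pessimistic) duration estimate, sampled from a triangular distribution.
import random

def simulate_schedule(tasks, trials=10000, seed=0):
    """tasks: list of (optimistic, most_likely, pessimistic) durations.
    Returns the 80th-percentile total duration across all trials."""
    rng = random.Random(seed)
    totals = sorted(
        sum(rng.triangular(lo, hi, mode) for lo, mode, hi in tasks)
        for _ in range(trials)
    )
    return totals[int(0.8 * trials)]

# Two hypothetical tasks, in weeks: the simulation yields a schedule
# estimate with an explicit confidence level rather than a point guess.
print(simulate_schedule([(2, 4, 8), (1, 2, 3)]))
```

The output is a duration that 80 percent of simulated outcomes beat, which is the kind of probabilistic answer a point estimate cannot give.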
Read the technical report.
IEEE Computer Society/Software Engineering Institute Watts S. Humphrey Software Process Achievement Award 2016: Nationwide
For nearly 10 years Nationwide IT has been on a software process improvement journey in pursuit of increased quality, productivity, and predictability. By deploying and scaling a blend of Agile and Lean concepts and a unique team model, as well as fostering a problem-solving and learning culture, Nationwide IT has produced significant business outcomes and demonstrated increasing employee engagement. These achievements have positioned Nationwide IT to scale Agile to an enterprise level. In 2016 the Carnegie Mellon University Software Engineering Institute and IEEE recognized Nationwide IT with the Watts Humphrey Software Process Achievement Award. For more information on the SPA Award, visit http://www.computer.org/portal/web/awards/technical.
Read the technical report.
Network flow records provide a useful overview of traffic on a network that uses the Internet protocol to pass information. Huge numbers of bytes and thousands of packets can be summarized by a relatively small number of records, with few privacy concerns and a small record size (which aids both speed of retrieval and duration of storage). However, examining these records to build an awareness of the security situation on a network requires automation, and it can be daunting to develop a process for building the automated analytics. This webinar presents such a development process, outlining how to determine what to analyze, how to analyze it in an automated manner, and issues involved in validating and interpreting the results.
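The kind of automated analytic the webinar describes can be sketched in a few lines. The flow-record layout and thresholds below are hypothetical (real collection systems such as SiLK store richer fields); the point is that a small summary record per flow supports fast, repeatable analysis.

```python
# Minimal sketch of an automated flow-record analytic: summarize many
# flows into per-source byte totals and surface the heaviest talkers.
# The Flow layout here is a simplified, hypothetical record format.
from collections import namedtuple, defaultdict

Flow = namedtuple("Flow", "src_ip dst_ip dst_port bytes packets")

def top_talkers(flows, n=3):
    """Aggregate flows by source address; return the n largest senders."""
    totals = defaultdict(int)
    for f in flows:
        totals[f.src_ip] += f.bytes
    return sorted(totals.items(), key=lambda kv: -kv[1])[:n]

flows = [
    Flow("10.0.0.1", "192.0.2.5", 443, 500, 5),
    Flow("10.0.0.2", "192.0.2.5", 80, 300, 3),
    Flow("10.0.0.1", "192.0.2.9", 80, 200, 2),
]
print(top_talkers(flows, 1))  # the busiest source and its byte count
```

Interpreting the output (is the top talker a backup server or an exfiltration channel?) is the validation step the webinar stresses; the automation only narrows where to look.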
View the webinar.
SEI Fellows Series: Peter H. Feiler
by Peter H. Feiler
The position of SEI Fellow is awarded to people who have made an outstanding contribution to the work of the SEI and from whom the SEI leadership may expect valuable advice for continued success in the institute's mission. Peter Feiler was named an SEI Fellow in August 2016. This podcast is the second in a series highlighting interviews with SEI Fellows.
View the podcast.
This conference paper will appear in the Proceedings of the IEEE International Conference on Software Architecture (ICSA 2017).
Design problems, frequently the result of optimizing for delivery speed, are a critical part of long-term software costs. Automatically detecting such design issues is a high priority for software practitioners. Software quality tools promise automatic detection of common software quality rule violations. However, since these tools bundle a number of rules, including rules for code quality, it is hard for users to understand which rules identify design issues in particular. Research has focused on comparing these tools on open source projects, but these comparisons have not looked at whether the rules were relevant to design. We conducted an empirical study using a structured categorization approach and manually classified 466 software quality rules from three industry tools: CAST, SonarQube, and NDepend. We found that most of these rules were easily labeled as either non-design (55%) or design (19%). The remainder (26%) resulted in disagreements among the labelers. Our results are a first step in formalizing a definition of a design rule, to support automatic detection.
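The three-way breakdown above (design, non-design, disagreement) falls out of a simple reconciliation of independent labels. The sketch below is a hypothetical illustration of that tally; the label names are taken from the abstract, but the reconciliation code is not from the paper.

```python
# Hypothetical two-labeler reconciliation tally: a rule counts toward a
# category only when both labelers agree; otherwise it is a disagreement.
def tally(labels_a, labels_b):
    counts = {"design": 0, "non-design": 0, "disagree": 0}
    for a, b in zip(labels_a, labels_b):
        counts[a if a == b else "disagree"] += 1
    return counts

# Three invented rules: one agreed "design", two disagreements.
print(tally(["design", "non-design", "design"],
            ["design", "design", "non-design"]))
```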
- Supplemental Materials
Design Rule Classification Rubric and Guidance: This document provides detailed guidance for how to use our classification scheme to identify design rules in static analysis tools.
Design Rules and Analysis Spreadsheet: This Excel document contains lists of design rules from the static analysis tools analyzed in this study as well as final reconciliation results.
Read the conference paper.
Additional Resources
View the latest SEI research in the SEI library
View the latest installments in the SEI Podcast Series.
View the latest installments in the SEI Webinar Series.