
Autonomy, Robotics, Verification, DDoS Attacks, and Software Testing: The Top 10 Posts of 2016


As we have done each year since the blog's inception in 2011, this post presents the 10 most-visited posts of 2016 in descending order, ending with the most popular post. While the majority of our most popular posts were published in the last 12 months, a few, such as Don Firesmith's 2013 posts about software testing, continue to be popular with readers.

10. Verifying Software with Timers and Clocks
9. 10 At-Risk Emerging Technologies
8. Structuring the Chief Information Security Officer Organization
7. Designing Insider Threat Programs
6. Three Roles and Three Failure Patterns of Software Architects
5. Why Did the Robot Do That?
4. Agile Metrics: Seven Categories
3. Common Testing Problems: Pitfalls to Prevent and Mitigate
2. Distributed Denial of Service Attacks: Four Best Practices for Prevention and Response
1. Using V Models for Testing

10. Verifying Software with Timers and Clocks
By Sagar Chaki and Dionisio de Niz
December 12, 2016

Software with timers and clocks (STACs) exchanges clock values to set timers and perform computation. STACs are key elements of the safety-critical systems that make up the infrastructure of our daily lives. In particular, they are used to control systems that interact (and must be synchronized) with the physical world. Examples include avionics systems, medical devices, cars, cell phones, and other devices that rely on software not only to produce the right output, but also to produce it at the correct time. An airbag, for example, must deploy as intended, but just as importantly, it must deploy at the right time. Thus, when STACs fail to operate as intended in the safety-critical systems that rely on them, the result can be significant harm or loss of life. Within the Department of Defense (DoD), STACs are used widely, ranging from real-time thread schedulers to controllers for missiles, fighter planes, and aircraft carriers. In this blog post, Sagar Chaki and Dionisio de Niz present exploratory research to formally verify safety properties of sequential and concurrent STACs at the source-code level.
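To make this concrete, here is a minimal sketch of the kind of code such verification targets: a simplified controller that reads a clock, arms a timer, and must act before a deadline. The controller, constants, and the bounded-response assertion are illustrative assumptions, not an example drawn from the post itself.

    /* A hypothetical sketch of timer-and-clock code that STAC
     * verification targets. All names and constants are illustrative. */
    #include <assert.h>
    #include <stdbool.h>
    #include <stdint.h>

    #define DEPLOY_DEADLINE_MS 30u   /* assumed requirement: deploy within 30 ms */

    static uint32_t now_ms;          /* monotonic clock, advanced by the harness   */
    static uint32_t armed_at_ms;     /* clock value captured when the timer is set */
    static bool     armed, deployed;

    /* One periodic step of a simplified airbag controller. */
    static void airbag_step(bool crash_signal)
    {
        if (crash_signal && !armed) {
            armed = true;
            armed_at_ms = now_ms;    /* read the clock to arm the timer */
        }
        if (armed && !deployed) {
            deployed = true;
            /* Safety property: deployment occurs before the deadline expires.
             * Source-level verification would prove this assertion holds for
             * every schedule and every admissible clock valuation, rather than
             * for the single execution that one test run exercises. */
            assert(now_ms - armed_at_ms <= DEPLOY_DEADLINE_MS);
        }
    }

    int main(void)
    {
        for (int step = 0; step < 10; step++) {
            airbag_step(/* crash_signal = */ step == 3);
            now_ms += 10u;           /* 10 ms control period */
        }
        return deployed ? 0 : 1;
    }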
Read the complete post.

9. 10 At-Risk Emerging Technologies
By Christopher King
May 9, 2016

In today's increasingly interconnected world, the information security community must be prepared to address vulnerabilities that may arise from new technologies. Understanding trends in emerging technologies can help information security professionals, leaders of organizations, and others interested in information security identify areas for further study. Researchers in the SEI's CERT Division recently examined the security of a large swath of technology domains being developed in industry and maturing over the next five years. Our team of analysts--Dan Klinedinst, Todd Lewellen, Garret Wassermann, and Chris King--focused on identifying domains that affect not only cybersecurity but also finance, personal health, and safety. This blog post highlights the findings of our report prepared for the Department of Homeland Security United States Computer Emergency Readiness Team (US-CERT) and provides a snapshot of our current understanding of future technologies.
Read the complete post.

8. Structuring the Chief Information Security Officer Organization
By Nader Mehravari and Julia Allen
February 22, 2016

Most organizations, no matter the size or operational environment (government or industry), employ a senior leader responsible for information security and cybersecurity. In many organizations, this role is known as chief information security officer (CISO) or director of information security. CISOs and others in this position increasingly find that traditional information security strategies and functions are no longer adequate when dealing with today's expanding and dynamic cyber-risk environment. Publications abound with opinions and research expressing a wide range of functions that a CISO organization should govern, manage, and perform. Making sense of all this and deciding on an approach that is appropriate for your specific organization's business, mission, and objectives can prove challenging. In this blog post, Nader Mehravari and Julia Allen present recent research on this topic, including a CISO framework for a large, diverse, U.S. national organization. This framework is the product of interviews with CISOs and an examination of policies, frameworks, maturity models, standards, codes of practice, and lessons learned from cybersecurity incidents.
Read the complete post.

7. Designing Insider Threat Programs
By Andrew P. Moore
September 29, 2014

Insider threat is the threat to an organization's critical assets posed by trusted individuals--including employees, contractors, and business partners--authorized to use the organization's information technology systems. Insider threat programs within an organization help to manage the risks due to these threats through specific prevention, detection, and response practices and technologies. The National Industrial Security Program Operating Manual (NISPOM), which provides baseline standards for the protection of classified information, is considering proposed changes that would require contractors that engage with federal agencies, and that process or access classified information, to establish insider threat programs.

The proposed changes to the NISPOM were preceded by Executive Order 13587, Structural Reforms to Improve the Security of Classified Networks and the Responsible Sharing and Safeguarding of Classified Information. Signed by President Obama in September 2011, Executive Order 13587 requires federal agencies that operate or access classified computer networks to implement insider threat detection and prevention programs.

Since the passage of Executive Order 13587, the following key resources have been developed:

  • The National Insider Threat Task Force developed minimum standards for implementing insider threat programs. These standards include a set of questions to help organizations conduct insider threat self-assessments.
  • The Intelligence and National Security Alliance conducted research to determine the capabilities of existing insider threat programs.
  • The Intelligence Community Analyst-Private Sector Partnership Program developed a roadmap for insider threat programs.

CERT's insider threat program training and certificate programs are based on the above resources as well as CERT's own Insider Threat Workshop, common sense guidelines for mitigating insider threats, and in-depth experience and insights from helping organizations establish computer security incident response teams. As described in this blog post by Andrew P. Moore, researchers from the Insider Threat Center at the Carnegie Mellon University Software Engineering Institute are also developing an approach based on organizational patterns to help agencies and contractors systematically improve the capability of insider threat programs to protect against and mitigate attacks.
Read the complete post.

6. Three Roles and Three Failure Patterns of Software Architects
By John Klein
March 21, 2016

When John Klein was a chief architect working in industry, he was repeatedly asked the same questions: What makes an architect successful? What skills does a developer need to become a successful architect? There are no easy answers to these questions. For example, in his experience, architects are most successful when their skills and capabilities match a project's specific needs. Too often, in answering the question of what skills make a successful architect, the focus is on skills such as communication and leadership. While these are important, an architect must have strong technical skills to design, model, and analyze the architecture. As this post will explain, as a software system moves through its lifecycle, each phase calls for the architect to use a different mix of skills. This post also identifies three failure patterns that Klein has observed working with industry and government software projects.
Read the complete post.

5. Why Did the Robot Do That?
By Stephanie Rosenthal
December 5, 2016

The field of robotics has grown and changed tremendously in the last 15 years, due in large part to improvements in sensors and computational power. These sensors give robots an awareness of their environment, including various conditions such as light, touch, navigation, location, distance, proximity, sound, temperature, and humidity. The increasing ability of robots to sense their environments makes them an invaluable resource in a growing number of situations, from underwater explorations to hospital and airport assistants to space walks. One challenge, however, is that uncertainty persists among users about what the robot senses; what it predicts about its state and the states of other objects and people in the environment; and what outcomes it expects from the actions it takes. In this blog post, Stephanie Rosenthal describes research that aims to help robots explain their behaviors in plain English and offer greater insights into their decision making.
Read the complete post.

4. Agile Metrics: Seven Categories
By Will Hayes
September 22, 2014

More and more, suppliers of software-reliant Department of Defense (DoD) systems are moving away from traditional waterfall development practices in favor of agile methods. As described in previous posts on this blog, agile methods are effective for shortening delivery cycles and managing costs. If the benefits of agile are to be realized effectively for the DoD, however, personnel responsible for overseeing software acquisitions must be fluent in metrics used to monitor these programs. In this blog post, Will Hayes highlights the results of an effort by researchers at the Carnegie Mellon University Software Engineering Institute to create a reference for personnel who oversee software development acquisition for major systems built by developers applying agile methods. This post also presents seven categories for tracking agile metrics.
Read the complete post.

3. Common Testing Problems: Pitfalls to Prevent and Mitigate
By Donald Firesmith
April 5, 2013

A widely cited study for the National Institute of Standards & Technology (NIST) reports that inadequate testing methods and tools annually cost the U.S. economy between $22.2 and $59.5 billion, with roughly half of these costs borne by software developers in the form of extra testing and half by software users in the form of failure avoidance and mitigation efforts. The same study notes that between 25 and 90 percent of software development budgets are often spent on testing. This post, the first in a two-part series by Donald Firesmith, highlights results of an analysis that documents problems that commonly occur during testing. Specifically, the series identifies and describes 77 testing problems organized into 14 categories, lists the potential symptoms by which each can be recognized along with its potential negative consequences and potential causes, and makes recommendations for preventing these problems or mitigating their effects.
Read the complete post.

2. Distributed Denial of Service Attacks: Four Best Practices for Prevention and Response
By Rachel Kartch
November 21, 2016

In October 2016, Internet users across the eastern seaboard of the United States had trouble accessing popular websites, such as Reddit, Netflix, and the New York Times. As reported in Wired Magazine, the disruption was the result of multiple distributed denial of service (DDoS) attacks against a single organization: Dyn, a New Hampshire-based Internet infrastructure company.

DDoS attacks can be extremely disruptive, and they are on the rise. The Verisign Distributed Denial of Service Trends Report states that DDoS attack activity increased 85 percent in each of the last two years, with 32 percent of those attacks in the fourth quarter of 2015 targeting IT services, cloud computing, and software-as-a-service companies. In this blog post, Rachel Kartch provides an overview of DDoS attacks and best practices for mitigating and responding to them, based on cumulative experience in this field.
Read the complete post.

1. Using V Models for Testing
By Donald Firesmith
November 11, 2013

The verification and validation of requirements are critical parts of systems and software engineering. The importance of verification and validation (especially testing) is a major reason that the traditional waterfall development cycle underwent a minor modification to create the V model that links early development activities to their corresponding later testing activities. This blog post introduces three variants on the V model of system or software development that make it more useful to testers, quality engineers, and other stakeholders interested in the use of testing as a verification and validation method.
Read the complete post.

Wrapping Up 2016 and Looking Ahead

This has been a great year for the SEI Blog. We are looking forward to weekly posts highlighting the work of SEI and CERT researchers in 2017 and beyond. Some highlights to look forward to include:

  • James Edmondson will write about software solutions for distributed robotics in space.
  • Will Klieber will write about his work on automated code repair.
  • Rotem Guttman will write about his work on cyber-kinetic effects integration.

As always, we welcome your ideas for future posts and your feedback on those already published. Please leave feedback in the comments section below.

Additional Resources

Download the latest publications from SEI researchers at our digital library
https://resources.sei.cmu.edu/library/.

