
Five Reasons the Cybersecurity Field Needs Trusted Data Sets and Meaningful Metrics


Cybersecurity is a domain rich with data, yet that richness too often yields only poor insights. CISOs ask how best to allocate resources to address threats, practitioners ask how to measure the effectiveness of one solution over another, senior organizational leaders strive to identify and quantify organizational risks, and public officials work to inform organizational or national policy. Answers often rely on anecdotes, small exemplars generalized beyond their intended use, or weakly coupled analogies. Progress is hard to track at any level, which dampens the willingness and ability to invest time or resources in efforts that might reduce risk and increase the resilience of individuals, organizations, regions, and even specific technologies. This blog post describes how dedicating resources to building trusted data sets and identifying meaningful metrics, investments that have benefited many other fields of study, will also benefit cybersecurity.

Statistics have benefited many fields of study

Decision making is improved by access to robust and reliable information. Solutions rooted in empirically rigorous measurement are much more likely to be effective than those derived from codifying the conventional wisdom of a community. This lesson is as important to cybersecurity as it is to any other domain. In Foundational Cybersecurity Research: Improving Science, Engineering, and Institutions, the National Academies of Sciences, Engineering, and Medicine advocate for the improvement of cybersecurity science by integrating "the social, behavioral, and decision sciences into the security science research effort." The report also argues that supporting the development of security science will require more "material resources and institutional structures" to enable this work. The National Academies call on cybersecurity researchers and practitioners to adopt methods of inquiry that produce better data and meet the core requirements of scientific validity.

The evolution of cybersecurity--including significant progress in risk reduction--will be stunted without a deliberate and significant focus on the quality and quantity of cybersecurity data. Making progress toward this objective requires an interdisciplinary approach, as advocated by the National Academies, and necessitates an examination of the measurement practices currently operating in cybersecurity. Fundamental to this maturation of cybersecurity measurement is the creation and operation of trusted data sources.

Many fields of study have benefited from developing methods to collect, measure, and analyze large amounts of data to create useful observations, explanations, and hypotheses about the future. For example, actuarial science helps insurers measure the relationship between insurance coverage and health outcomes and helps the government assess the impact of changes in social welfare programs. Econometrics applies statistical methods to economic data for a wide range of purposes; businesses use it to help decide what to produce, portfolio managers use it to model asset returns, and governments use it to guide monetary policy. Economists and psychologists expanded the concept to behavioral econometrics, which has enabled scientists to better understand "human judgment and decision-making under uncertainty." Environmental science uses data about air and greenhouse gas emissions by industry to evaluate environmental pressures from economic activities and data on biodiversity to track levels of threats to species. In medicine, statistics help professionals track the spread of disease and determine the effectiveness of treatments. And sabermetrics, the empirical analysis of baseball statistics, has improved elements of the game from pitching and hitting styles to scouting objectives.

The U.S. Centers for Disease Control and Prevention and the U.S. Department of Labor actively curate data for their respective stakeholders. These data sources are universally acknowledged as essential to research and policy creation. The measurement and data collection functions within these entities were created in response to a need for a common trusted repository. Cybersecurity currently lacks an equivalent repository and as a result draws from a wide array of data sources. Many of these sources are populated with items of unknown or dubious provenance. This observation does not imply that these shortcomings result from malicious intent; rather, they are the product of a discipline struggling to establish scientific rigor in a highly fluid environment. What follows are five ways that trusted data sets and meaningful metrics will improve cybersecurity.

Managing risk

The cybersecurity industry has tools to collect data, quantify and prioritize risks, and support the analysis of threats to enterprises and technical vulnerabilities in systems and networks. The CERT Division has been engaged in the identification and quantification of risk for several decades. The CERT OCTAVE Allegro risk assessment methodology is designed to identify and measure enterprise information risks. This methodology is augmented by a catalog of specialized methods to identify and mitigate sources of insider threat, both malicious and unintentional. The CERT Division's thinking about measurement also extends to the challenge of aligning cybersecurity capabilities with key business drivers. The Goal-Question-Indicator-Metric (GQIM) process frames fundamental questions about cybersecurity risk and reward for nontechnical audiences.
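
To make the GQIM progression concrete, here is a minimal sketch in Python of how a single goal can be traced through a question and an indicator down to a collectable metric. The goal, question, and metric shown are hypothetical illustrations, not drawn from CERT materials.

    from dataclasses import dataclass, field

    @dataclass
    class Metric:
        name: str  # the quantity actually collected
        unit: str  # e.g., "hours", "percent"

    @dataclass
    class Indicator:
        evidence: str  # what observable evidence would answer the question
        metrics: list = field(default_factory=list)

    @dataclass
    class Question:
        text: str
        indicators: list = field(default_factory=list)

    @dataclass
    class Goal:
        statement: str  # a business driver, stated in nontechnical language
        questions: list = field(default_factory=list)

    # Hypothetical example: trace one business driver down to a metric.
    goal = Goal(
        statement="Sustain order processing during a cyber incident",
        questions=[Question(
            text="How quickly can the order-processing service be restored?",
            indicators=[Indicator(
                evidence="Recovery times observed in incidents and exercises",
                metrics=[Metric(name="time_to_restore_service", unit="hours")],
            )],
        )],
    )

Tracing each goal down to metrics in this way makes explicit which data an organization must collect before it can quantify any claim about risk.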

For more than 15 years, the SEI has collected malware samples in the CERT Artifact Catalog, analyzed those threats, and identified methods for automating the collection and analysis of malware. Based on that data set of more than 60 million artifacts collected from thousands of different sources, we developed a set of malware analysis tools to further help organizations protect against threats to their systems and networks. Static analysis tools automate identification of some types of defects in software products, including those that create security vulnerabilities, and produce alerts about possible flaws in the source code. SEI researchers have developed tools that analyze alert data, classify the alerts as true- or false-positive, and strategically prioritize them for human examination.
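
As a rough illustration of that kind of triage, the sketch below shows the general technique (not the SEI's actual tooling): fit a classifier to a history of audited alerts, then rank new alerts by their predicted probability of being true positives. The file names and column names are hypothetical.

    # Rank static-analysis alerts by likelihood of being true positives.
    # "audited_alerts.csv", "new_alerts.csv", and all columns are hypothetical.
    import pandas as pd
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    history = pd.read_csv("audited_alerts.csv")  # past alerts with verdicts
    features = ["checker_id", "cwe_id", "file_churn", "function_length"]
    X = pd.get_dummies(history[features], columns=["checker_id", "cwe_id"])
    y = history["is_true_positive"]  # 1 = human-confirmed defect

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = RandomForestClassifier(n_estimators=200, random_state=0)
    model.fit(X_train, y_train)
    print(f"holdout accuracy: {model.score(X_test, y_test):.2f}")

    # Queue unaudited alerts for human review, most likely defects first.
    new = pd.read_csv("new_alerts.csv")
    X_new = pd.get_dummies(new[features], columns=["checker_id", "cwe_id"])
    X_new = X_new.reindex(columns=X.columns, fill_value=0)
    new["p_true_positive"] = model.predict_proba(X_new)[:, 1]
    review_queue = new.sort_values("p_true_positive", ascending=False)

The design choice that matters here is the label source: predictions are only as trustworthy as the audited history they are trained on, which is exactly why curated, trusted data sets matter.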

Regulating an evolving threat landscape

High-profile cyber incidents have revealed fraud, disclosed intellectual property, and exposed the personally identifiable information of millions of people, negatively affecting the reputations of large businesses and organizations. The increasingly pervasive role of software in our lives and the ever-changing threat landscape have only raised these risks. Meanwhile, more organizations are outsourcing support and processing activities, and the supply chain for software products and the hardware that contains them has grown more complex. These distributed services and third-party suppliers increase the enterprise attack surface.

Financial sector regulators, for example, are reacting to these threats by creating new regulations, especially regulations that address third-party cybersecurity. Unfortunately, some of this guidance is inconsistent and may involve conflicting approaches. Organizations continue to wonder whether they have invested in effective cybersecurity measures in a growing, but ill-defined, threat environment. Regulations related to cybersecurity have largely focused on measuring conformance with various practices. Organizations in turn focus on complying with the requirements of external accountability. Instead, regulations should focus on efficacy and be based on achieving performance objectives identified by bodies of practice. Then organizations can focus on increasing benefits for their users. If we transcend measuring conformance and measure performance, then regulations using the right metrics can drive substantial improvements in cybersecurity.

Transforming insurance underwriting

The process of insurance underwriting is data-intensive and rooted in detailed measurement of key attributes. The industry followed a long trajectory of learning and improvement to determine which data was truly predictive and how to reliably collect the necessary inputs. In 1898 the Travelers Insurance Company issued the first automobile policy in the United States, predating the first U.S. automobile fatality by 12 months. The insurance industry has often needed to make risk decisions with initially limited data, and in many respects underwriting cybersecurity risks in 2020 resembles underwriting nascent automobile risks in 1898.

The U.S. Department of Homeland Security has explored this topic in collaboration with the insurance industry. This analysis points to the absence of a common cyber incident data repository as an enduring challenge. Subject-matter experts offered several desirable characteristics for a cyber incident repository:

  1. validation of incident consequences
  2. historical information that demonstrates loss patterns
  3. mapping of key dependencies across sectors
  4. ability to monitor risk accumulation across sectors
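
A minimal sketch, with hypothetical field names, of an incident record and a query that together could support those four characteristics:

    from dataclasses import dataclass, field
    from datetime import date

    @dataclass
    class IncidentRecord:
        incident_id: str
        occurred_on: date     # accumulates into historical loss patterns (2)
        sector: str           # e.g., "finance", "energy"
        loss_usd: float       # financial consequence of the incident
        validated_by: str     # who confirmed the consequence (1)
        upstream_dependencies: list = field(default_factory=list)  # (3)

    def accumulated_risk(records, sector):
        """(4) Monitor risk accumulation: total validated losses in a sector."""
        return sum(r.loss_usd for r in records if r.sector == sector)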

Market forces are leading many insurers to ask fewer rather than more questions of an organization seeking cyber coverage. The survey instruments used by insurers vary widely in depth and complexity. As a consequence, insurers typically have a less detailed understanding of an insured organization's cybersecurity posture than they do of other risk elements, which increases the role of past claims data in making future risk decisions. This limitation should be considered when cybersecurity researchers and practitioners enumerate trustworthy data sources.

Adopting security programs

Over the last several decades, cybersecurity has been treated as

  • a technical challenge: Just implement a range of solutions from different classes of protection or defense solutions.
  • an engineering challenge: Security is an attribute to be stated as a requirement in systems design and development.
  • a human problem: Train the users better and they won't click on spam email.
  • a leadership problem: If the leadership understands the threats, they will make sure the issue is solved.

In fact, reducing the risk of connected technologies and increasing the overall resilience of business and mission operations and their associated data and systems require each of those things and more. Recent studies have investigated the most impactful decisions that increase systemic protections and reduce overall vulnerabilities. The backward-looking view highlighted key design decisions, including auto-update, that reduced the overall cost of basic actions such as sustaining cyber hygiene. It also reinforced the idea that solutions that scale and place a low burden on the end user are often highly impactful.

Adoption of security programs is uneven, however, and many of the current efforts to drive adoption involve collecting and sharing threat information. Understanding and aligning security programs to threats is certainly important. But a foundation of data that supports the allocation of resources and measures progress in adoption would enable more effective organizational, sector, regional, and technical policies and increase overall resilience and security.

Informing national policy

The United States and other nations are grappling with a series of important cybersecurity policy and legal questions: What is the best division of responsibility for cybersecurity between governments and the private sector? How can we hold individuals, organizations, and governments accountable for flaws in systems and software? Which legal frameworks are appropriate to drive accountability? The complexity grows when data crosses international boundaries, and accountable individuals and organizations operate in nations with different laws. In At the Nexus of Cybersecurity and Public Policy, the National Research Council and collaborators explain that cybersecurity policy must function in balance with other policy concerns; for example, strengthening cybersecurity may protect privacy or violate privacy, depending on what defenses are applied.

In the United States, cybersecurity policy has focused largely on constructing and operating mechanisms to share threat and vulnerability intelligence. Reporting is a vital component in the mosaic of data needed to reason about cybersecurity, but these data sets are far from the only pieces required to comprehensively analyze cybersecurity in a national context. Detailed information about past cyber incidents should feature prominently in analysis efforts, yet these data sets typically contain gaps and lack the standardization required to employ advanced data science. A pervasive and critical blind spot in national cybersecurity data is the failure to understand the posture of organizations and sectors before an incident occurs. Retroactively determining the capabilities of an organization after a disruptive event is not a reliable way to measure performance.

Informing national policy in a meaningful way will require improved collection and organization of data in three broad categories:

  1. threat and vulnerability intelligence
  2. incident data focused on tangible impacts
  3. measurement of the performance of key capabilities before and during incidents
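
As a toy illustration, with entirely invented records and field names, the sketch below merges the three categories, keyed by sector, into a single analytic view:

    # Combine the three data categories into one per-sector picture.
    # All values and field names below are invented for illustration.
    threat_intel = {"finance": {"active_campaigns": 14}}
    incident_impacts = {"finance": {"incidents": 37, "losses_usd": 2.1e8}}
    capability_scores = {"finance": {"pre_incident_maturity": 0.62,
                                     "incident_response": 0.48}}

    def sector_view(sector):
        """Merge threat, impact, and capability data for one sector."""
        return {"sector": sector,
                **threat_intel.get(sector, {}),
                **incident_impacts.get(sector, {}),
                **capability_scores.get(sector, {})}

    print(sector_view("finance"))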

The combination of these data sources will yield insights beyond the sum of their parts. Relying on silos of untrustworthy data will, in the best case, lead to viewing national cybersecurity challenges through a narrow and flawed aperture. In the worst case, it will lead to allocating resources to unproductive endeavors and creating a false sense of safety, further complicating the nation's understanding of its adversaries and its ability to navigate and deter them in a complex world.

Looking Ahead

The poet and anthropologist Andrew Lang famously accused a rival of using "statistics as a drunken man uses lamp posts--for support rather than for illumination." The goal of applying statistics and metrics in cybersecurity must be illumination and not the unexamined confirmation of conventional wisdom. Substantive progress in the domain requires a renewed focus on the veracity and completeness of data. These are the prerequisites for the scientific inquiry acknowledged as a collective blind spot in the profession.

Exploring the limitations of traditional data sources and accepted metrics will undoubtedly be unsettling for some, but it is the price of progress. A cybersecurity future shaped by better metrics and data promises both lasting efficiencies and new techniques for thwarting adversaries. A key impediment on this path is the lack of a large-scale, shared, and trusted data repository. Such a repository would deliver benefits to researchers and practitioners, as it has in other domains. Improved design of cybersecurity experiments, as suggested by the National Academies, coupled with such a repository would permit an important maturation of the field. Ultimately, the insights generated by better science will translate into better measurements and metrics employed by all who seek to advance the state of the practice of cybersecurity.

Additional resources

Read the blog post Selecting Measurement Data for Software Assurance Practices or the technical note Exploring the Use of Metrics for Software Assurance to learn more about the Software Assurance Framework (SAF) practice.

Read the blog post Why Is Measurement so Hard? to understand why metrics should be tied to business or mission objectives.

Read the CrossTalk article Assessing DoD System Acquisition Supply Chain Risk Management to learn more about the growing challenge of cyber risks in the defense supply chain.

Read the conference paper Goal-Based Assessment for the Cybersecurity of Critical Infrastructure to find out how assurance cases can improve cybersecurity risk assessment.
