The development and operation of secure systems continue to present significant technical challenges. To help address these challenges, the National Security Agency initiated a coordinated set of focused research activities under the auspices of four Science of Security Lablets, located at the University of Illinois at Urbana-Champaign, North Carolina State University, the University of Maryland, and Carnegie Mellon University. More information about the Lablets can be found at the Science of Security Virtual Organization.
The Lablets share a common goal: to develop the foundations for a science of security—an explicit intellectual framework of processes and methods for advancing scientific understanding and accelerating the transition of scientific results into practice—with a focus on solutions to a selection of the most difficult technical problems. The Illinois Lablet will address the five “hard problems” listed below, with a particular emphasis on predictive metrics.
- Scalability and Composability: The challenge of this problem is to develop methods enabling the construction of secure systems with known security properties.
- Policy-Governed Secure Collaboration: Projects addressing this hard problem seek to develop methods to express and enforce normative requirements and policies for handling data with differing usage needs and among users in different authority domains.
- Predictive Security Metrics: The challenge of this problem is to develop security metrics and models capable of predicting whether, or confirming that, a given cyber system preserves a given set of security properties (deterministically or probabilistically) in a given context.
- Resilient Architectures: The challenge of this problem is to develop the means to design and analyze system architectures that deliver required service in the face of compromised components.
- Human Behavior: Projects addressing this hard problem seek to develop models of human behavior (of both users and adversaries) that enable the design, modeling, and analysis of systems with specified security properties.