The development and operation of secure systems continue to present significant technical challenges. To help address these challenges, the National Security Agency initiated a coordinated set of focused research activities under the auspices of six Science of Security Lablets, sited at the University of Illinois at Urbana-Champaign, North Carolina State University, Carnegie Mellon University, the University of Kansas, Vanderbilt University, and the University of California, Berkeley. More information about the Lablets can be found at the Science of Security Virtual Organization.
The Lablets share a broad common goal: to develop the foundations of a science of security, an explicit intellectual framework of process and methods for advancing scientific understanding and accelerating the transition of scientific results into practice, with a focus on advancing solutions to the most difficult technical problems. The Illinois Lablet will address the five “hard problems” listed below.
Science of Security Hard Problems
- Scalability and Composability: Develop methods to enable the construction of secure systems with known security properties from components with known security properties, without a requirement to fully re-analyze the constituent components.
- Policy-Governed Secure Collaboration: Develop methods to express and enforce normative requirements and policies for handling data with differing usage needs and among users in different authority domains.
- Security-Metrics-Driven Evaluation, Design, Development, and Deployment: Develop security metrics and models capable of predicting or confirming whether a given cyber system preserves a given set of security properties (deterministically or probabilistically) in a given context.
- Resilient Architectures: Develop means to design and analyze system architectures that deliver required service in the face of compromised components.
- Understanding and Accounting for Human Behavior: Develop models of human behavior (of both users and adversaries) that enable the design, modeling, and analysis of systems with specified security properties.