Date: July 5, 2017
In light of recent network attacks, security automation and continuous monitoring of your network are a must. Up-to-date knowledge about the state of the network and the endpoints that comprise it is more critical than ever. Network posture information provides network defenders with the information they need to properly secure critical data, remediate vulnerabilities before they are exploited, and deflect attacks by malicious actors. The same network posture information underpins network resiliency, enabling operators to fight through and recover from network attacks when deflection fails. Automating the processes to collect this data, leverage it against network attacks, and support resilient network operation saves time and resources, thereby offering both security benefits to the network and economic benefits to network owners.
Why, then, is security automation not ubiquitous? Because proper security automation requires interoperability from a diverse set of products. Each tool or analytic process on the network must work in concert—from orchestrators that direct network security actions to the collectors that feed them data, from Security Information and Event Management (SIEM) tools that detect attacks to the firewalls that execute the commands to block malicious activity, from the routers and switches that form the backbone of the network to the boundary devices that protect them. Each endpoint needs to be part of a fully automated solution. And the scope of the desired solution—automating network security functions to the fullest extent feasible—is enormous. Currently, network endpoints, tools, and analytics do not interoperate sufficiently within available security automation solutions to truly automate network security functions.
Interoperable solutions for this broad problem space can be achieved only via standards. The IETF Security Automation and Continuous Monitoring (SACM) Working Group has been working to properly scope the security automation problem. This includes identifying what posture information is critical to network security and standardizing how to collect and share that data with the evaluation tools and analytics that can use it to improve their ability to detect and respond to attacks. To make the broad scope of security automation standardization more manageable, it must be broken into a prioritized set of functional security automation tasks.
SACM has chosen vulnerability assessment as the first network security task to automate. The SACM Vulnerability Assessment Scenario [1] describes how an enterprise can evaluate its susceptibility to an announced vulnerability. There are many reasons why vulnerability assessment is a good first choice for a security automation use case.
- It is a critical network security task—the rise of “named vulnerabilities” such as Heartbleed and Shellshock is indicative of that.
- It is a difficult task to perform without a large expenditure of resources.
- The effects of poor vulnerability assessment persist over time. For example, years after Heartbleed was announced, there are still unpatched SSL implementations threatening network security.
- Vulnerability assessment is certainly a challenge for enterprise networks, but it could easily be automated if evaluators were provided with the right set of data.
Pursuing vulnerability assessment as the first automation use case has the added benefit that it forces us to address the fundamental problem of knowing what is on our network. It is impossible to assess a network’s vulnerability without knowing the composition of the network. Specifically, the evaluator needs to know what endpoints are on the network and what software they are running—knowledge supported by sound hardware and software asset management practices. Hardware and software asset management underpin vulnerability assessment, as well as configuration management, threat detection, malware analysis, and a host of other automatable network security functions.
While it is obvious that network operators must continuously monitor hardware and software assets on the network, how this monitoring is architected is vitally important. To get this data to the evaluators that need it, it must be collected in a well-known, structured format. For the security of the endpoints being monitored, the solution requires authenticated and encrypted protocols. Data must be reported as promptly as possible when monitored information changes, eliminating the need for periodic network scans that an attacker could leverage. For scalability, the solution must be flexible and lightweight. And, to support future security automation use cases, the solution must be extensible.
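To make the "structured, change-driven reporting" requirement concrete, here is a minimal sketch of an inventory change event. The field names and JSON shape are hypothetical—nothing here comes from a SACM specification—but they illustrate reporting a change the moment it happens rather than waiting for a scan.

```python
import json

# Hypothetical (not from any SACM spec) structure for a software-inventory
# change event, pushed to a compliance server the moment an install occurs
# rather than discovered later by a periodic scan.
def inventory_change_event(endpoint_id, action, sw_name, sw_version):
    """Build a structured change event as a JSON string."""
    event = {
        "endpoint-id": endpoint_id,  # stable identity of the reporting endpoint
        "action": action,            # "install" | "remove" | "update"
        "software": {"name": sw_name, "version": sw_version},
    }
    return json.dumps(event)

msg = inventory_change_event("host-42", "install", "openssl", "1.0.1f")
```

A real deployment would carry such events over an authenticated, encrypted channel and use a standardized data model rather than this ad hoc schema.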
The IETF has standardized the Network Endpoint Assessment (NEA) architecture [2], so a solution that meets these requirements already exists for client machines. Using the PT-TLS protocol [3], any endpoint connected to the network can communicate posture information, including endpoint identity, to a compliance server. SACM, building upon work from the Trusted Computing Group, has added to these protocols and specified how to communicate software identification data over NEA in the Software Inventory Message and Attributes for PA-TNC (SWIMA) draft specification [4].
SWIMA is an instantiation of an NEA collector that can monitor endpoint software inventory and push reports to a compliance server. Using SWIMA over PT-TLS enables collection of endpoint identity and software inventory in advance of a vulnerability being announced. It uses software identification (SWID) tags, an ISO/IEC standard, as its data model, enabling software vendors and owners to develop unique XML representations of a piece of software's identity. The compliance server can store this data for future reference in a data repository.
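As a rough illustration of the SWID data model, the snippet below parses a minimal tag with Python's standard library. The element and attribute names follow the ISO/IEC 19770-2:2015 schema, but the product, tagId, and entity values are made up, and real tags carry many more attributes than shown here.

```python
import xml.etree.ElementTree as ET

# Illustrative SWID tag: schema-conformant element names, invented values.
SWID_TAG = """
<SoftwareIdentity xmlns="http://standards.iso.org/iso/19770/-2/2015/schema.xsd"
                  name="ExampleApp" version="1.0.1"
                  tagId="example.com-ExampleApp-1.0.1">
  <Entity name="Example, Inc." regid="example.com"
          role="tagCreator softwareCreator"/>
</SoftwareIdentity>
"""

def software_identity(tag_xml):
    """Pull the (name, version, tagId) triple a collector would report."""
    root = ET.fromstring(tag_xml)
    return root.get("name"), root.get("version"), root.get("tagId")

name, version, tag_id = software_identity(SWID_TAG)
```

Because each vendor authors its own tags, the same parsing logic yields a uniform inventory view across otherwise heterogeneous software.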
Figure 1. Pre-collection of Endpoint Software Inventory Information
Some types of endpoints will not be able to support the client software required by NEA. Others, particularly network devices, already support endpoint type-specific protocols that are designed specifically for generating posture reports. It is impossible to have good network hygiene without posture reports from network devices. An IETF mailing list, Posture Assessment through Network Information Collection (PANIC) [5], is exploring solutions for network device posture collection, particularly YANG models that could provide network defenders with the right information about the posture of their network devices. These models, communicated over NETCONF and leveraging the draft YANG Push specification [6], may help maintain an up-to-date view of the state of the network.
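The YANG Push approach can be sketched as a NETCONF RPC that subscribes to on-change updates for an inventory subtree. This fragment is only a sketch: the element and namespace names are patterned on a draft revision and may differ in later versions, and /software-inventory is a placeholder path standing in for whatever model the PANIC discussion settles on.

```xml
<rpc message-id="101" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
  <establish-subscription
      xmlns="urn:ietf:params:xml:ns:yang:ietf-event-notifications">
    <!-- Ask the device to push updates whenever this subtree changes -->
    <xpath-filter>/software-inventory</xpath-filter>
  </establish-subscription>
</rpc>
```

The device then streams notifications as its state changes, giving the compliance server the same change-driven view of network devices that SWIMA provides for client endpoints.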
Figure 2. Pre-collection of Network Device Software Inventory Information
With a robust collection of endpoint software inventory data now available at the data repository, enterprise security tools with appropriate authorization can access and make use of this data. While the vulnerability assessment scenario focuses on using software inventory data for a very specific use case, it is easy to envision this data being valuable to any number of network security tools—asset management tools, behavioral analytics, and even threat detection tools can improve their outputs with access to accurate, up-to-date software inventory information. Each of these evaluators will query the data repository for the data they require. To extend our vulnerability assessment example, a vulnerability evaluator will query the data repository for endpoints that have the vulnerable software installed, leveraging the hardware and software asset management data collected from the network endpoints. Because other network evaluators draw on the same repository, the result is shared situational awareness of the network's posture.
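The evaluator's query boils down to a simple question: which endpoints report the vulnerable software? The sketch below answers it against an in-memory stand-in for the data repository—the repository shape, endpoint names, and query interface are all invented for illustration, not anything SACM has specified.

```python
# Hypothetical in-memory stand-in for the data repository: maps endpoint IDs
# to the (name, version) pairs reported by their software inventory collectors.
INVENTORY = {
    "host-1": {("openssl", "1.0.1f"), ("nginx", "1.10.0")},
    "host-2": {("openssl", "1.0.2k")},
    "host-3": {("openssl", "1.0.1f")},
}

def endpoints_with(software, version):
    """Answer the evaluator's query: which endpoints run the vulnerable build?"""
    return sorted(
        endpoint for endpoint, inventory in INVENTORY.items()
        if (software, version) in inventory
    )

# OpenSSL 1.0.1f is a Heartbleed-vulnerable release, so these endpoints
# would be flagged for remediation.
at_risk = endpoints_with("openssl", "1.0.1f")
```

Because the inventory was collected before the vulnerability was announced, the answer is available immediately—no network-wide scan required.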
Figure 3. Evaluators Query the Data Store
In addition to understanding network posture, enterprises need a shared situational awareness of current threats to the network. Situational awareness may take many forms, depending on the use case being addressed. For vulnerability assessment, the vulnerability evaluation tool must know what vulnerabilities it is searching for. The SACM Vulnerability Assessment draft defines a vulnerability data repository that can provide information on weaknesses that could compromise network security. This is a content repository that should be accessed by evaluators to help define the criteria against which they perform their evaluation. Such a content repository could be implemented using the IETF Managed Incident Lightweight Exchange (MILE) Resource Oriented Lightweight Information Exchange (ROLIE) draft specification [7]. ROLIE builds on the Atom Publishing Protocol [8] to share software, vulnerability, cyber threat intelligence, configuration checklist, and other security automation information in a scalable way. Vendors, security researchers, and network management personnel can stand up ROLIE repositories to ensure that their evaluators have the most up-to-date security information possible.
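Since ROLIE repositories are Atom feeds, an evaluator can consume them with ordinary feed tooling. The entry below is illustrative: the Atom elements are standard, the information-type category is patterned on the ROLIE draft, and the identifiers and link are invented.

```python
import xml.etree.ElementTree as ET

# Illustrative Atom entry of the kind a ROLIE vulnerability feed might carry.
ENTRY = """
<entry xmlns="http://www.w3.org/2005/Atom">
  <id>urn:example:vuln:CVE-2014-0160</id>
  <title>CVE-2014-0160 (Heartbleed)</title>
  <category scheme="urn:ietf:params:rolie:category:information-type"
            term="vulnerability"/>
  <link rel="alternate" href="https://example.com/vulns/CVE-2014-0160"/>
</entry>
"""

ATOM = "{http://www.w3.org/2005/Atom}"

def entry_summary(entry_xml):
    """Extract the fields an evaluator would use to pick up new content."""
    entry = ET.fromstring(entry_xml)
    title = entry.find(ATOM + "title").text
    info_type = entry.find(ATOM + "category").get("term")
    return title, info_type

title, info_type = entry_summary(ENTRY)
```

An evaluator polling such a feed could filter on the information-type category to pull only vulnerability content into its evaluation criteria.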
Figure 4. Content Repository Shares Security Data
The SACM Working Group will demonstrate how the SWIMA, NEA, and ROLIE standards can meet the Vulnerability Assessment Scenario requirements at the Hackathon prior to IETF 99 in Prague. We invite interested parties to join us on the SACM mailing list [9]. Those interested in security automation for network equipment are welcome on the PANIC mailing list [10], and those interested in content repository work are welcome on the MILE mailing list [11].