In my last post I talked (briefly) about Protocol Anomaly Detection (PAD) and Signatures. My original intention was to talk about other topics, but given that the security aspect has been so popular, I'm going to talk more about the security aspects and (quickly) circle back to optimisation and performance management later.
Obfuscation
Like any security inspection technology, there are limitations and trade-offs to Deep Protocol Inspection (DPI). DPI can only make security decisions on traffic it can see, and that is becoming an increasingly common problem. Encrypting traffic with SSL or TLS was traditionally reserved for banking applications and the like, but the exploits of the NSA have led many organisations to adopt an “encrypt everything” approach, including simple web traffic. Applying PAD or Protocol Signature checks to network flows after they have been encrypted is difficult, but not impossible. The size of the problem is influenced by the direction of the traffic you are trying to inspect. For example:
- Inspecting your organisation's inbound traffic (e.g. heading towards a public web server) is the easier case. For a server under your administrative control you should have access to the private key used for encryption, so the firewall (or other network security device) can decrypt the traffic flows on the fly and make a decision based on the payload. Alternatively, if you have a server load balancer that supports SSL offload (and I can't think of one that doesn't), you may choose to inspect the traffic after it has been decrypted (the first sketch after this list shows the idea).
- Inspecting traffic as it leaves your network (e.g. headed to a 3rd-party website) is trickier. When the firewall sees a client-to-server SSL session being established (after the TCP three-way handshake), the firewall takes over and acts as a “man in the middle” (MITM). The client establishes an SSL session to the firewall, and in turn the firewall creates a separate session to the outside server. This allows the firewall (or other secure-proxy device) to inspect traffic transiting the network, looking for either malicious or undesirable content (such as credit card numbers heading out of the network). The drawback is that this method totally breaks the browser trust model (well, breaks it even more). When the SSL session is set up, the server proves its identity by sending a certificate containing its public key, counter-signed by a Certificate Authority (CA) known to, and trusted by, the browser. This mechanism is designed to prevent exactly this kind of MITM attack, so when the firewall intercepts the session, the user’s browser will spot the mismatch between the identity of the real server and the one presented by the firewall. Encryption will still work, but the user will get a very visible and very necessary “nag” screen. The trick to preventing the “nag” is to mess with the trust model further by creating a “wildcard” certificate, which allows the firewall to impersonate any SSL-enabled site. For this to work, the CA certificate used to sign these bogus certificates is placed in the “Trusted Root Certification Authorities” list on every device that needs to connect through your network inspection point (firewall, web proxy, etc.). If your network consists of Windows 2000->8.x domain-member workstations, this is about 5 minutes’ work; if you have almost anything else connected, it can be a significant logistical exercise (the second sketch after this list shows the client's side of this trust decision). In fact, a significant part of the challenge around “Bring Your Own Device” (BYOD) policies is establishing client-to-network trust, and most of them have to mess about with certificates to do it.
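To make the inbound case concrete, here is a minimal sketch in Python of a TLS-terminating inspection point. The file names, port, and the inspect_request() check are all illustrative assumptions; the point is simply that holding the server's private key lets a device sit in the path, decrypt, and see the plaintext, exactly as a firewall or SSL-offloading load balancer does.

```python
import socket
import ssl

# Hypothetical file names: in practice these are your web server's own
# certificate and private key, which you control for inbound traffic.
CERT_FILE = "server.crt"
KEY_FILE = "server.key"

def inspect_request(payload: bytes) -> bool:
    """Toy stand-in for real PAD/signature inspection of the plaintext."""
    return b"../" not in payload  # crude path-traversal check

# Because we hold the private key, we can terminate TLS ourselves and read
# the decrypted payload before (optionally) forwarding it on.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.load_cert_chain(certfile=CERT_FILE, keyfile=KEY_FILE)

with socket.create_server(("0.0.0.0", 8443)) as listener:
    with ctx.wrap_socket(listener, server_side=True) as tls_listener:
        conn, addr = tls_listener.accept()   # TLS handshake happens here
        with conn:
            request = conn.recv(4096)        # plaintext, post-decryption
            if inspect_request(request):
                pass  # forward the request to the real web server
            else:
                pass  # log and drop the flow
```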
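And from the client's side, this second sketch shows the trust decision that produces (or silences) the nag screen, using Python's standard ssl module; example.com and the ca_bundle parameter are illustrative. Against a direct connection it prints the real site's certificate subject; behind an intercepting firewall, verification fails unless the firewall's CA has been supplied as trusted.

```python
import socket
import ssl

def peer_cert_subject(host, port=443, ca_bundle=None):
    """Connect with TLS and return the subject of the certificate presented.

    With the default trust store (ca_bundle=None), a firewall presenting its
    own "wildcard" certificate fails verification with
    ssl.SSLCertVerificationError -- the programmatic version of the browser's
    nag screen. Pointing ca_bundle at the firewall's CA certificate makes the
    same connection verify cleanly, which is exactly what pushing that CA
    into every client's trust store achieves.
    """
    ctx = ssl.create_default_context(cafile=ca_bundle)
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.getpeercert()["subject"]

try:
    print(peer_cert_subject("example.com"))
except ssl.SSLCertVerificationError as err:
    print(f"Trust check failed -- possible interception: {err}")
```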
Performance
As several commenters on the thread have mentioned, applying these advanced protocol-level defences has an impact on performance.
- PAD features are relatively lightweight, and are often implemented in hardware, as they deal with a limited selection of parameters. A protocol-compliant service has only a narrow range of expected behaviour, so traffic can be matched as “good” or “bad” against that definition cheaply (see the first sketch after this list). I would expect basic PAD features to be enabled on almost all firewalls.
- Protocol pattern matching is a bit more difficult. Each packet that comes in has to be matched against a potentially huge list of “known bad” stuff before it can be classed as good. For common services such as HTTP, there are many thousands of signatures to process, even once the traffic has been successfully identified. Inescapably, this takes time and processor cycles. A few firewall vendors use specialised ASICs to perform the inspection (fast but costly), but most use conventional x86 processor designs and take the hit. This is why having an appropriate and updated suite of patterns attached to your security policy is critical. Telling a firewall (or other security device) to match against all known vulnerabilities is a fool's errand; it is far better to match against the most recent vulnerabilities and the ones that are specific to your environment. For example, inspecting traffic heading to an Apache server looking for IIS vulnerabilities wastes processing resources and increases latency (the second sketch below illustrates this).
- SSL inspection creates a significant workload on a firewall or secure web proxy; as a result, many hardware devices use dedicated SSL processors to perform the encrypt/decrypt functions. Any given device has a finite capacity, which makes it critical to decide up-front what, and how much, encrypted traffic you want to inspect (the third sketch below shows the arithmetic).
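To give a feel for how cheap PAD-style checks can be, here is a minimal sketch, assuming plain HTTP and using illustrative limits: the protocol grammar is small and fixed, so compliance is a handful of constant-time tests rather than a trawl through a signature database.

```python
# Toy PAD-style check: HTTP defines a small, fixed grammar, so compliance
# can be tested with a few cheap rules. Limits below are illustrative.
ALLOWED_METHODS = {"GET", "HEAD", "POST", "PUT", "DELETE", "OPTIONS", "TRACE", "PATCH", "CONNECT"}
ALLOWED_VERSIONS = {"HTTP/1.0", "HTTP/1.1"}
MAX_REQUEST_LINE = 8192  # oversized request lines are a classic overflow vector

def pad_check_request_line(line: bytes) -> bool:
    """Return True if the HTTP request line looks protocol-compliant."""
    if len(line) > MAX_REQUEST_LINE:
        return False  # anomalous length
    try:
        method, target, version = line.decode("ascii").split(" ")
    except (UnicodeDecodeError, ValueError):
        return False  # non-ASCII bytes or wrong field count: not valid HTTP
    return method in ALLOWED_METHODS and version.strip() in ALLOWED_VERSIONS

print(pad_check_request_line(b"GET /index.html HTTP/1.1\r\n"))   # True
print(pad_check_request_line(b"A" * 20000 + b" / HTTP/1.1"))     # False
```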
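By contrast, signature matching scales with the size of the pattern list, which is why scoping it matters. The second sketch uses two tiny made-up signature sets; real IPS rule sets run to the thousands, but the principle is the same: traffic bound for an Apache server never needs to be tested against IIS-only patterns.

```python
import re

# Made-up signature sets, keyed by the server type being protected.
SIGNATURES = {
    "apache": [re.compile(rb"\.\./\.\./"), re.compile(rb"mod_cgi.*%00")],
    "iis":    [re.compile(rb"cmd\.exe"), re.compile(rb"\.\./\.\./winnt")],
}

def match_payload(payload: bytes, server_type: str) -> bool:
    """Run only the signatures relevant to the protected server.

    Matching an Apache server's traffic against the IIS set (and vice
    versa) burns cycles on patterns that cannot apply -- the signature
    list attached to a policy should be scoped to the environment.
    """
    return any(sig.search(payload) for sig in SIGNATURES.get(server_type, []))

print(match_payload(b"GET /cgi-bin/../../etc/passwd HTTP/1.1", "apache"))  # True
print(match_payload(b"GET /scripts/cmd.exe HTTP/1.1", "apache"))           # False: IIS-only pattern skipped
```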
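Finally, the capacity question is worth a back-of-envelope calculation before the hardware arrives. All the numbers in this third sketch are illustrative assumptions, but the shape of the sum is real: rated decryption throughput divided by the encrypted share of peak traffic tells you what fraction of TLS flows you can actually afford to inspect.

```python
# Illustrative numbers -- substitute your own device rating and traffic profile.
DEVICE_DECRYPT_CAPACITY_MBPS = 2_000  # vendor-rated SSL inspection throughput
PEAK_TRAFFIC_MBPS = 8_000             # total peak traffic at the inspection point
ENCRYPTED_SHARE = 0.70                # fraction of that traffic which is SSL/TLS

def decryptable_fraction(capacity_mbps, peak_mbps, encrypted_share):
    """What fraction of the encrypted traffic can this device inspect?"""
    encrypted_mbps = peak_mbps * encrypted_share
    return min(1.0, capacity_mbps / encrypted_mbps)

frac = decryptable_fraction(DEVICE_DECRYPT_CAPACITY_MBPS, PEAK_TRAFFIC_MBPS, ENCRYPTED_SHARE)
print(f"{frac:.0%}")  # ~36%: policy must choose which third of TLS flows to decrypt
```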
A “good” firewall is one that is properly matched to your environment, and properly maintained. All the fancy application-identification and protocol-security techniques won’t help if the firewall barfs under production load the first time you turn them on, or if you fail to regularly review your policy.
In my next and final post, I shall touch on the performance management and troubleshooting aspects of DPI. Thanks for reading, and thank you even more to those who participated in the little survey at the end of my last post!