HTTP/2 Security Implications

July 28, 2016

HTTP/2 is a major revision of the HTTP protocol. The HTTP/2 RFC was published in May 2015. Currently most client-server communication is done via HTTP/1.1. In practice, HTTP is the de facto transport layer for almost all application-level network traffic - even for non-web applications. HTTP/2 specifies some fundamental changes to the protocol. Hence, once HTTP/2 is widely adopted, these changes may affect all software development, and also security, which is the focus of this article. This is not a low-level protocol analysis, but a high-level overview of practical security concerns related to HTTP/2.

Basics of HTTP/2

HTTP/2 is compatible with HTTP/1.1 in such a way that requests and responses can be losslessly transformed from HTTP/1.1 to HTTP/2 and vice versa. This is necessary, since it is not realistic to expect that the whole web would suddenly switch to HTTP/2. The most important differences and new features compared to HTTP/1.1 are listed below (see the HTTP/2 FAQ for details):

  • A binary framing layer instead of a textual protocol
  • A single TCP connection per origin, with requests and responses multiplexed as concurrent streams
  • Header compression (HPACK)
  • Server push, allowing the server to send resources to the client proactively
  • Stream prioritization and flow control

These changes improve performance a lot, especially on large web pages which initiate dozens of requests to several different origins. There is a really impressive demo online (requires a browser with HTTP/2 support). Ultimately these improvements result in a better user experience and more efficient resource usage for servers, clients and the network: no need to open multiple TCP connections, no head-of-line blocking and less redundancy in requests and responses.

Security implications and concerns

Theoretically, HTTP/2 does not affect the fundamentals of web applications. Basic application-level security features such as cookies, HTTP Basic Auth and the same-origin policy remain the same as before.

Instead, HTTP/2 introduces many new features and does affect how the traffic is transferred on the wire. These new features and other protocol changes may have consequences for the security of web servers and browsers, and for interoperability with network intermediaries such as proxies, firewalls, IDS systems and so on. These matters are discussed in the following sections.


Mandatory encryption

During the specification phase of HTTP/2, there was a lot of discussion about “mandatory encryption”. Should HTTP/2 be provided only over TLS or not?

Eventually the working group decided that, according to the RFC, HTTP/2 does not require encryption. However, the industry disagreed, and currently all major web browsers with support for HTTP/2 require TLS for HTTP/2 traffic. This is a good example of how IETF working groups and other standardization organizations do not actually make the decisions: ultimately, the decisions on how standards and RFCs are implemented in practice are made by major manufacturers and vendors.

I am not sure whether I support mandatory encryption of all HTTP traffic. While it seems obvious that HTTPS should be the default, and eventually the only, way to host web services, some special scenarios exist where TLS may be impractical or pointless, and generate unnecessary overhead:

  • Hosting publicly available files which are digitally signed and verified on the client. For example Windows Updates, which can be downloaded over a plain HTTP connection, but the signature of each package is verified before installation.
  • TLS does not provide any extra value if the connection is already encrypted at a lower network layer and HTTP/2 is used internally inside a trusted network - for example, access to a web application via an SSH tunnel or on localhost.
  • Embedded devices, such as “home routers” and other physical boxes that provide web-based user interfaces. Managing and renewing certificates for these devices becomes complicated, if not impossible.

The HTTP/2 specification defines a required HTTPS profile. This means that the client must support TLS 1.2 and modern, safe cipher suites. I think both of these restrictions are good: by now all web users should have a client with support for TLS 1.2, and compatibility is no longer an excuse to support poor encryption algorithms.
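As a rough sketch of what that profile means on the client side, Python's standard ssl module can be configured with TLS 1.2 as a floor and with "h2" advertised via ALPN, which is how browsers negotiate HTTP/2 in practice:

```python
import ssl

# Sketch: a client-side TLS context in the spirit of the HTTP/2
# HTTPS profile - TLS 1.2 as the minimum version, sane defaults
# for cipher suites, and "h2" offered via ALPN so the server
# can select HTTP/2 during the handshake.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2
ctx.set_alpn_protocols(["h2", "http/1.1"])  # prefer HTTP/2, fall back to 1.1
```

A socket wrapped with this context would expose the negotiated protocol via `selected_alpn_protocol()` after the handshake.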

When discussing TLS, people tend to focus on encryption (confidentiality). But in the case of public web services which share public information - weather, news and so on - integrity is actually what matters from the end user’s point of view. TLS is not only encryption - it also provides integrity, given that the remote service is not compromised. So theoretically, in some services it could be perfectly acceptable to use TLS with a NULL cipher (essentially without encryption) and certificate-based authentication to identify the server and verify the integrity of the data. A strict, predefined TLS profile prevents this kind of usage of TLS. Obviously, the IETF HTTP Working Group didn’t consider a few saved CPU cycles an adequate reason to allow NULL ciphers.

Interoperability with existing solutions

The binary protocol and compressed headers mean that HTTP/2 will probably break most WAFs (web application firewalls), IDP/IDS systems, reverse proxies and possibly some other network intermediaries. Of course, support for HTTP/2 will eventually be added to existing solutions, and we may see new WAF/IDS solutions targeted at HTTP/2. But during the transition period, network and server administrators should expect that existing security solutions and products may not be compatible with HTTP/2.


Complexity

HTTP/2 is a more complex protocol than HTTP/1.1. More complexity means more lines of code and more places to screw things up - whether in implementations, architecture or configuration. While there are no known fundamental security problems with the specification itself, support for HTTP/2 will definitely add complexity to browsers, servers and web services.

A very useful piece of information about the complexity of HTTP/2 (and how developers see it) is available here.

Attack surface

New features do not only mean more complexity, but also more attack surface.

HTTP/1.1 is a simple request-response protocol, where each request and response is sent in strict order within a TCP socket. In contrast, HTTP/2 connections are handled as bidirectional streams where multiple requests and responses can be sent simultaneously. This naturally opens up new ways to break things: messing with flow control or stream states.
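To give an idea of the framing layer underneath those streams, here is a minimal sketch (far from a full parser) that unpacks the fixed 9-octet HTTP/2 frame header defined in RFC 7540: a 24-bit payload length, an 8-bit type, an 8-bit flags field and a 31-bit stream identifier:

```python
import struct

def parse_frame_header(header: bytes):
    """Parse the fixed 9-octet HTTP/2 frame header (RFC 7540, section 4.1)."""
    length = int.from_bytes(header[0:3], "big")       # 24-bit payload length
    frame_type, flags = header[3], header[4]          # 8-bit type, 8-bit flags
    stream_id = struct.unpack(">I", header[5:9])[0] & 0x7FFFFFFF  # drop reserved bit
    return length, frame_type, flags, stream_id

# An empty SETTINGS frame (type 0x4) on stream 0:
print(parse_frame_header(b"\x00\x00\x00\x04\x00\x00\x00\x00\x00"))
# (0, 4, 0, 0)
```

Every frame on the connection carries such a header, and it is the stream identifier that lets multiple requests and responses interleave on one TCP connection.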

Also, there is the new feature “server push”. I admit that I am not really familiar with how it works in practice. But my immediate reaction is: could it be used for something adverse? Spreading malicious files, bypassing browser security features, breaking a web application’s business logic?

Header compression combined with TLS is another interesting point. CRIME is a well-known attack against HTTPS and many other protocols where compressed data is encrypted. HPACK is the compression format used with HTTP/2, which should be resistant to CRIME. I am not a cryptanalyst, and I have neither the time nor the competence to evaluate whether HPACK is “safe enough” - but that is another starting point for attackers.
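For a taste of what HPACK involves, the sketch below implements just one small piece of it - the prefixed integer encoding from RFC 7541, section 5.1. The real format layers static and dynamic header tables plus Huffman coding on top of this primitive:

```python
def hpack_encode_int(value: int, prefix_bits: int) -> bytes:
    """Encode an integer with an N-bit prefix (RFC 7541, section 5.1)."""
    limit = (1 << prefix_bits) - 1
    if value < limit:
        return bytes([value])            # fits entirely in the prefix
    out = [limit]                        # prefix saturated: all prefix bits set
    value -= limit
    while value >= 128:
        out.append((value % 128) + 128)  # continuation bytes with MSB set
        value //= 128
    out.append(value)                    # final byte, MSB clear
    return bytes(out)

# The RFC's own worked example: 1337 with a 5-bit prefix -> 1f 9a 0a
print(hpack_encode_int(1337, 5).hex())
# 1f9a0a
```

Even this tiny corner of the format shows there is real parsing machinery involved, which is exactly where implementation bugs tend to live.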

Potentially immature implementations

HTTP is an old protocol - as old as the web. The first HTTP/1.1 RFC dates from 1997. Thus, server and client implementations have had time to evolve. Many critical bugs and implementation failures have been fixed in existing software. Developers are familiar with the protocol and know how to deal with it. The protocol has been analyzed probably thousands of times.

HTTP/2 is a new protocol. It has not been as extensively analyzed; there may be pitfalls that developers are not aware of. I still consider HTTP/2 a somewhat “bleeding edge” technology. Even if the protocol itself is fine, can we be certain that the implementations are?

Debugging and auditing

HTTP/2 is a binary protocol with TLS practically required. Unlike HTTP/1.1, it cannot be used with telnet or netcat. The traffic cannot be sniffed with tcpdump (well, of course it can, but understanding and analyzing the traffic requires additional software). Luckily, Wireshark supports HTTP/2.
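The reason a telnet session gets you nowhere is visible at connection start: an HTTP/2 client must open with a fixed 24-octet connection preface followed by a SETTINGS frame (RFC 7540, section 3.5), after which everything is binary frames. A small sketch of those first bytes:

```python
# The 24-octet client connection preface (RFC 7540, section 3.5),
# deliberately malformed from an HTTP/1.x point of view so that
# old servers reject it instead of misinterpreting it.
PREFACE = b"PRI * HTTP/2.0\r\n\r\nSM\r\n\r\n"

# The preface must be followed by a SETTINGS frame; an empty one is
# just a 9-octet binary frame header (length 0, type 0x4, stream 0).
EMPTY_SETTINGS = b"\x00\x00\x00\x04\x00\x00\x00\x00\x00"

first_bytes = PREFACE + EMPTY_SETTINGS
print(first_bytes[:24])  # the only human-readable part of the connection
```

The preface looks vaguely like an HTTP request, but nothing after it is text - which is why generic line-oriented tools are of little use here.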

Also, security analysis tools such as PortSwigger’s Burp Suite or OWASP’s Zed Attack Proxy do not support HTTP/2 yet (although PortSwigger is planning to support HTTP/2 in Burp Suite). This makes security auditing and pentesting of HTTP/2 applications more difficult.

This may appear to be an out-of-scope or irrelevant point, but it is not. Of course, it is not the protocol’s fault that proper tools for analysis and debugging are missing or under development. And it is pretty certain that most debugging and web audit tools will eventually support HTTP/2. But meanwhile, maintaining the quality of HTTP/2 applications may be a bit more difficult.


Conclusions

Although HTTP/2 should not affect existing web applications and services, it is good to remember that we are dealing with a relatively new technology. From a business point of view, every new technology is a risk. At the very least, HTTP/2 provides a lot of attack surface and potential places to start fuzzing.

However, HTTP/2 is a really good improvement over HTTP/1.1 in terms of performance, resource usage and optimization. Also, it requires a secure TLS profile, which is good - screwing up the TLS configuration is not as critical any more.

After a brief review and some experimenting with HTTP/2, I expect that it will be widely adopted much faster than, for example, IPv6, which has been “right around the corner” for more than 10 years. Also, from a security point of view, I don’t see any blockers in HTTP/2. New technology is always new technology - some backlashes may occur in the initial phase, but they will be addressed. Overall, I am hopeful that HTTP/2 will eventually make the web a safer place.

Further resources

