
HTTP/2 tutorial for beginners

HTTP is the most widely used application layer protocol in the world; the entire Web runs on top of it. HTTP/1.1 was introduced in 1999 and is still the de facto standard for web communication. As the web and the ways people interact with it evolved (mobile devices, laptops, etc.), the protocol was extended with workarounds to provide new functionality. These workarounds are no longer sustainable, and the internet needed a new protocol version. That is why the IETF developed the HTTP/2 protocol, to address the challenges faced by the web community. You can find the latest draft of this protocol here.

Why we need HTTP/2

  • In the early days, bandwidth was the limiting factor, but today the average internet user in the US has around 11 Mbit/s of bandwidth.
  • Latency is the new bandwidth. End users do not worry about bandwidth as long as they get responsive applications.
HTTP 0.9 - The initial version of the protocol, introduced in 1991. It required a new TCP connection per request, and only the GET method was supported.

HTTP 1.0 - An improved version with the POST and HEAD methods for transferring richer content. New header fields were introduced to describe the request (e.g. Content-Length). It still used a connection per request.

HTTP 1.1 - New methods such as PUT, DELETE, and OPTIONS were added. Keep-alive (persistent) connections became the default, which improved latency.

Challenges with HTTP 1.1
When loading web pages with multiple resources, browsers send parallel requests over multiple connections to reduce latency. But this consumes more resources, since every new connection costs a TCP handshake and its own slow start. HTTP pipelining tried to work around this by sending multiple requests through the same TCP connection asynchronously, but the server must return the responses in request order, so a single slow resource blocks every response queued behind it (head-of-line blocking) and increases latency for the whole page.
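The head-of-line blocking described above can be demonstrated with a short sketch: two requests pipelined over one raw socket, with responses forced back in request order. The local test server, port, and paths here are illustrative, not part of any real deployment.

```python
import socket
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"  # keep-alive, so pipelining is possible

    def do_GET(self):
        body = ("hello from %s" % self.path).encode()
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the demo output quiet

server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

sock = socket.create_connection(("127.0.0.1", server.server_port))
sock.settimeout(5)
# Pipeline: send both requests back to back without waiting for a response.
sock.sendall(
    b"GET /a HTTP/1.1\r\nHost: localhost\r\n\r\n"
    b"GET /b HTTP/1.1\r\nHost: localhost\r\n\r\n"
)
# HTTP/1.1 forces responses back in request order, so if /a were slow,
# the response for /b would be stuck in the queue behind it.
data = b""
while data.count(b"hello from") < 2:
    data += sock.recv(4096)
sock.close()
server.shutdown()
```

Running the sketch shows both responses arriving on the one connection, always in request order; HTTP/2 multiplexing removes exactly this ordering constraint at the HTTP layer.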

How HTTP/2 addresses the challenges of HTTP/1.1

The major design goals of the HTTP/2 protocol were to address the issues present in the HTTP/1.1 protocol.
  • Reduce the latency
  • Reduce total number of open sockets (TCP connections)
  • Maintain high level compatibility with HTTP/1.1
In addition to addressing these challenges, HTTP/2 introduces several new features that were not in HTTP/1.1 and that improve the performance of the web.

What is new in HTTP/2

  • Multiplexing - Multiple requests can be sent over a single TCP connection asynchronously.
  • Server Push - The server can proactively send resources to the client's cache for future use.
  • Header compression - Clients do not need to resend the same headers with every request; only new or changed headers are transmitted.
  • Request prioritization - Some requests can be given more memory, CPU, and bandwidth within the same TCP connection.
  • Binary protocol - Data is transmitted as binary frames, not in text form as in HTTP/1.1.
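The binary framing can be made concrete. Every HTTP/2 frame starts with a 9-octet header: a 24-bit length, an 8-bit type, 8-bit flags, and a 31-bit stream identifier (the top bit is reserved). A minimal sketch of packing and parsing that header:

```python
import struct

def pack_frame_header(length, frame_type, flags, stream_id):
    # The 24-bit length has no struct format code, so split it into a
    # high byte and a 16-bit low part; the reserved top bit of the
    # stream identifier must be zero.
    return struct.pack(">BHBBI", length >> 16, length & 0xFFFF,
                       frame_type, flags, stream_id & 0x7FFFFFFF)

def parse_frame_header(data):
    hi, lo, frame_type, flags, stream_id = struct.unpack(">BHBBI", data[:9])
    return ((hi << 16) | lo, frame_type, flags, stream_id & 0x7FFFFFFF)

# A 16-byte DATA frame (type 0x0) with the END_STREAM flag (0x1) on stream 3:
header = pack_frame_header(16, 0x0, 0x1, 3)
parsed = parse_frame_header(header)
```

Because every frame carries its stream identifier, frames from different requests can be freely interleaved on one connection and reassembled by stream, which is what makes multiplexing possible.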

How HTTP/2 works

  1. The client sends an upgrade request over HTTP/1.1, and if the server supports HTTP/2, it responds with a 101 (Switching Protocols) response. The client can then send HTTP/2 requests over the same connection.
  2. Every request and response is given a unique ID (stream ID) and divided into frames. The stream ID is used to identify the frames belonging to a given request/response. A single TCP connection can be used to connect to a single origin only.
  3. A stream can carry a priority value, and according to that, the server decides how much memory, CPU, and bandwidth to allocate to the request.
  4. The SETTINGS frame is used to apply HTTP-level flow control (number of parallel requests per connection, data transmission rate, number of bytes per stream).
  5. Header compression ensures that headers are not transmitted redundantly on every request. Both client and server maintain a header table containing the last request and response headers and their values. When sending a new request, only the new or changed headers are transmitted.
  6. Server push enables developers to deliver contained or linked resources efficiently: the server proactively sends resources to the client's cache for future use. This is somewhat different from the push concept of the WebSocket protocol, where the server can send events or data to clients at any time, even without a request from the client; HTTP/2 server push still complies with the request/response pattern.
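The header-table idea in step 5 can be sketched as a toy differential encoder. This is a deliberate simplification for illustration only: real HTTP/2 uses HPACK (RFC 7541), with static and dynamic tables, indexed representations, and Huffman coding, none of which appear here.

```python
class HeaderDiffEncoder:
    """Toy 'send only what changed' header encoder (NOT real HPACK)."""

    def __init__(self):
        self.table = {}  # headers the peer is assumed to already know

    def encode(self, headers):
        # Transmit only headers that are new or whose value changed,
        # then remember the full set for the next request.
        delta = {k: v for k, v in headers.items() if self.table.get(k) != v}
        self.table.update(headers)
        return delta

enc = HeaderDiffEncoder()
first = enc.encode({":method": "GET", ":path": "/", "user-agent": "demo"})
second = enc.encode({":method": "GET", ":path": "/style.css", "user-agent": "demo"})
# Only the changed :path header needs to be transmitted the second time.
```

A real decoder keeps a synchronized copy of the table on its side and also handles header eviction and removal, which this sketch ignores.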
I would like to thank the authors of the blog posts I referred to when writing this post.


Comments

  1. Great summary! Thanks. Way back in 2011 google released SPDY and HTTP2 took some of the good stuff from SPDY. You can see some of the benefits HTTP2 will bring by watching this: https://www.youtube.com/watch?v=vEYKRhETy4A :)

