HTTP/2 – the future of the web is almost here

HTTP is not just some random prefix that you enter before the domain name in your browser’s address bar; it is one of the most important building blocks of the internet. The Hypertext Transfer Protocol (HTTP) is the protocol of the World Wide Web (WWW), designed to facilitate communication between the client (which in most cases is just your browser) and the server. If the web is the warehouse where the information is stored, then HTTP is the protocol that provides the door, so to speak. And now, finally, a brand new version of the HTTP protocol is on the way.

Know your history

If you want to know why HTTP/2 is such a big deal, you’ll first need to understand something about its predecessor – the original HTTP protocol. Version 1.0 of this protocol was introduced way back in 1996, when most of us were still struggling through school! This isn’t actually what you’re currently using in your browser though; when you type a URL into your address bar you are using the slightly more modern HTTP/1.1 protocol, which was announced in 1999. That’s still more than 15 years ago, and as I’m sure you’ve noticed the web has evolved quite a bit since the days of flame GIFs and GeoCities. Have a look at how google.com looked back before Y2K destroyed civilization as we know it:

Google browser in 1999

Yep, the WWW has evolved so much that even Google is no longer in beta :-) but the protocol that we use all the time to get around is still from the last millennium. If you’re a sucker for nostalgia you can use the Internet Archive Wayback Machine to check out how websites have changed over the years; we certainly crack a smile when we see how Gavick.com looked back in 2007!

Why make the change?

So we’ve managed to get by for the last twenty years with plain old HTTP, so why do we need to change now? The biggest limitation of HTTP/1.1 is the way the browser sends requests to the server. Every single resource a website uses, like individual images, JavaScript files or CSS stylesheets, can only be requested one at a time per connection, so your browser loses more time than you realize just sitting around waiting on each request. Run a test or two through the Pingdom Website Speed Test, where you can preview the server response time for each resource, and you’ll notice that there’s a whole lot of empty time that could be utilized with a more robust protocol.

But surely if you can have one request per connection, then multiple connections will take care of the issue, right? Well, here’s where the limitations of the HTTP/1.1 protocol rear their head once again; the official documentation for the HTTP/1.1 protocol only allows for two open connections at a time, so you can grab only two separate resources at any given moment. Two may well be better than one, but when the average website makes more than 30 requests it’s clear there’s going to be a bit of a bottleneck. Not that attempts haven’t been made to get around these limitations; browsers try to sidestep these limits (you can read more on this in this blog post), and most modern browsers allow you to open a maximum of 6 parallel connections to the same server. However, this solution is inefficient compared to native protocol support for multiple requests, so HTTP/2 is the logical progression required for the web to continue to evolve.
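To get a feel for the arithmetic here, the queuing effect can be sketched in a few lines of plain Python (a toy simulation with made-up timings, no real networking involved): with one request in flight per connection, the waits simply add up, while extra connections let them overlap.

```python
import threading
import time

# Hypothetical per-resource fetch times (seconds); these stand in for
# network latency and are not real measurements.
RESOURCES = {"logo.png": 0.05, "style.css": 0.05, "app.js": 0.05, "hero.jpg": 0.05}

def fetch(name, delay):
    time.sleep(delay)  # simulate waiting on the server's response

def sequential():
    """One request at a time: total time is the sum of all waits."""
    start = time.monotonic()
    for name, delay in RESOURCES.items():
        fetch(name, delay)
    return time.monotonic() - start

def parallel(max_connections=2):
    """At most `max_connections` requests in flight, like HTTP/1.1's
    two-connection limit (modern browsers stretch this to about 6)."""
    start = time.monotonic()
    sem = threading.Semaphore(max_connections)

    def worker(name, delay):
        with sem:
            fetch(name, delay)

    threads = [threading.Thread(target=worker, args=item) for item in RESOURCES.items()]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return time.monotonic() - start

if __name__ == "__main__":
    print(f"one at a time:  {sequential():.2f}s")
    print(f"2 connections:  {parallel(2):.2f}s")
```

Bumping `max_connections` to 6 mirrors what modern browsers do, but the real fix is letting one connection carry many requests at once – which is exactly what HTTP/2 brings.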

What to expect from HTTP/2

HTTP/2 is, as the name suggests, an upgrade for the HTTP protocol with the main goal of creating a more efficient transfer protocol to speed up the web-content delivery process. The basic idea is that rather than opening multiple connections that each provide a single, restricted line of communication, HTTP/2 will create just one connection to the server and then multiplex many requests over that single open secure connection. This means that the browser no longer needs to wait until the existing requests finish before starting new ones, and can instead stream multiple responses at the same time. As a result, if the server gets stuck on one request you won’t have to wait for it to resolve before the rest of the page can load.
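A rough way to picture multiplexing (a hand-rolled Python sketch, not the actual HTTP/2 framing or frame format) is to split each response into small frames tagged with a stream ID and interleave them over one connection:

```python
from itertools import zip_longest

def frames(stream_id, payload, chunk=4):
    # Split one response body into small frames tagged with its stream ID,
    # loosely mirroring HTTP/2's DATA frames.
    for i in range(0, len(payload), chunk):
        yield (stream_id, payload[i:i + chunk])

def multiplex(*streams):
    # Round-robin the frames of all streams over one "connection",
    # so no single response monopolizes the wire.
    for bundle in zip_longest(*streams):
        for frame in bundle:
            if frame is not None:
                yield frame

# Two responses share a single connection; odd stream IDs follow the
# HTTP/2 convention for client-initiated streams.
wire = list(multiplex(
    frames(1, "<html>...</html>"),
    frames(3, "body{margin:0}"),
))
```

Because every frame carries its stream ID, the browser can reassemble each response independently; a slow stream simply contributes fewer frames per round instead of blocking the whole connection.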

The second big change in HTTP/2 is a feature called "Server Push", which is quite similar to the push methods already used in email clients. It allows the server to push data to your client (browser) before the client even makes the request, which should result in faster page loads than ever before.
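By way of illustration (server support and configuration vary, and the exact mechanism depends on your web server), a common pattern is a preload hint attached to the HTML response, which push-capable HTTP/2 servers can translate into pushed resources:

```
Link: </css/styles.css>; rel=preload; as=style
Link: </js/app.js>; rel=preload; as=script
```

The paths here are hypothetical; the point is that the server announces the stylesheet and script alongside the HTML, instead of waiting for the browser to discover and request them.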

The high performance of the HTTP/2 protocol makes it much easier to work with an encrypted connection. Encryption is not actually required by the official documentation, but both the Firefox and Chrome browsers currently support HTTP/2 only when using TLS (Transport Layer Security; a successor to SSL), so it should be expected that overall website security will be better off with this new protocol.

Can I use HTTP/2 right now?

The leading browser in HTTP/2 implementation is currently, perhaps surprisingly, Internet Explorer, which has supported this protocol since the Windows 10 Technical Preview. Of course, Chrome previously supported the SPDY protocol, which was the starting point for HTTP/2, and it has been announced that full support will be included in the Chrome Canary build announced on 17th March 2015. If you already have a browser with HTTP/2 support then you can explore a few pages that already use this protocol, like HTTP2Rulez!, and compare page-load times – you will really see the difference! It also looks like many of the optimizations that are now standard front-end practice will eventually need to be reverted, but there will certainly be something new to improve even with a shiny new HTTP/2 protocol.

Front-end optimisations for HTTP/1.1 limitations

For now though, we’re stuck with HTTP/1.1 until HTTP/2 becomes more widespread and supported. In the meantime, let’s look at some simple optimizations you can use to get your website running a bit faster. You’ll notice that most speed-improving techniques focus on working around the main HTTP/1.1 drawback: the limited number of parallel requests.

Image sprites replace multiple images with a single file that contains all of a page’s images in one package, avoiding multiple requests. CSS combining is a technique where all cascading style sheets are merged into one big CSS file to reduce the number of requests to the server. You can achieve exactly the same result by combining JavaScript files, and of course minification techniques let you decrease the size of each request. Cache-Control headers, which you may add to your .htaccess file, define how your browser should cache particular file types, and as a result can effectively decrease the number of requests when a page is reloaded. With Content Delivery Networks, otherwise known as CDNs, you put files on different servers to spread the requests between multiple connections and avoid the traditional limitations. Just remember: with the advent of HTTP/2 most of these techniques will no longer be needed, so you can look forward to a bright, high-speed internet future!
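For example, a minimal Cache-Control rule in .htaccess might look like this (a sketch assuming Apache with mod_headers enabled; adjust the file types and lifetime to taste):

```apache
# Tell browsers to cache static assets for 30 days (2592000 seconds),
# so repeat visits skip those requests entirely.
<IfModule mod_headers.c>
  <FilesMatch "\.(png|jpe?g|gif|css|js)$">
    Header set Cache-Control "max-age=2592000, public"
  </FilesMatch>
</IfModule>
```

Wrapping the rule in `<IfModule>` keeps the site from breaking on servers where mod_headers happens to be disabled.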

This article was first published March 27th, 2015