What would it take to adopt HTTP/2 and HTTP/3?
Differences, benefits, application refactoring, and infrastructure changes required to upgrade and improve app performance.
All of us in tech understand that there are deeper margins to be accessed on the performance side of things. No matter how well built our product is, we always strive for more speed, efficiency, and reliability. Lower system utilization means smaller infrastructure bills, which translates to more profit.
We develop applications to reach as many users as possible, and the most fundamental part of this is communication over the network. Over time, apps have become highly interconnected, leveraging various networking protocols, architecture patterns, and cutting-edge technologies to improve user experience and deliver great results.
Applications typically talk to the “backend server” via APIs. These APIs are built on the backend using various programming languages, which implement HTTP support in their standard libraries or third-party packages. As developers, we don’t always think about this in a deeper sense. For us, a working API means using the http package. At times, we don’t even bother about the HTTP version: 1.1, 2, or 3.
I think it is important to understand the Hypertext Transfer Protocol (HTTP) a bit better and the improvements that it has undergone in the last decade.
HTTP/1.1 - introduced in 1997, laid the foundation as a text-based protocol relying on TCP for communication. Requests on a connection are processed sequentially, often leading to delays when multiple resources are needed - a problem worsened by the lack of header compression and the limited ability to handle parallel requests (browsers work around this by opening multiple connections per host). Despite these constraints, its simplicity and widespread adoption have kept it relevant.
HTTP/2, finalized in 2015, marked a significant improvement: it adopted a binary framing format and introduced multiplexing. HTTP/2 allows multiple requests and responses to travel over a single connection simultaneously, reducing latency. It also compresses headers (HPACK) to cut bandwidth use and offers server push, enabling proactive resource delivery. However, its reliance on TCP can still cause delays when packets are lost, since every stream on the connection waits for the retransmission - the head-of-line blocking problem that led to HTTP/3.
HTTP/3, standardized in 2022, builds on HTTP/2’s strengths but uses the QUIC (originally “Quick UDP Internet Connections”) transport protocol, which runs over UDP. This change removes TCP’s head-of-line blocking by allowing independent streams to proceed without delaying each other. QUIC also integrates, and in fact mandates, TLS 1.3 encryption. The result is faster, secure-by-default connections with zero round-trip (0-RTT) session resumption, making HTTP/3 ideal for modern, high-latency environments like mobile networks. While retaining HTTP/2’s multiplexing and header compression, it adds resilience to network changes, such as switching from Wi-Fi to cellular: a connection survives because it is identified by a connection ID rather than by the IP/port pair.
This information leads us to questions like: why does it matter? What does it take for our applications to make use of HTTP/2 or HTTP/3 - or are they already using them? Does this involve code refactoring, or is turning on infrastructure support enough? Does it increase or decrease cost? Is the current infrastructure supported?
We can approach these questions in two parts - application and infrastructure.
Application - this represents the programming effort and the changes we may need to make in code.
Optimization for multiplexing - When working with HTTP/1.1, we often bundle static assets (concatenated scripts, sprite sheets) to cater to the sequential nature of communication. With HTTP/2 and HTTP/3, we can leverage multiplexing and parallel delivery, so many small resources can be served efficiently and cached independently.
Server push - With HTTP/2, you can take advantage of server push to send critical resources to the client even before they are requested, e.g., pushing a stylesheet along with the HTML that references it. Use it judiciously: major browsers have since dropped support for HTTP/2 push, so treat it as an optional optimization with a normal-response fallback.
Fallback mechanisms - If you decide to build support for HTTP/3, consider building fallback mechanisms for HTTP/2 and HTTP/1.1, mainly because HTTP/3 is still quite new and end-user devices or intermediaries might lack support (or block UDP entirely).
Minimize header overhead - HTTP/2 and HTTP/3 improve performance by compressing headers (HPACK and QPACK respectively). This opens up two options: minimize headers to improve performance further (worth doing anyway), or add new headers that drastically improve UX or features without worrying much about the overhead.
Handle blocking operations - HTTP/3 leverages UDP, which copes well with unreliable networks. But because streams complete independently, code that implicitly depends on requests finishing in the order they were sent can misbehave; make such dependencies explicit rather than relying on delivery order.
Infrastructure - servers, functions, clusters, gateways, etc.
Protocol support on servers - Some cloud providers need you to explicitly enable support for the HTTP/2 and HTTP/3 protocols. If you are using a reverse proxy like NGINX, make sure to configure it accordingly.
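For illustration, a minimal NGINX sketch, assuming a build with QUIC/HTTP/3 support (roughly 1.25+) and with hypothetical certificate paths; check your version’s documentation before copying directives:

```nginx
server {
    listen 443 ssl;             # TCP listener for HTTP/1.1 and HTTP/2
    listen 443 quic reuseport;  # UDP listener for HTTP/3
    http2 on;

    ssl_certificate     /path/to/cert.pem;   # hypothetical paths
    ssl_certificate_key /path/to/key.pem;
    ssl_protocols       TLSv1.2 TLSv1.3;     # QUIC uses TLS 1.3 only

    # Tell clients that HTTP/3 is available on UDP port 443.
    add_header Alt-Svc 'h3=":443"; ma=86400';
}
```

Note that both listeners coexist: clients connect over TCP first, see the Alt-Svc header, and switch to QUIC on a later request if they can.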
TLS configuration - HTTP/2 effectively requires TLS 1.2 or newer (browsers only speak h2 over TLS), and HTTP/3 mandates TLS 1.3. You may need to deploy strong cipher suites, configure load balancers for TLS termination, and build custom mechanisms where required.
Network config - Use monitoring tools to track network throughput and latency. You might need to change firewall rules to allow UDP traffic on port 443 for QUIC, and open up the relevant secure ports.
CDN with HTTP/3 support - Offloading protocol complexity (handshakes, protocol negotiation, etc.) to a CDN reduces the processing workload on the application. A CDN can also terminate HTTP/3 at the edge while talking to your origin over whichever protocol it supports.
Keeping an eye on the latest updates on the development and adoption of these improvements will always be helpful.
Blog updates
This week I published 2 more articles around gRPC.
gRPC Communication Patterns
One of the advantages gRPC offers over REST-based services is streaming, bi-directional communication. Traditional implementations that depend on REST APIs often add WebSockets to enable real bi-directional streaming of data packets. I say real because it is still possible to simulate streaming behavior using REST alone, which of course is not performant and makes little sense.
gRPC in Kubernetes
I have been experimenting with gRPC for some time now. I wrote some articles to cover the basics like What is gRPC?, SSL/TLS Auth in gRPC, and communication patterns used in gRPC. In those articles I went through some of the advantages of gRPC over traditional REST APIs for inter-service communication, especially in a distributed architecture, which led me to wonder how gRPC works in a Kubernetes environment. The crux is: gRPC offers great performance using Protobuf and natively supports uni- and bi-directional streaming.