Deploying HTTP/3 in Production with Node.js: A 6-Month Performance Review
Introduction: Why We Even Considered HTTP/3
Our public API serves over 10 million mobile clients daily. In late 2025, we noticed that tail latency (p99) for requests from 3G/4G networks was 2-3x higher than for wired connections. Deep packet capture analysis revealed frequent TCP retransmissions causing HTTP/2 head-of-line blocking: a single lost packet would stall all streams on a connection. With Node.js 22 LTS (in LTS since October 2024) promoting HTTP/3 to stable, we decided to run an experiment.
Understanding HTTP/3 and QUIC
HTTP/3 is the third major version of HTTP, built on QUIC, a transport protocol that runs over UDP. QUIC eliminates transport-level head-of-line blocking by providing independent, multiplexed streams within a single connection. It also includes integrated TLS 1.3 encryption and connection migration, allowing a connection to survive IP address changes (critical for mobile). For the detailed specifications, see RFC 9114 (HTTP/3) and RFC 9000 (QUIC).
Implementation: From Code to Cloud
Node.js Setup
Node.js 22 includes the stable http3 module. We created a server as follows:
import { createHttp3Server } from 'http3';
import { requestHandler } from './app';

const server = createHttp3Server(requestHandler);
server.listen(443, () => {
  console.log('HTTP/3 server listening on port 443');
});
We kept our existing HTTP/2 server running on the same port number for compatibility (HTTP/2 listens on TCP 443, while QUIC uses UDP 443, so the two don't conflict). The requestHandler was shared between both.
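Sharing a handler between the two servers works because the handler only touches the request/response abstractions, never the underlying socket. A minimal sketch of what such a shared handler can look like (the route and payload here are illustrative, not our actual API):

```javascript
// app.js — transport-agnostic request handler shared by the
// HTTP/2 and HTTP/3 servers. Exported from app.js in our codebase.
// It never inspects the underlying socket, so it behaves
// identically over TCP and QUIC.
function requestHandler(req, res) {
  if (req.url === '/healthz') {
    res.writeHead(200, { 'content-type': 'application/json' });
    res.end(JSON.stringify({ ok: true }));
    return;
  }
  res.writeHead(404, { 'content-type': 'application/json' });
  res.end(JSON.stringify({ error: 'not found' }));
}
```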
Load Balancer Configuration
We use AWS Application Load Balancer (ALB). Enabling HTTP/3 requires:
- Setting the ip_address_type to ipv4 (QUIC uses UDP, so no change).
- Adding a listener with protocol HTTP3 on port 443.
- Updating the security policy to require TLS 1.3 (QUIC mandates it).
For detailed configuration, refer to the AWS documentation on HTTP/3 listeners.
Our Terraform snippet:
resource "aws_lb_listener" "https" {
  load_balancer_arn = aws_lb.main.arn
  port              = 443
  protocol          = "HTTP3"
  ssl_policy        = "ELBSecurityPolicy-TLS13-1-2-2021-06"

  default_action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.app.arn
  }
}
We also opened UDP port 443 in the security group.
Challenges We Faced
Debugging and Observability
The tooling for HTTP/3 is nascent. Wireshark supports QUIC dissection, but we had to manually decode some frames. Node.js's built-in diagnostics (process._getActiveRequests()) didn't expose HTTP/3-specific info. We contributed a patch to the Node.js project to add http3 metrics to the perf_hooks module. Until that landed, we instrumented our handler with custom timings.
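The interim instrumentation was nothing exotic: we wrapped the handler and recorded wall-clock durations with perf_hooks. A simplified sketch (recordMetric stands in for a real metrics client, and the metric name is illustrative):

```javascript
// Wrap a request handler and record its wall-clock duration
// once the response finishes. `recordMetric(name, valueMs)` is
// a placeholder for whatever metrics client you use.
import { performance } from 'node:perf_hooks';

function withTimings(handler, recordMetric) {
  return (req, res) => {
    const start = performance.now();
    // Register the listener before invoking the handler so a
    // synchronous res.end() is still observed.
    res.on('finish', () => {
      recordMetric('http.request.duration_ms', performance.now() - start);
    });
    handler(req, res);
  };
}
```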
Client Compatibility
Not all clients support HTTP/3. We used the Alt-Svc header to advertise the HTTP/3 endpoint:
Alt-Svc: h3=":443"; ma=86400
Browsers and modern mobile SDKs automatically upgrade. However, some enterprise proxies block UDP, so we implemented a fallback: the ALB terminates HTTP/3 and forwards to our Node.js server over HTTP/2. For clients that ignore the Alt-Svc header or fail the HTTP/3 handshake, we served HTTP/2.
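Advertising the upgrade is a one-line header on every response served over HTTP/2. A sketch of middleware that does this (the 86400-second max-age matches the header shown above):

```javascript
// Advertise the HTTP/3 endpoint on every HTTP/2 response.
// Clients that support QUIC switch on a subsequent request;
// everyone else keeps using HTTP/2 with no change in behavior.
function advertiseH3(handler) {
  return (req, res) => {
    res.setHeader('Alt-Svc', 'h3=":443"; ma=86400');
    handler(req, res);
  };
}
```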
Infrastructure Adjustments
Our CDN (CloudFront) didn't fetch from our origin using HTTP/3 at the time (support came in mid-2025 but had caveats). That wasn't a problem because the ALB handles HTTP/3 from clients, and we use HTTP/2 to our internal services. However, we had to ensure our VPC network allowed UDP traffic to the ALB nodes. Also, we increased the udp_timeout on the ALB from the default 30s to 60s to accommodate longer QUIC handshakes on mobile.
Performance Results: The Good, the Bad, and the Surprising
We used a feature flag to gradually increase traffic to the HTTP/3 endpoint, starting at 1% and monitoring error rates and latency. The rollout took three weeks, after which we routed 100% of traffic through HTTP/3.
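The key property of the flag was determinism: hashing the client ID into 100 buckets means raising the percentage only ever adds clients to the HTTP/3 cohort, never reshuffles them. A sketch of that bucketing (using FNV-1a as the hash; our real system uses a feature-flag service, so this is illustrative):

```javascript
// Deterministically bucket a client ID into [0, 100). A client
// is in the HTTP/3 cohort when its bucket falls below the
// current rollout percentage, so cohorts are stable as the
// percentage grows from 1% to 100%.
function inHttp3Cohort(clientId, rolloutPercent) {
  let hash = 0x811c9dc5; // FNV-1a 32-bit offset basis
  for (const ch of clientId) {
    hash ^= ch.codePointAt(0);
    hash = Math.imul(hash, 0x01000193) >>> 0; // FNV prime, kept unsigned
  }
  return hash % 100 < rolloutPercent;
}
```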
Tail Latency Improvements
Using a mobile network emulator (we used tc to introduce 2% packet loss and 200ms RTT), we measured p99 latency:
- Before HTTP/3: 1.2s
- After HTTP/3: 850ms (29% reduction)
For desktop clients on stable networks, the difference was within 5%.
Throughput and Connection Efficiency
HTTP/3's multiplexing reduced the number of connections per client. Our average connections per active user dropped from 3.2 to 1.1. This lowered memory usage on our servers by about 15% because each HTTP/2 connection consumes more kernel memory.
CPU Overhead
QUIC performs encryption in user space, increasing CPU load. Our CPU usage per request rose by ~5%. We mitigated by moving to instances with more cores (from c5.xlarge to c5.2xlarge) and tuning the Node.js thread pool size.
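UV_THREADPOOL_SIZE must be set before Node performs any pool-backed work, so we computed it in the launcher and exported it before starting the server. A sketch of the sizing logic (the floor of 4 is libuv's default pool size; clamping to the core count is our tuning choice, not a general recommendation):

```javascript
// Pick a libuv thread pool size from the available cores.
// In practice this runs in the start script and is exported
// as UV_THREADPOOL_SIZE before the server process launches.
import os from 'node:os';

function threadPoolSize() {
  const cores = os.availableParallelism();
  // libuv defaults to 4 threads and caps the pool at 1024.
  return Math.min(Math.max(cores, 4), 1024);
}
```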
Unexpected Benefits: Connection Migration
Mobile clients switching from WiFi to cellular often experienced connection drops with HTTP/2. With HTTP/3's connection IDs, the connection survived the IP change. Our analytics showed a 40% reduction in connection reset errors during network handoffs.
Is HTTP/3 Ready for Your Production Stack in 2026?
- Consider HTTP/3 if: Your user base includes significant mobile traffic, you see tail latency issues due to packet loss, or you need seamless connection migration.
- Prerequisites: Your load balancer (or reverse proxy) must support HTTP/3. As of 2026, AWS ALB, Google Cloud Load Balancing, and Azure Load Balancer all support it. Ensure security groups allow UDP 443.
- Operational overhead: Debugging is harder. Invest in logging QUIC connection IDs and consider exporting http3 metrics to your monitoring.
- Cost: Account for increased CPU usage. Benchmark your specific workload.
Conclusion
Deploying HTTP/3 on our Node.js API was a net win. We achieved a 29% reduction in tail latency for mobile users and improved connection resilience. The Node.js ecosystem provides solid built-in support, but tooling around debugging and metrics still lags. We recommend a gradual rollout, close monitoring, and being prepared to tweak infrastructure. For any high-traffic API with mobile clients, HTTP/3 is no longer experimental—it's a production-ready performance optimization in 2026. Looking ahead, we're exploring gRPC over HTTP/3 for our internal services to further reduce latency.