How ngrok’s GSLB elevates Linode-hosted SaaS performance and resilience

July 17, 2024 | 10 min read
Joel Hans

Not long ago, in the hallowed “marketplace of ideas” for the tech community that is Hacker News, the perennial debate about which cloud provider to pick for SaaS hosting bloomed bountifully once again.

Some folks will always advocate for hyperscalers, which offer products for every possible infrastructure you could want to deploy or feature you tell yourself is on the critical path—albeit with extra complexity and cost. A handful will laugh at the idea that your company needs anything more sophisticated than the cheapest VPS you can find and a few Docker containers. A decent contingent will, in all seriousness, tell you to dust off an unused laptop, an Ethernet cable, and an Ubuntu <code>.iso</code>.

Defining functional requirements might be straightforward, but with so many choices available, bringing that plan to reality often isn’t.

Even as you move up in operational maturity, all these solutions leave you either locked in or holding the bag. The hyperscaler will become phenomenally expensive or difficult to operate, especially if you’re not an operations or DevOps person. Your fleet of cheap VPSs will inevitably slow down as their shared resources get overcrowded, and wiring up any sophisticated network solution falls entirely on you. The homelab solution doesn’t get you anywhere close.

What’s the shortest path toward having your SaaS deployed globally, with manageable costs and without all the operational complexity? How about deploying on a cost-effective and developer-friendly provider like Linode, then easily layering in the performance and availability benefits of global server load balancing (GSLB) with ngrok?

How a GSLB helps you balance beyond Linode (or any other cloud provider)

If you’re <code>/u/edtech_dev</code>, you probably can’t help but notice the number of positive comments about Linode. Their compute VMs are reasonably priced compared to AWS or Azure, they have plenty of global regions to pick from, and they’ve been around for what feels like forever. They’ve built an undeniable degree of trust with developers, and they play a large part in the recent popularity surge for alternative clouds—especially when GPUs are involved.

Fast-forward six months or a year, and you’re ready to scale out well beyond your small but mighty single VM into the next level of operational maturity.

You have customers around the globe who demand speed, and you need guarantees that a single misconfiguration or outage won’t take your entire SaaS offline at two in the morning. Maybe you’re even getting ready to add some self-hosted AI magic into your SaaS, which requires far more expensive GPU compute, or want to go straight into a multi-cloud deployment for failover or disaster recovery.

You’ve now reached a point of operational maturity where your non-functional requirements must change fast. If you don’t meet them, it doesn’t matter how brilliant your application might be—the end-user experience will suffer, and you will shed customers along the way.

Performance

With a global user base and a single VM, performance depends heavily on location. If you deployed your VM in Toronto’s data center, you’ll get low latency for North America, middling latency for Europe, and pretty dodgy latency anywhere else. Each hop between request and response, from DNS resolution to HTTPS handshakes, adds up fast.

You can enable Linode’s NodeBalancer product with relative ease, but it only balances load between multiple instances within the same data center. That might solve some performance degradation through horizontal scale, but it still does very little for users far from the data center—data can only travel so fast.

The complexity of your networking and security policies also negatively affects performance. HTTPS is inevitable, but you might also require features like OAuth, circuit breaking, or traffic manipulation. When you execute these policies and actions on your upstream server directly, they sap compute away from your business logic, adding even more delay for your end users with every click, input, and request for data.

A GSLB helps you meet this non-functional requirement by:

  • Automatically routing traffic from the upstream server and end user to the Point of Presence (PoP) with the lowest latency.
  • Offloading compute-heavy functions, like authentication, to PoPs closer to end users for better latency and performance on your upstream server.
  • Leveraging high-capacity connections on the internet backbone between PoPs.
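
In practice, enabling that routing is nearly a one-liner. As a minimal sketch, assuming your SaaS listens on port 8080 and you’ve reserved a domain with ngrok (the one below is a placeholder), starting the agent on your Linode VM is all it takes:

```bash
# Run on the Linode VM, in front of the service listening on port 8080.
# The agent automatically connects to the lowest-latency ngrok PoP.
ngrok http 8080 --domain app.example.com
```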

Resilience

A single VM deployment is a single point of failure, but simply deploying in multiple data centers around the globe doesn’t mean you’re protected—at least not in a way that’s automatic or low-maintenance. True high availability requires no intervention, and you should be aiming for a system that “self-heals” effectively even in catastrophic conditions. Think major outage at the internet backbone level or an entire data center going offline indefinitely due to a flood.

A GSLB helps you meet this non-functional requirement by:

  • Supporting automatic geo-failover in case any PoP, or even your upstream server, becomes unavailable.
  • Allowing you to deploy the same application on multiple architectures, such as VMs in data centers with fewer customers and a large Kubernetes cluster that handles the bulk of traffic in North America.
  • Giving you the flexibility to go multi-cloud, with backup deployments in AWS or DigitalOcean and consistent load balancing across all of them.
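
What does that look like in practice? As a hedged sketch, assuming you’ve created an ngrok Edge in the dashboard (the <code>edghts_example</code> ID below is a placeholder), each deployment attaches to the same Edge with a labeled tunnel, and ngrok balances and fails over between them:

```bash
# Primary: Linode VM in Toronto
ngrok tunnel --label edge=edghts_example 8080

# Backup: a VM in Frankfurt, or another cloud entirely, runs the exact same
# command; if either upstream disappears, traffic shifts to the other.
ngrok tunnel --label edge=edghts_example 8080
```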

The ngrok GSLB for Linode and multi-cloud

With these new non-functional requirements, their functional siblings must change, too. They don’t sound like much, but they add a ton of new operational complexity:

  1. A few instances of your SaaS deployed to VMs in globally distributed locations, with the option to extend to Kubernetes or even a multi-cloud infrastructure.
  2. A GSLB that distributes load no matter your deployment methods, geographies, or cloud providers.

The challenge here is finding your path to deploying these components in a manageable way—especially if you’re part of a startup or SMB that doesn’t have a dedicated networking or SecOps team to handle all the complexity on your behalf. What does that usually entail? We’re talking DMZs, VPNs, DNS, FQDNs, and other acronyms you wouldn’t be thrilled to deal with.

ngrok’s GSLB helps you find and stick to that path.

  • Zero configuration for all infrastructure: Once you route your upstream service through an ngrok agent, deployed as a standalone service, with the Kubernetes Operator, or directly into your app via an SDK (see the Go sketch after this list), your traffic is automatically GSLB-enabled and routed to the lowest-latency PoPs without input from platform engineers or network administrators. That comes in handy especially when your organization isn’t (yet) big enough for those roles in the first place, and it works instantly whether you have three VMs, 10 VMs and two Kubernetes clusters in different clouds, or any other possible combination.
  • Less complexity for application delivery: All the infrastructure and configuration required to modify how your applications interface with ngrok’s GSLB, like offloading authentication to PoPs, lives in a single, version-controllable YAML/JSON configuration file (an example follows this list). ngrok even does you one better—this configuration is also completely environment-agnostic, which means you can develop locally, test on a remote staging server, and push to prod with the same file.
  • Solid security checkpoints: When you offload authentication and security to ngrok’s GSLB, it takes the brunt of incoming attacks, not your upstream servers. All unauthorized requests, even if they come in the form of a massive DDoS attack, are blocked at ngrok’s edge for peace of mind.
  • Built-in observability: ngrok Network Traffic Inspector recently became Generally Available for all customers with live updates, full visibility into the headers and bodies of requests, and up to 90 days of retention—entirely for free. You can now customize and replay any real-world request, especially one that causes a production error or outage, to help with troubleshooting and dealing with third-party APIs.
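
For the SDK route mentioned in the first bullet, here’s a minimal sketch using ngrok’s Go SDK, <code>ngrok-go</code>. The domain is a placeholder, and the authtoken is assumed to live in the <code>NGROK_AUTHTOKEN</code> environment variable:

```go
package main

import (
	"context"
	"fmt"
	"log"
	"net/http"

	"golang.ngrok.com/ngrok"
	"golang.ngrok.com/ngrok/config"
)

func main() {
	// Listen on ngrok's global network instead of a local port; the library
	// dials the lowest-latency PoP and serves your domain from there.
	ln, err := ngrok.Listen(context.Background(),
		config.HTTPEndpoint(config.WithDomain("app.example.com")), // placeholder domain
		ngrok.WithAuthtokenFromEnv(), // reads NGROK_AUTHTOKEN
	)
	if err != nil {
		log.Fatal(err)
	}
	log.Println("Endpoint online:", ln.URL())

	// Your handlers serve traffic exactly as they would on a local listener.
	log.Fatal(http.Serve(ln, http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "Hello from a Linode VM behind ngrok's GSLB")
	})))
}
```

Because the listener lives on ngrok’s network, there’s no port to expose on the VM at all.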
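And for the configuration-file route: one hedged example, using the ngrok agent’s YAML format, that offloads Google OAuth to ngrok’s PoPs before requests ever reach your VM (the domain and allowed email domain are placeholders):

```yaml
# ngrok.yml: version it alongside your app; the same file works locally,
# on a staging server, and in production.
version: "2"
tunnels:
  saas:
    proto: http
    addr: 8080                      # your upstream service
    domain: app.example.com         # placeholder reserved domain
    oauth:
      provider: google              # auth is enforced at ngrok's edge...
      allow_domains: [example.com]  # ...so unauthorized requests never reach the VM
```

Start it with <code>ngrok start saas</code> and the OAuth checkpoint applies at every PoP that handles your traffic.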

How would anyone, whether we’re still talking about <code>/u/edtech_dev</code> or you and your next deployment, best combine the flexibility and trust of Linode’s compute instances with the drop-in performance and reliability of ngrok’s GSLB?

We already have a step-by-step guide that walks you through enabling an ngrok Edge and weighting traffic for high availability, effortless horizontal scale, A/B testing of API/app versions, and much more.

Once you’ve given it a spin, we’d love to hear from you in our new ngrok Community Repo: the best place for all discussions around ngrok, including bug reports and product feedback.

Joel Hans
Joel Hans is a Senior Developer Educator. Away from blog posts and demo apps, you might find him mountain biking, writing fiction, or digging holes in his yard.