We are delighted to announce the launch of ngrok’s always-on Global Server Load Balancer (GSLB). ngrok’s GSLB routes both end user traffic (traffic connecting to ngrok endpoints) and agent traffic automatically to the nearest Point of Presence (PoP) by latency, without requiring any additional configuration. In addition, GSLB handles geo-failover for traffic from our edge to agents deployed across the ngrok Global Network. It enhances resiliency by steering traffic from a PoP that is unavailable to one that is operational.
GSLB is now active by default for all our customers, so you can deliver faster and more reliable experiences for your end users. Think of it as Google Maps or Apple Maps for your network: it automatically determines the fastest path and steers end user and agent traffic along that route.
Beyond accelerating performance and increasing resiliency, ngrok’s GSLB is a preconfigured, hosted service that requires no upkeep from IT. It also simplifies application and API delivery and shields origin servers from unwanted traffic, eliminating operational overhead.
But first, what exactly is a GSLB?
What is a Global Server Load Balancer (aka GSLB)?
Application and web performance is paramount to delivering superior user experiences. According to Forbes, nearly half of users won’t wait longer than 2 seconds for a website to load and businesses with slow-loading websites lose a staggering $2.6 billion in revenue each year.
A GSLB addresses application slowness by intelligently distributing network traffic across many connected servers located in multiple Points of Presence (PoPs) dispersed around the world. It is used to achieve:
Enhanced application performance: End user traffic is routed to the point of presence with the lowest latency.
Improved resilience and high availability: If an entire PoP becomes unavailable, GSLB re-routes traffic to another PoP that is operational.
How does ngrok’s GSLB accelerate application performance?
ngrok’s GSLB boosts application performance by:
Intelligently routing end user as well as agent traffic to the PoP with the lowest latency
Terminating TLS at ngrok edges that are closest to the end user
Offloading functions that are not core to the application, such as load balancing, authentication, and resiliency, to ngrok PoPs that are closer to end users
Leveraging higher capacity backbone links
The reference architecture above illustrates latency-based routing. The end user request from Berlin to example.ngrok.app is first sent to a DNS server. The DNS server returns the IP address of the PoP with the lowest latency, in this case eu-fra-1. A connection is then established between the end user and this PoP.
When the ngrok Global Network receives a request, it determines which PoP provides the lowest latency and routes your request there. Say you’re an end user in Berlin making a request: ngrok’s Global Network compares the latency between Berlin and every PoP on the network. If latency to our PoP in Japan is lowest, it routes traffic through that PoP; if latency to our PoP in the EU is lower, it chooses that route instead.
This is important because latency between points on the internet changes over time as a result of network connectivity and routing. A request that was routed to US East today may be better served by US West tomorrow.
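The routing decision described above can be sketched in a few lines of Python. This is an illustrative model, not ngrok’s actual implementation: the PoP names and latency figures are hypothetical, and in practice the network measures latency continuously.

```python
# Hypothetical latencies (ms) measured from an end user in Berlin to each PoP.
# Because internet latency shifts over time, these values would be refreshed
# continuously rather than fixed.
measured_latency_ms = {
    "us-east-1": 95,
    "us-cal-1": 150,
    "eu-fra-1": 12,
    "jp-tyo-1": 240,
}

def nearest_pop(latencies: dict) -> str:
    """Pick the PoP with the lowest measured latency."""
    return min(latencies, key=latencies.get)

print(nearest_pop(measured_latency_ms))  # -> eu-fra-1
```

With the sample figures above, the Berlin user is steered to eu-fra-1; if tomorrow’s measurements favored a different PoP, the same selection would route there instead.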
ngrok agents started without a region specified automatically connect to the region with the lowest latency. This ensures requests made against that agent have the shortest connection time possible. In the example above, the agent deployed in San Francisco first contacts a DNS server. The DNS server returns the IP address of the PoP with the lowest latency, in this case us-cal-1. A connection is then established between the agent and this PoP.
How does ngrok’s GSLB improve resilience?
GSLB provides geo-aware load balancing and failover capability both for end user and agent traffic.
ngrok Cloud Edge: Traffic is seamlessly routed from a PoP that is down to one that is operational.
Agent Connections: If the PoP that the agent is connected to goes down, the agent will automatically reconnect to the nearest available PoP and traffic flow won’t be disrupted.
The reference architecture above illustrates geo-failover. The end user request from Berlin to example.ngrok.app is first sent to a DNS server. The DNS server returns the IP address of an available PoP, in this case eu-fra-1. A connection is then established between the end user and this PoP.
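Geo-failover, for both end user traffic and agent reconnection, can be modeled as the same latency-based selection restricted to healthy PoPs. A minimal sketch, assuming hypothetical PoP names and health data (not ngrok’s internal API):

```python
# Hypothetical latencies (ms) from a user near Amsterdam.
measured_latency_ms = {
    "eu-ams-1": 8,
    "eu-fra-1": 12,
    "us-east-1": 95,
}

# Health status as reported by (hypothetical) monitoring; eu-ams-1 is down.
pop_healthy = {"eu-ams-1": False, "eu-fra-1": True, "us-east-1": True}

def failover_target(latencies: dict, healthy: dict) -> str:
    """Choose the lowest-latency PoP among those that are operational."""
    candidates = {pop: ms for pop, ms in latencies.items() if healthy[pop]}
    if not candidates:
        raise RuntimeError("no operational PoP available")
    return min(candidates, key=candidates.get)

print(failover_target(measured_latency_ms, pop_healthy))  # -> eu-fra-1
```

Here the nominally nearest PoP (eu-ams-1) is excluded because it is down, so traffic falls over to the next-nearest operational PoP, eu-fra-1; an agent reconnecting after its PoP goes down would make the same choice.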
What sets ngrok’s GSLB apart?
Beyond improving application performance and resilience, ngrok’s GSLB eliminates the operational burden IT grapples with when deploying and managing appliance-based GSLBs:
Hosted GSLB: ngrok’s GSLB is a cloud-based solution that gives enterprises operational agility and instant scalability. Unlike appliance-based GSLBs, there is no need for provisioning, configuration, or ongoing maintenance.
Out-of-the-box, zero-configuration GSLB: ngrok’s GSLB is enabled by default and abstracts away the complexity of managing a GSLB. Optimal regions are chosen automatically by ngrok without any input from Developers, Platform Engineering, or Network Admins, eliminating provisioning and configuration steps.
Origin server protection: Security policies are enforced at the edge, so unauthorized requests are blocked there and never reach the origin network; only valid requests are sent to your services. This holds regardless of how you use ngrok: agent, embedded SDKs, Kubernetes Ingress Controller, or SSH. There is no need for IT and SecOps teams to create their own DMZ, and developers don’t have to write code into the application to block unauthenticated or unwanted requests; they can instead focus on core business logic.
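The effect of enforcing policy at the edge can be illustrated with a small sketch: each request is checked before it is ever forwarded to the origin. The token check and request shape below are hypothetical stand-ins for ngrok’s actual edge modules (OAuth, IP restrictions, and so on), not its real API.

```python
VALID_TOKENS = {"secret-token-123"}  # hypothetical allow-list for illustration

def forward_to_origin(request: dict) -> tuple:
    # Stand-in for proxying an already-validated request to your service.
    return (200, "handled by origin")

def edge_filter(request: dict) -> tuple:
    """Enforce auth at the edge; rejected requests never reach the origin."""
    token = request.get("authorization", "")
    if token not in VALID_TOKENS:
        return (401, "rejected at edge")  # origin never sees this request
    return forward_to_origin(request)

print(edge_filter({"authorization": "secret-token-123"}))  # -> (200, 'handled by origin')
print(edge_filter({}))                                     # -> (401, 'rejected at edge')
```

The point of the sketch is the call order: `forward_to_origin` is only reachable after the edge check passes, which is why the origin never spends resources on unauthenticated traffic.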
Reducing complexity of application delivery: With the introduction of ngrok’s GSLB, we continue to deliver on our mission to unify various networking primitives that are used for application delivery and eliminate tool sprawl. You can now use one solution - ngrok - as your GSLB, firewall and reverse proxy to serve apps and APIs across every stage of development - from pre-launch test/dev to production environments.
Instant availability of your new locations: ngrok’s GSLB removes the overhead of orchestrating the GSLB network when new locations are added or configuration changes are made. When a service is added in a new region, it instantly becomes available across ngrok’s Global Network. Configuration changes to the Edge also propagate without delay, so they take effect immediately. For instance, if authentication is added, traffic is authenticated at the ngrok Edge across our Global Network.
Designed to serve production workloads
ngrok already serves production-grade applications. GSLB is an important milestone that strengthens our platform so you can deliver faster, more resilient applications and APIs out of the box, without any deployment or ongoing maintenance, leading to increased engagement and customer satisfaction.
Start using ngrok's GSLB for delivering apps and APIs in production. If you don’t yet have an account, you can sign up here. Let us know if you run into any problems or have any questions. You can reach us on Twitter, the ngrok community on Slack, or at firstname.lastname@example.org.
Niji is a Senior Product Manager at ngrok helping shape ngrok user experience. Previously product at Kong and Postman. Outside of work Niji is an amateur pasta chef, early-stage investor, writer and open-source developer.