ngrok is also now your Kubernetes ingress

We've always believed that doing networking the right way should also be the easy way.

That means you should:

  1. Let your people work on their core competencies, not smashing together DDoS protection, NAT, WAFs, VPC tunnels, and load balancing just to get any kind of ingress working.
  2. Implement as much as possible on a global network, which creates the best possible experience for you (think scale and reliability) and consistently low latency for your users.
  3. Offload everything you need to hit those first two goals to folks who know how to collapse all that complexity into a single, simple interface (that's us!).

We've been helping folks reach these standards with their apps and APIs for a decade, but what about cloud native? Once you add Kubernetes to the mix, even just the ingress part of networking gets really hard really fast. To quote Shub, one of our solutions architects, the path toward production-ready K8s ingress leads too many well-intentioned people down one of many wrong paths:

"They are just choosing struggle."

How ngrok makes Kubernetes ingress struggle-free

To make K8s networking both right and easy, we've transformed our open-source Kubernetes Operator a few times over. To match the three principles above, we knew it must:

  1. Fully support any Ingress or Gateway API implementation, while also helping you do far more than basic routing and traffic management (in ways that are also both intuitive and flexible).
  2. Automatically transform those ingress implementations into ngrok resources so routing, authentication, transformation, and any other processing happens entirely on our network of distributed Points of Presence.
  3. Let you stand up all the other networking layers, like load balancing and DDoS protection, with that single interface—and also replace others (think VPNs) with new features.

That's just part of the story, but it's also been a solid year since we took a bird's-eye look at what's new and improved. Here's the TL;DR on ngrok and the Kubernetes Operator, complete with links so you can jump into the topic cluster you're most interested in:

  • Endpoints and the Traffic Policy engine are the new core primitives that help you manage ingress to your Kubernetes services.
  • Our Operator features the most intuitive and flexible implementation of the Gateway API.
  • Our custom resources (CRs) make getting started easy, while also automatically supporting your existing Kubernetes ingress just by changing an ingressClassName (Ingress) or controllerName (Gateway API).
  • Kubernetes bindings let you create endpoints that are only addressable by your Kubernetes clusters, opening you up to use cases like cross-cluster communication and projecting local services into remote clusters.
  • v1.0 of the Operator is on the horizon, including the deprecation of Edges.

We're not yet done, but scroll down if you'd like to see how far we've come in the last year.

Endpoints enter the Kubernetes world

Endpoints bridge the gap between your services and the traffic that needs to reach them. That gap usually takes a ton of complex networking, but we collapse it down into a single URL and the ports your services listen on.

The ngrok Kubernetes Operator can now create both cloud and agent endpoints, which in turn support HTTP, TCP, and TLS traffic no matter if you're using Ingress, Gateway API, or our resources directly. You can use the Operator to create a public URL for a specific Kubernetes service, much like you might already use the ngrok CLI for sharing a local service on localhost:8080 with a friend, or you can use a cloud endpoint to act as your gateway to any number of apps or APIs.

That's not all:

  • Endpoints can also be bound publicly or internally, making them accessible to anyone or only other endpoints in your ngrok account, respectively.
  • All endpoints created by the Operator with the same URL can be pooled, which lets you run multiple replicas for high availability, even if they're in different clusters or clouds. That works automatically for agent endpoints, too.
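As a sketch of how pooling looks in practice, two AgentEndpoint resources can declare the same public URL, and ngrok balances traffic across both replicas, whether they run in one cluster or two. The hostname and upstream addresses here are hypothetical, and the fields follow the AgentEndpoint shape shown later in this post:

```yaml
# Two AgentEndpoints sharing one URL; ngrok pools them and
# load balances across both, even across clusters or clouds.
apiVersion: ngrok.k8s.ngrok.com/v1alpha1
kind: AgentEndpoint
metadata:
  name: api-replica-a
spec:
  url: https://api.example.com   # same URL on both endpoints
  upstream:
    url: http://api.cluster-a:8080
---
apiVersion: ngrok.k8s.ngrok.com/v1alpha1
kind: AgentEndpoint
metadata:
  name: api-replica-b
spec:
  url: https://api.example.com
  upstream:
    url: http://api.cluster-b:8080
```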

Actionable Kubernetes ingress with Traffic Policy

The great thing about endpoints is that any time traffic passes through one, during both the request and response phases, you can manipulate it with our Traffic Policy engine. This is where all the routing, traffic management, and authentication magic happens.

We've written a lot about the many ways you can use Traffic Policy to filter, manage, and orchestrate traffic to apps or APIs.

As a configuration language, based heavily on CEL, Traffic Policy is far more readable and understandable than what you'll find with other ingress and API gateway providers. We're working really hard to strike the perfect balance between being flexible enough to handle 90% of the Kubernetes ingress use cases while also being friendly enough for beginners to pick up quickly.

Part of that usability means being able to attach the NgrokTrafficPolicy resource to your endpoints on:

  • A CloudEndpoint to put all your Kubernetes ingress behind a common set of rules, like authentication and authorization.
  • A single AgentEndpoint that requires unique traffic management, like rate limiting or URL rewrites.
  • The Gateway resource of a Gateway API implementation, which runs the rules for all requests matching any of your listeners.
  • A single HTTPRoute/TCPRoute/TLSRoute resource as an extensionsRef to apply Traffic Policy to only that route.
  • As an annotation to the entire Ingress resource for default policies.
  • As the backend to a single Ingress rule.
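As one hedged example of the annotation route, a policy can be written once as an NgrokTrafficPolicy and applied to an entire Ingress. The annotation key, the CR's spec.policy shape, and the custom-response action config below follow the patterns in ngrok's docs, but double-check the exact names against your Operator version; the service and hostnames are hypothetical:

```yaml
# Illustrative policy that rejects requests with a bot-like
# User-Agent; the expression syntax is CEL, as in Traffic Policy.
apiVersion: ngrok.k8s.ngrok.com/v1alpha1
kind: NgrokTrafficPolicy
metadata:
  name: block-bots
spec:
  policy:
    on_http_request:
      - expressions:
          - "req.user_agent.contains('bot')"
        actions:
          - type: custom-response
            config:
              status_code: 403
---
# Attach it to the whole Ingress via annotation
# (assumed annotation key; verify against the Operator docs).
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    k8s.ngrok.com/traffic-policy: block-bots
spec:
  ingressClassName: ngrok
  defaultBackend:
    service:
      name: app
      port:
        number: 8080
```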

What does all this mean?

The way you create endpoints and apply Traffic Policy depends on the kind of Kubernetes ingress you need and which implementation you've chosen, if any.

What's important to us is that the usability doesn't change based on whether you're using our CRs or a Kubernetes standard, and you can still get really flexible with where and how you compose these rules along the request lifecycle. 

Yes, we still sweat all these small things. When you use these primitives together, you bring yourself, your team, and your infrastructure up to the goals and standards that have guided ngrok since the beginning:

  • Deploy ingress to Kubernetes in minutes with ngrok resources
  • Spin up a single-cloud or multicloud API gateway with automatic failover
  • Save cost on expensive resources by routing to multiple clusters from a single cloud endpoint
  • Load balance between multiple replicas of a single Kubernetes service across multiple clusters or clouds

And those are just the start of what's possible on the Operator as of today.

Take an easier path to Gateway API

Support for and compliance with the Gateway API spec, and the wider cloud native ecosystem, is important to us… because we know that it's important to you too.

With the Operator, we started with support for GatewayClass, Gateway, and HTTPRoute, but we've recently extended our support to TCPRoute and TLSRoute, so those of you with more complex Gateway API implementations can use the ngrok Operator without changing your ingress configurations. We also support all the filters, matching rules, and weighting configurations you might already have.

If you're already using Gateway API with another ingress controller, you can install the ngrok Kubernetes Operator and edit your existing GatewayClass to point to ngrok instead.

apiVersion: gateway.networking.k8s.io/v1
kind: GatewayClass
metadata:
  name: ngrok
spec:
  controllerName: ngrok.com/gateway-controller

With your GatewayClass configured, the Operator consumes your Gateway and route resources and translates them into ngrok resources so your Kubernetes ingress automatically works with endpoints and Traffic Policy.

Each hostname in a Gateway resource becomes a cloud endpoint, which uses Traffic Policy to route traffic to any number of internal agent endpoints, which the operator creates for each HTTP, TLS, or TCP route. You can also add Traffic Policy to either Gateway or ...Route resources, giving you far more flexibility and power than Gateway API allows alone.
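As a sketch of that translation (hostnames and service names are hypothetical), a minimal Gateway plus HTTPRoute pair like this would become a cloud endpoint for the hostname and an internal agent endpoint for the backing service:

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: prod-gateway
spec:
  gatewayClassName: ngrok       # the GatewayClass shown above
  listeners:
    - name: http
      hostname: api.example.com # becomes a cloud endpoint
      port: 80
      protocol: HTTP
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: orders
spec:
  parentRefs:
    - name: prod-gateway
  hostnames:
    - api.example.com
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /orders
      backendRefs:
        - name: orders          # becomes an internal agent endpoint
          port: 8080
```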

This automatic translation gives you the same separation of concerns that Gateway API was designed to support:

  • As an infrastructure, DevOps, or platform engineer, you control the Gateway, its hostname, the routing topology, and any Traffic Policy rules you want to apply across your Kubernetes ingress.
  • As an app/API developer, you can then ship new services to the production cluster alongside a new ...Route resource, and get your service up and running instantly—with the flexibility to compose other Traffic Policy rules behind what's already on the Gateway.

Aside from the automatic translation and support for this separation of concerns, our Gateway API implementation is meaningfully more complete than popular ingress controllers and runs as a global cloud service. That makes it purpose-built to route traffic from anywhere while collapsing a bunch of other typical requirements, like load balancing and DDoS protection, into the Operator interface. You focus on your Gateways, and we'll focus on all the networking that makes for great Kubernetes ingress.

Get simpler ingress or deeper control with ngrok's K8s resources

As I've mentioned, whenever you use Gateway API or Ingress resources, the Kubernetes Operator automatically translates them into our custom resources: CloudEndpoint and AgentEndpoint.

You can also use these CRs directly if you want a simpler path for ingress into your Kubernetes services. Here's what ingress into a single example API service looks like.

apiVersion: ngrok.k8s.ngrok.com/v1alpha1
kind: AgentEndpoint
metadata:
  name: service-endpoint
spec:
  url: https://example.com
  upstream:
    url: http://service.default:11434

The simplest Kubernetes ingress? ✅

What if you want to take more control of your Kubernetes ingress and skip past Ingress or Gateway API in favor of creating different shapes of endpoints and Traffic Policy? Maybe you want a combination of cloud endpoints and agent endpoints for ingress into any number of apps or APIs. Using the forward-internal Traffic Policy action, you can route specific paths, like /foo and /bar, directly to the associated services with CEL interpolation.

apiVersion: ngrok.k8s.ngrok.com/v1alpha1
kind: CloudEndpoint
metadata:
  name: cloud-endpoint
spec:
  url: https://api.example.com
  trafficPolicy:
    policy:
      on_http_request:
        - actions:
            - type: forward-internal
              config:
                url: https://${req.url.path.split('/')[1]}.internal
---
apiVersion: ngrok.k8s.ngrok.com/v1alpha1
kind: AgentEndpoint
metadata:
  name: foo-endpoint
spec:
  url: https://foo.internal
  upstream:
    url: http://foo.default:12345
---
apiVersion: ngrok.k8s.ngrok.com/v1alpha1
kind: AgentEndpoint
metadata:
  name: bar-endpoint
spec:
  url: https://bar.internal
  upstream:
    url: http://bar.default:23456

The most composable Kubernetes ingress? Also ✅

The same idea applies to our NgrokTrafficPolicy CR, which lets you insert Traffic Policy filters and actions into any number of places to manage traffic in ways that are precise, approachable, and maintainable long-term.
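For instance, a route-scoped policy can be referenced from an HTTPRoute filter. The group and kind below follow the CR's apiVersion as shown earlier in this post, while the route, gateway, and policy names are hypothetical:

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: checkout
spec:
  parentRefs:
    - name: prod-gateway
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /checkout
      filters:
        # Apply an NgrokTrafficPolicy to only this route
        - type: ExtensionRef
          extensionRef:
            group: ngrok.k8s.ngrok.com
            kind: NgrokTrafficPolicy
            name: checkout-rate-limit
      backendRefs:
        - name: checkout
          port: 8080
```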

Connect services or entire clusters with bindings

Let's say you want to project a service you're working on locally into a staging cluster to cut out the entire painful waiting time of the standard Kubernetes dev loop...

It's a common pain point, but the solutions typically require you to deploy sidecars for each container or cobble together a VPN solution that makes services "think" they're on the same network. With ngrok, you can extend your Kubernetes ingress with the concept of Kubernetes bindings, which declare that an endpoint is only accessible inside of a Kubernetes cluster where you've installed the Operator with your account's credentials.

In other words: You can connect services to clusters or clusters to clusters without making any of the endpoints publicly addressable.

A Kubernetes-bound endpoint could also be a Kubernetes service: 

apiVersion: ngrok.k8s.ngrok.com/v1alpha1
kind: AgentEndpoint
metadata:
  name: example-bound-endpoint
spec:
  bindings:
    - kubernetes
  url: http://my-bound-service.my-namespace
  upstream:
    url: http://my-service.my-namespace:80

Or any service:

ngrok http 80 --url http://my-bound-service.my-namespace --binding kubernetes

In either case, other services in your Kubernetes clusters can now communicate with http://my-bound-service.my-namespace as though they're in the same cluster, even if your service is actually running on your laptop halfway around the world.

We've also covered the service projection use case in more depth on a separate post.

That covers local development of K8s services and streamlining the dev loop, but what else can you do with bindings?

  • Build a cross-cluster service mesh and extend into multi-region or multicloud
  • Securely connect to external networks, like a customer's database, to power your services
  • Connect Kubernetes services to your existing on-premises applications and APIs
  • Sync clusters across hybrid environments

If you start using ngrok for Kubernetes ingress, bindings are an incredibly powerful value-add—you get site-to-site connectivity without VPNs or other complex networking, all your endpoints remain hidden from the public internet, and it's all backed by the same secure tunneling technology that's been trusted by millions of developers over the last decade.

What can you expect on the road to v1.0 of the Kubernetes Operator?

We just released v0.18.0, which includes, among other fixes and improvements, fresh support for TCPRoute and TLSRoute in your Gateway API implementations.

If you're already using the ngrok Kubernetes Operator, you can upgrade in-place with Helm:

helm repo update
helm upgrade ngrok-operator ngrok/ngrok-operator --reuse-values

This release also includes a new mapping strategy that minimizes the number of endpoints automatically created from Gateway API and Ingress resources, which in turn simplifies your stack and reduces cost—just one example of a small detail that helps you do Kubernetes ingress both the simple and right way.

v0.19.0 is all about polish and introducing our new monthly release cadence for the Kubernetes Operator. v1.0 means honoring the quality, consistency, and stability that you expect from that kind of release, and along the way, tackling a number of projects based on requests we've heard from you or on our own need to dogfood ngrok across all our clusters. That includes:

  • Deeper Gateway API support, including GRPCRoute
  • Better ergonomics and documentation for Kubernetes-bound endpoints and endpoint selectors
  • Alignment of our API and CRDs around consistent names, kinds, and field structures
  • Cleaner Helm configuration with more intuitive values and field names
  • Better status reporting to give you clear indications in your ngrok dashboard when something is failing and why
  • Improved interoperability with community tools like ExternalDNS
  • Automatic domain reservation and deletion

That's what our roadmap looks like right now. Something you think we're missing? Let us know! The Kubernetes Operator is an open-source project on GitHub (ngrok/ngrok-operator), and we'd love to see you drop an issue for must-have features or ideas on making the ngrok way of Kubernetes ingress even better.

Prepare for the impending edge of Edges

One of those long-term changes is that we're deprecating edges and modules and will remove them on December 31, 2025. As previous versions of the Operator created edges and modules for managing traffic, you will soon need to make the transition to the new paradigm of endpoints and Traffic Policy.

Two points of good news on that front:

  • As of v0.17.0, the Operator only creates endpoints by default—you need to explicitly enable an annotation to continue creating edges.
  • If you upgrade to v0.18.0 or newer, the Operator will automatically transform any edges into endpoints.

If you're using modules with the k8s.ngrok.com/modules annotation, you'll need to update to Traffic Policy, which lets you take even more action than modules in ways that are also far more intuitive and composable.

Ready to stop 'choosing struggle' with Kubernetes?

The ngrok Kubernetes Operator and its developer docs are ready for action. I recommend you first check out our getting started with the Operator guide, then check out the details on ngrok's custom resources. Next, poke around at all the ways to reuse your existing Gateway API or Ingress implementations with the Operator.

When you're ready to get serious with your production clusters, you can also replace your existing NGINX ingress controller in less than a minute to see what ngrok can do for your Kubernetes ingress—just change ingressClassName: nginx to ingressClassName: ngrok.
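In a standard Ingress resource, that swap is the only change you need to make (hostname and service names here are hypothetical):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
spec:
  ingressClassName: ngrok  # was: nginx
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app
                port:
                  number: 80
```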

Gateway API user? Change your GatewayClass resource to controllerName: ngrok.com/gateway-controller, and flip your Gateway resource to gatewayClassName: ngrok.

Get a feel for how the automatic translation works and play around with Traffic Policy, and once you're comfortable, dig into the real fun that is ripping out all your load balancers, NAT, DDoS protection services, and VPC nonsense.

v1.0 of ngrok's Kubernetes Operator is coming soon, but your window to have a say in how it all works is far from closed. Send all your product feedback, bug reports, and success stories to the Operator repo or our team at support@ngrok.com.

Until then, enjoy Kubernetes ingress done the right and easy way.

Joel Hans
Joel Hans is a Senior Developer Educator. Away from blog posts and demo apps, you might find him mountain biking, writing fiction, or digging holes in his yard.