Introducing Cloud Endpoints

Have you ever wanted an ngrok endpoint that doesn’t go offline when you get disconnected from the internet?

Today, we are excited to introduce Cloud Endpoints: persistent, always-on ngrok endpoints that are managed centrally via the dashboard and API. They let you route and respond to traffic even if your upstream services are offline. Cloud endpoints use ngrok’s new Traffic Policy system to configure traffic handling just like the agent endpoints (aka tunnels) you're familiar with.

Cloud endpoints solve a number of problems for ngrok developers. Let’s take a closer look:

  • Always on — Because they are not tied to the lifecycle of an agent process and live persistently until deleted, cloud endpoints let you handle traffic even if your upstream service goes offline. They’re frequently used to render a custom error page or fail over to another service.
  • Centrally managed — We see customers choose cloud endpoints to be the "front door" where they standardize how to handle, authenticate, and route traffic to their apps and APIs. This allows you to create architectures where you treat the agent endpoints (aka tunnels) as ‘dumb pipes’ by moving the smarts to the centrally-managed cloud endpoints.
  • Traffic Policy configuration — Cloud endpoints use the exact same Traffic Policy configuration as agent endpoints so that you can transition between them with just a simple copy/paste. You only need to learn a single configuration language because you can use it with every endpoint.
  • API automation — The Endpoints API resource can be used to automate management of your cloud endpoints. You can automate cloud endpoint creation via the ngrok API, API client libraries, and the Kubernetes Operator; see the sketch after this list.
  • Replacement for Edges — Cloud endpoints deprecate Edges. They are more flexible and easier to work with. See the guide on how to migrate off of Edges to cloud endpoints; Edges will continue to function while we help everyone move over.
  • Fully integrated — Cloud endpoints are a first-class feature of the ngrok platform, which means they work with the dashboard, the API, and the rest of the platform just like agent endpoints do.
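
For instance, if you’d rather not shell out to the CLI, you can create a cloud endpoint by calling the REST API directly. Here’s a minimal sketch using curl; we’re assuming the JSON payload mirrors the CLI flags shown below, so treat the exact field names as illustrative:

curl -X POST https://api.ngrok.com/endpoints \
  -H "Authorization: Bearer $NGROK_API_KEY" \
  -H "Ngrok-Version: 2" \
  -H "Content-Type: application/json" \
  -d '{"type":"cloud","url":"https://example.ngrok.app","traffic_policy":"{\"on_http_request\":[{\"actions\":[{\"type\":\"custom-response\",\"config\":{\"status_code\":200,\"content\":\"hello from the API\"}}]}]}"}'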

Cloud endpoints are available today to users on our free and pay-as-you-go plans. You can read the cloud endpoints documentation to get into the nitty-gritty details about how they work.

How to create a cloud endpoint 

Once you've reserved a domain on ngrok, you can create a cloud endpoint on the ngrok dashboard or via API. For the example below, we’re going to use the API via the ngrok agent CLI (you may need to run ngrok update first!).

Creating a cloud endpoint is a single API call where you specify the endpoint’s URL and its Traffic Policy:

ngrok api endpoints create \
  --api-key {YOUR_API_KEY} \
  --type cloud \
  --url https://inconshreveable.ngrok.app \
  --traffic-policy '{"on_http_request":[{"actions":[{"type":"custom-response","config":{"status_code":200,"content":"hello world from my new cloud endpoint"}}]}]}'


Now let’s try it out:

$ curl https://inconshreveable.ngrok.app
> hello world from my new cloud endpoint


Easy. You’ve got a cloud endpoint online serving requests! Now that we know how to create a cloud endpoint, let’s take a deeper look into what you’ll use them for.

Cloud endpoints as your 'front door'

Combining cloud endpoints with agent endpoints gives you, as a developer, full autonomy over when and where your services become accessible.

For instance, your friendly Ops team can create a public cloud endpoint, such as api.example.com, and configure JWT validation (with the help of Auth0!) to authenticate and authorize client requests before they reach your internal service. Meanwhile, you keep building critical functionality, such as pricing, on an agent endpoint with an internal binding like api-pricing.example.internal. When ready, Ops can enable public API access via api.example.com/pricing and route to api-pricing.example.internal using the forward-internal action. 

When client requests hit api.example.com/pricing, ngrok forwards them to your agent endpoint (api-pricing.example.internal). This setup lets you ship fast, eliminating the headaches that come from filing tickets for Ops.
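
On the developer side, bringing the pricing service online as an internal agent endpoint is a single command (a sketch, assuming your service listens locally on port 8080):

ngrok http 8080 --url https://api-pricing.example.internal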

Here is the Traffic Policy snippet that makes this possible:

on_http_request:
  - name: Authenticate requests with JWT validation
    actions:
      - type: jwt-validation
        config:
          issuer:
            allow_list:
              - value: https://<AUTH0_TENANT>.us.auth0.com/
          audience:
            allow_list:
              - value: https://api.example.com
          http:
            tokens:
              - type: jwt
                method: header
                name: Authorization
                prefix: 'Bearer '
          jws:
            allowed_algorithms:
              - RS256
            keys:
              sources:
                additional_jkus:
                  - https://<AUTH0_TENANT>.us.auth0.com/.well-known/jwks.json
  - name: Route /pricing/* to new internal agent endpoint
    expressions:
      - req.url.path.startsWith('/pricing')
    actions:
      - type: forward-internal
        config:
          url: https://api-pricing.example.internal
  - name: Route all other traffic to existing internal agent endpoint
    actions:
      - type: forward-internal
        config:
          url: https://api.example.internal
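
With that policy in place, an authenticated client call looks like any other API request; here, $TOKEN is a placeholder for a JWT issued by your Auth0 tenant:

curl -H "Authorization: Bearer $TOKEN" https://api.example.com/pricing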


To dig deeper into how to set up the routing that makes Ops control and developer self-service possible, check out the cloud endpoints and Traffic Policy documentation.

Show a custom error page if your app is offline

We all have bad days. Services crash. Fixes take longer than you'd like. Users first wonder what's wrong, then start reaching out to support.

You might currently bring your service online with a public agent endpoint, which is what ngrok creates when you run ngrok http 8080 --url https://example.com. If that upstream service at port 8080 crashes, requests will fail silently, or maybe worse, confusingly.

Cloud endpoints help you deliver an informative error page without having to host more web content on your own infra.

Instead, you can:

  1. Create a cloud endpoint with a Traffic Policy that includes a forward-internal action to your agent endpoint.
  2. Update your Traffic Policy to use the custom-response Traffic Policy action when the forward-internal action fails.
  3. Convert your public agent endpoint to an internal agent endpoint: ngrok http 8080 --url https://your-agent-endpoint.internal.

The Traffic Policy example will look like this:

on_http_request:
  - actions:
      - type: forward-internal
        config:
          url: https://your-agent-endpoint.internal
          on_error: continue
      - type: custom-response
        config:
          status_code: 503
          content: |
            <!DOCTYPE html>
            <html>
            <body>
              <h1>Service Temporarily Unavailable</h1>
              <p>We apologize, but our service is currently offline. Please try again later.</p>
            </body>
            </html>
          headers:
            content-type: text/html


Again, you don’t have to host a specific service or webpage for your error messages—just use ngrok’s Traffic Policy to serve up static content, and make it your own with HTML.

ngrok.com runs on ngrok cloud endpoints

At ngrok, we dogfood everything we ship to customers. We’ve already been using cloud endpoints and find all sorts of uses for them. You’re even accessing one right now!

The https://ngrok.com site is a cloud endpoint itself, with a chain of Traffic Policy rules to filter and take action on requests as they hit our network. Among other things, we block Tor traffic using a custom error page like the one shown above, add redirects, and route traffic to multiple external services, like our blog, docs, and downloads page.

For example, here’s how we forward ngrok.com/downloads to a Vercel app with the upcoming forward-external Traffic Policy action: 

on_http_request:
  - expressions:
      - req.url.path.startsWith('/downloads') ||
        req.url.path.startsWith('/__manifest')
    actions:
      - type: forward-external
        config:
          url: https://<NGROK_DOWNLOADS_DEPLOY>.vercel.app
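
While forward-external is still upcoming, the redirects in that chain use the existing redirect Traffic Policy action. Here’s a sketch of what one of those rules might look like (the path is hypothetical; check the redirect action’s docs for the exact config keys):

on_http_request:
  - expressions:
      - req.url.path == '/download'
    actions:
      - type: redirect
        config:
          to: https://ngrok.com/downloads
          status_code: 301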


Replacement for ngrok Edges

Cloud endpoints may feel familiar if you’ve used Edges before. They replace and deprecate Edges with a primitive that is both simpler and more flexible. They are powered by our expressive Traffic Policy engine that was built with modern traffic routing needs in mind. Cloud endpoints improve on Edges with:

  • Simplified object model: You don’t have to grapple with Tunnel Groups, Backends, Modules, Edge Routes, or labeled tunnels. Everything is now an endpoint with an associated Traffic Policy. Traffic management becomes more intuitive, reducing the learning curve.
  • Simplified traffic routing: With Traffic Policy, cloud endpoints enable you to route traffic not just by path but also by headers, subdomains, and more. This added flexibility gives you greater control over how traffic flows to your services.
  • Fewer API calls: Setting up a cloud endpoint requires just a single API call, unlike Edges, which involve multiple calls for tunnel groups, edge routes, and modules. This reduces complexity and minimizes the risk of failures during automation.

Want to get off Edges? See the guide on how to migrate off of Edges to cloud endpoints. There is no planned end-of-life date for Edges yet; it will be announced separately, with plenty of time to transition and automated tooling to help you migrate.

Wrapping up

To close, we’re pretty pumped about cloud endpoints and the flexibility they bring to managing your traffic. So excited, in fact, that we’re using them ourselves. Stay tuned for more in-depth guides on how you can use cloud endpoints in your own workflows. Until then, peace.

Nijiko Yonskai
Niji is a Principal Product Manager who helps shape ngrok user experience. Previously product at Kong and Postman.