Modernize and secure how you access remote devices

March 27, 2025 | 5 min read
Ishan Jain

As a customer engineer working with complex environments, I often see technicians, engineers, and administrators struggling to connect to remote servers, databases, and applications across multiple sites. The methods they're accustomed to, like multi-hop Remote Desktop Protocol (RDP) or Secure Shell (SSH) sessions, actually just make their lives harder.

Every time I see it happen, I just want to jump into action and give them the recipe for modernizing it—but first, let's talk about why that's a problem.

Why remote access is such an inefficient and complex process

Traditionally, accessing remote infrastructure requires:

  1. Connecting to a corporate VPN
  2. Requesting access through a PAM solution
  3. Initiating an RDP or SSH session to a jump box
  4. Launching additional RDP or SSH sessions from the jump box to the target system

This workflow introduces:

  • Latency issues: Each additional session increases response times.
  • Operational complexity: Managing credentials and permissions across multiple systems.
  • Limited scalability: As more sites and users require access, overhead increases.
  • Security concerns: Increased attack surface due to multiple exposed access points.
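In practice, the SSH flavor of steps 3 and 4 often ends up as a chain of jumps. A sketch of what that looks like (the hostnames and user here are hypothetical):

```shell
# Hop through two jump boxes to reach the target device.
# Each -J hop adds latency, another credential to manage,
# and another exposed box to patch and monitor.
ssh -J admin@jump1.corp.example.com,admin@jump2.site.example.com admin@10.20.0.15
```

RDP is worse still: there is no single-command equivalent, so each hop means launching another full desktop session from inside the previous one.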

ngrok's solution: Secure remote access without a VPN

With ngrok, an organization can replace complex, multi-hop access workflows with single-step, secure remote access by leveraging Cloud Endpoints, Internal Endpoints, and the agent API.

Here’s what that might look like:

Instead of deploying an ngrok agent on every individual device, a single ngrok agent installed on the network is enough. That agent can route traffic internally to different services or devices, centralizing access and securing connectivity, and can even manage tunnels dynamically with the agent API, all without a mess of networking.

Understanding new ngrok primitives: cloud and internal endpoints

An internal endpoint enables a service to be reachable within ngrok, without being publicly exposed.

  • Only cloud endpoints or internal services can route traffic to them.
  • Can’t be accessed directly from the internet.
  • Used for telemetry APIs, databases, and dashboards.

Example: A telemetry API runs on a local server (192.168.1.100:8080). Instead of exposing it publicly, you create an internal endpoint:

url: https://api.internal
upstream:
  url: http://192.168.1.100:8080

Now, this API is only accessible inside ngrok’s private network associated with your account.
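For quick experiments, the same internal endpoint can also be started from the CLI instead of the config file (this assumes a recent v3 agent; the addresses match the example above):

```shell
# Start an internal endpoint for the telemetry API from the CLI.
# It is reachable only from cloud endpoints in your ngrok account,
# never directly from the public internet.
ngrok http http://192.168.1.100:8080 --url https://api.internal
```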

A cloud endpoint is a permanent, externally accessible entry point into your network.

  • Managed centrally via the ngrok API or dashboard.
  • Does not forward traffic to the agent by default; it must be configured to route traffic to internal endpoints.
  • Used for exposing services to external cloud apps securely.

In an ideal world, like in the diagram above, you'd also use a custom agent ingress address, which brands ngrok's connectivity as your own for trust and security enforcement. It's a feature you need to contact us to enable, though, so consider it a nice-to-have you can turn on once you've built this new network.

Configure ngrok

Instead of manually starting endpoints with CLI commands, it’s best to use the ngrok configuration file. This ensures that all endpoints start consistently and can be managed as a background service.

version: "3"
agent:
  authtoken: <YOUR_AUTH_TOKEN>

endpoints:
  - name: agent-api
    description: ngrok-agent-api
    url: https://agent.internal
    binding: internal
    upstream:
      url: 4040

  - name: rdp-server
    url: tcp://rdp.internal:3389
    binding: internal
    upstream:
      url: 3389

  - name: ssh-server
    url: tcp://ssh.internal:22
    binding: internal
    upstream:
      url: 22

Start ngrok as a background service

Running ngrok as a system service ensures it automatically starts on boot, restarts after crashes, and logs errors to the OS logging service.

ngrok service install --config /etc/ngrok.yml
ngrok service start

This will start all tunnels defined in the configuration file, ensure ngrok runs persistently in the background, and integrate with native OS service tooling.
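On a Linux host running systemd, for example, you can confirm the service came up and watch its output (the unit name assumes the default ngrok service install; on Windows the service shows up in services.msc and logs to the Event Log):

```shell
# Check that the ngrok service is running and follow its logs.
systemctl status ngrok
journalctl -u ngrok -f
```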

Create cloud endpoints for persistent access

You can make requests to ngrok's API to create your cloud endpoints.

First, a cloud endpoint to control your agent API.

curl -X POST \
  -H "Authorization: Bearer <NGROK_API_KEY>" \
  -H "Content-Type: application/json" \
  -H "Ngrok-Version: 2" \
  -d '{
    "type": "cloud",
    "url": "https://agent.example.com/",
    "traffic_policy": {
      "on_http_request": [
        {
          "expressions": ["req.url.path.startsWith('/api')"],
          "actions": [{ "type": "forward-internal", "config": { "url": "https://agent.internal" } }]
        }
      ]
    }
  }' \
  https://api.ngrok.com/endpoints

Second, a TCP cloud endpoint to route your SSH sessions to tcp://ssh.internal:22. This is also a great opportunity to enable the ip-restrictions Traffic Policy action, which ensures that only you or other trusted folks can access the SSH or RDP services on the remote devices.
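If you prefer to keep that policy readable and version-controlled, you can draft it as a standalone YAML file before embedding it in the API call. Note that restrict-ips is listed first, so the IP check runs before any traffic is forwarded; the CIDR ranges are placeholders you'd swap for your own:

```shell
# Draft the TCP policy in a file for review before sending it to the API.
cat > ssh-policy.yml <<'EOF'
on_tcp_connect:
  - actions:
      # Evaluate IP restrictions before anything is forwarded.
      - type: restrict-ips
        config:
          enforce: true
          allow:
            - 203.0.113.0/24
          deny:
            - 192.0.2.0/24
      # Then hand the connection to the internal SSH endpoint.
      - type: forward-internal
        config:
          url: "tcp://ssh.internal:22"
EOF
```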

curl -X POST \
  -H "Authorization: Bearer <NGROK_API_KEY>" \
  -H "Content-Type: application/json" \
  -H "Ngrok-Version: 2" \
  -d '{ "url": "tcp://5.tcp.ngrok.io:23028",
          "type": "cloud",
          "bindings": ["public"],
          "traffic_policy": "on_tcp_connect:\n  - actions:\n      - type: restrict-ips\n        config:\n          enforce: true\n          allow:\n            - 203.0.113.0/24\n          deny:\n            - 192.0.2.0/24\n      - type: forward-internal\n        config:\n          url: \"tcp://ssh.internal:22\"" }' \
  https://api.ngrok.com/endpoints

You'll want to do the same with a TCP cloud endpoint that points your requests to the RDP service.
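With the TCP cloud endpoints in place, connecting becomes a single step from any allowed IP. A sketch, reusing the public address from the example above (the username and the RDP address are hypothetical):

```shell
# SSH straight through the public TCP cloud endpoint; no VPN, no jump box.
ssh -p 23028 admin@5.tcp.ngrok.io

# For RDP, point your client at the RDP cloud endpoint's address instead,
# e.g. on Windows: mstsc /v:6.tcp.ngrok.io:24011
```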

Use the agent API to dynamically create tunnels

The agent API allows you to programmatically manage tunnels, which lets you:

  1. Dynamically spin up a tunnel only when you need it
  2. Centralize and standardize how you manage tunnels no matter where you are or who is using them
  3. Restrict access control using IP-based Traffic Policy rules

For example, you can fire up a new SSH tunnel like so.

curl -X POST \
  -H "Content-Type: application/json" \
  -d '{
    "name": "ssh-session",
    "proto": "tcp",
    "addr": "10.1.1.50:22"
  }' \
  https://agent.example.com/api/tunnels

And access it through your cloud endpoint at tcp://5.tcp.ngrok.io:23028.

Secure remote access, the right way

You've now built that modernized and secure network to access any and all remote services over SSH and RDP.

Along the way, you've also eliminated nested SSH/RDP sessions to improve latency and granted native access to remote services with security in mind. IP restrictions and access control policies keep your compliance in check, and you can scale access across multiple sites with a single agent.

If you need some help getting this set up, our device gateway doc has generic advice, but you can always email me or the entire customer success team.

But, your first step: sign up for your free ngrok account and get that agent running!
