
Introducing http-request: Call your services directly from Traffic Policy
You’ve been asking for more control over not just where traffic goes, but how you can tie together its logic with the services you’ve already deployed. I’m happy to say we’ve just added a powerful new action to our Traffic Policy engine, `http-request`, available now in developer preview.
It lets you make HTTP calls to internal services (via internal endpoints) right from your traffic policy, so you can validate requests, trigger side-chain events, or route traffic dynamically without writing new middleware or deploying another gateway.
If you’ve ever shipped a one-off service just to validate a token or call a webhook, you already know why this matters. And now you can do it all directly from your traffic policy.
Let me show you how it works.
Call internal services as part of your policy
First, I’ll start off by showing you a real example of using the new `http-request` action. This traffic policy calls an internal auth service to check a token before letting traffic through:
```yaml
on_http_request:
  - name: ValidateToken
    actions:
      - type: http-request
        config:
          url: https://auth.internal/validate
          method: POST
          headers:
            content-type: application/json
          body: '{ "token": "${req.headers[\"authorization\"][0]}" }'
          on_error: halt
```
If the validation fails, ngrok halts the request before it ever reaches your app. You can use CEL expressions directly inside the body, URL, query parameters, or headers.
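For instance, interpolation also works in the URL itself. Here’s a sketch of passing a request header as a query parameter — the `lookup.internal` service and its `user_id` parameter are hypothetical, just to show the shape:

```yaml
on_http_request:
  - name: LookupUser
    actions:
      - type: http-request
        config:
          # CEL interpolation in the URL and query string, not just the body
          url: https://lookup.internal/users?user_id=${req.headers["x-user-id"][0]}
          method: GET
```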
Use it for auth, logging, or internal routing
You can use the new `http-request` action for many different use cases:
- Auth: Validate incoming requests against an identity service
- Logging: Send events or errors to a collector
- Chaining: Make requests to internal services to transform or inspect traffic, like “pre-tiering” requests based on customer data
Because policies can run in both the `on_http_request` and `on_http_response` phases, you have full control before and after your service has handled the request.
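As a sketch of the logging use case — the `logs.internal` collector endpoint here is hypothetical — you could emit an event after your service responds:

```yaml
on_http_response:
  - name: LogResponse
    actions:
      - type: http-request
        config:
          url: https://logs.internal/events
          method: POST
          headers:
            content-type: application/json
          # record the status code and URL of each response
          body: '{ "status": "${res.status_code}", "url": "${req.url}" }'
```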
Want to call Slack or OpenAI?
The `http-request` action only supports internal endpoints by design. But what if you want to call an external or public API like Slack or OpenAI?
Easy: you can front them with an internal endpoint using the ngrok CLI:
```bash
ngrok http https://hooks.slack.com \
  --url https://slack-hooks.internal \
  --host-header rewrite
```
This command sets up an ngrok internal endpoint (`https://slack-hooks.internal`) that forwards traffic to the public Slack API. Don’t forget the `--host-header rewrite` flag: it ensures the upstream Host header is `hooks.slack.com`, not `slack-hooks.internal`.
From the perspective of your Traffic Policy, Slack is now an internal service. Create a Cloud Endpoint and call it directly from your policy like this:
```yaml
on_http_response:
  - name: NotifySlackOnError
    expressions:
      - res.status_code == 500
    actions:
      - type: http-request
        config:
          url: https://slack-hooks.internal/services/T00000000/B00000000/XXXXXXXXXXXXXXXXXXXXXXXX
          method: POST
          headers:
            content-type: application/json
          body: '{ "text": "🚨 500 error on ${req.url}" }'
      - type: custom-response
        config:
          headers:
            content-type: application/json
          status_code: 200
          body: ${res.body}
```
What about OpenAI? Same idea: create an internal endpoint that forwards to the OpenAI API:
```bash
ngrok http https://api.openai.com \
  --url https://openai.internal \
  --host-header rewrite
```
Then update your Cloud Endpoint to use the `http-request` action to call the `chat/completions` endpoint with a static prompt:
```yaml
on_http_request:
  - name: CallOpenAI
    actions:
      - type: http-request
        config:
          url: https://openai.internal/v1/chat/completions
          method: POST
          headers:
            content-type: application/json
            authorization: 'Bearer <YOUR_OPENAI_API_KEY>'
          body: '{ "model": "gpt-4", "messages": [{ "role": "user", "content": "What is the meaning of life?" }] }'
          timeout: 10s
      - type: custom-response
        config:
          headers:
            content-type: application/json
          status_code: 200
          body: ${actions.ngrok.http_request.res.body}
```
And just like that, you’ve set up your Cloud Endpoint to call OpenAI directly, no OpenAI SDK required.
Add resilience with retry logic
What happens if the service you’re calling fails temporarily? The `http-request` action supports automatic retries with full control over when and why to retry.
Requests are retried up to three times. To control when and how retries are decided, you can set a custom CEL expression in the `retry_condition` configuration option. Inside the retry CEL expression, you have access to the following variables:

- `attempts`: the number of attempts so far
- `last_attempt.req`: the last HTTP request object
- `last_attempt.res`: the last HTTP response object
- `last_attempt.error`: any error that occurred (with a `code` and `message`)
Here’s an example CEL expression that retries only on `500` status codes:

```yaml
retry_condition: last_attempt.res.status_code == 500
```
Want to make your internal auth call more resilient? Add a `timeout` and a `retry_condition` that retries on server errors without retrying expected denials like a `401` or `403`:
```yaml
on_http_request:
  - name: ResilientAuth
    actions:
      - type: http-request
        config:
          url: https://auth.internal/validate
          method: POST
          headers:
            content-type: application/json
          body: '{ "token": "${req.headers[\"authorization\"][0]}" }'
          timeout: 5s
          retry_condition: last_attempt.res.status_code >= 500 && last_attempt.res.status_code < 600
          on_error: halt
```
This ensures your auth check won’t fail due to a single blip.
What else can you do with `http-request`? Plenty. Chain internal services. Trigger workflows. Log action results. Validate authentication. Wire up your own mini service mesh with nothing but YAML. Okay, maybe not everything… but that’s where you come in.
First, register your interest in the developer preview for `http-request` and we’ll let you know when it’s active on your account.
Once you’ve had a chance to play with `http-request`, hop into our Discord and let me know how you’re using it. Got feedback? Features you wish it had? I want to hear it all.
Until then, check out our resources:
Happy routing.