Guardrails
Ship to production more confidently with Portkey Guardrails on your requests & responses
LLMs are brittle - not just in API uptimes or their inexplicable 400/500 errors, but also in their core behavior. You can get a response with a 200 status code that completely errors out for your app's pipeline due to mismatched output. With Portkey's Guardrails, we now help you enforce LLM behavior in real-time with our Guardrails on the Gateway pattern.
Using Portkey's Guardrail platform, you can verify that your LLM inputs AND outputs adhere to your specified checks. And since Guardrails are built on top of our Gateway, you can orchestrate your request exactly the way you want - with actions ranging from denying the request, logging the guardrail result, creating an evals dataset, falling back to another LLM or prompt, retrying the request, and more.
Examples of Guardrail checks you can enforce:

Regex match - Check if the request or response text matches a regex pattern
JSON Schema - Check if the response JSON matches a JSON schema
Contains Code - Checks if the content contains code of format SQL, Python, TypeScript, etc.
Custom guardrail - If you are running a custom guardrail currently, you can also integrate it with Portkey
...and many more.
Portkey currently offers 20+ deterministic guardrails like the ones described above, as well as LLM-based guardrails like Detect Gibberish, Scan for Prompt Injection, and more. These guardrails serve as protective barriers that help mitigate risks associated with Gen AI, ensuring its responsible and ethical deployment within organizations.
Putting Portkey Guardrails in production is just a 4-step process:
1. Create Guardrail Checks
2. Create Guardrail Actions
3. Enable Guardrail through Configs
4. Attach the Config to a Request
This flowchart shows how Portkey processes a Guardrails request:
Let's see in detail below:
On the "Guardrails" page, click on Create
and add your preferred Guardrail checks from the right sidebar.
Each Guardrail Check has a custom input field based on its usecase — just add the relevant details to the form and save your check.
This is where you will define a basic orchestration logic for your Guardrail.
| Action | Setting | Behaviour |
| --- | --- | --- |
| Async | TRUE (default) | Run the Guardrail checks asynchronously along with the LLM request. Adds no latency to your request. Useful when you only want to log guardrail checks without affecting the request. |
| Async | FALSE | On Request: run the Guardrail check BEFORE sending the request to the LLM. On Response: run the Guardrail check BEFORE sending the response to the user. Adds latency to the request. Useful when your Guardrail is critical and you want more orchestration over your request based on the Guardrail result. |
| Deny | TRUE | On Request & Response: if any of the Guardrail checks FAIL, the request will be killed with a 446 status code. If all of the Guardrail checks SUCCEED, the request/response will be sent further with a 200 status code. Useful when your Guardrails are critical and the request cannot be run if they fail. We would advise running this action on a subset of your requests first to see the impact. |
| Deny | FALSE (default) | On Request & Response: if any of the Guardrail checks FAIL, the request will STILL be sent, but with a 246 status code. If all of the Guardrail checks SUCCEED, the request/response will be sent further with a 200 status code. Useful when you want to log the Guardrail result but do not want it to affect your result. |
| On Success | Send Feedback | If all of the Guardrail checks PASS, append your custom-defined feedback to the request. We recommend setting up this action; it will help you build an "Evals dataset" of Guardrail results on your requests over time. |
| On Failure | Send Feedback | If any of the Guardrail checks FAIL, append your custom feedback to the request. We recommend setting up this action; it will help you build an "Evals dataset" of Guardrail results on your requests over time. |
Set the relevant actions you want with your checks, name your Guardrail and save it! When you save the Guardrail, you will get an associated $Guardrail_ID that you can then add to your request.
This is where Portkey's magic comes into play. The Guardrail you created above is not yet an active Guardrail, because it is not attached to any request.
Configs is one of Portkey's most powerful features and is used to define all kinds of request orchestration - everything from caching, retries, fallbacks, timeouts, to load balancing.
Here, you can also define where your Guardrail should run: before the request OR after the request, using the following hooks:

| Hook | Config key | Value | Description |
| --- | --- | --- | --- |
| Before Request Hook | `before_request_hooks` | `[{"id":"$guardrail_id"}]` | Runs the Guardrail checks & actions on the INPUT. |
| After Request Hook | `after_request_hooks` | `[{"id":"$guardrail_id"}]` | Runs the Guardrail checks & actions on the OUTPUT. |
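As a quick sketch (assuming a saved Guardrail with a placeholder ID `pg-xxxxx`), a Config that runs the Guardrail on both the input and the output would contain these keys; it is shown here as a Python dict, but the same structure can be saved as JSON in the Portkey UI:

```python
# Minimal sketch of a Portkey Config that attaches a saved Guardrail.
# "pg-xxxxx" is a placeholder for the Guardrail ID you get when saving a Guardrail.
guardrail_config = {
    "before_request_hooks": [{"id": "pg-xxxxx"}],  # run checks & actions on the INPUT
    "after_request_hooks": [{"id": "pg-xxxxx"}],   # run checks & actions on the OUTPUT
}
```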
For asynchronous guardrails (async = TRUE), Portkey returns the standard, default status codes from the LLM providers, because the Guardrails verdict does not affect how you orchestrate your requests. Portkey will only log the Guardrail result for you.

But for synchronous requests (async = FALSE), Portkey can orchestrate your requests based on the Guardrail verdict. The behaviour depends on:

- the Guardrail Check Verdict (PASS or FAIL), AND
- the Guardrail Action DENY setting (TRUE or FALSE)

Portkey sends different request status codes corresponding to your set Guardrail behaviour:
| Verdict | DENY Setting | Status Code | Description |
| --- | --- | --- | --- |
| PASS | FALSE | 200 | Guardrails have passed, request will be processed regardless |
| PASS | TRUE | 200 | Guardrails have passed, request will be processed regardless |
| FAIL | FALSE | 246 | Guardrails have failed, but the request should still be processed. Portkey introduces a new status code to indicate this state. |
| FAIL | TRUE | 446 | Guardrails have failed, and the request should not be processed. Portkey introduces a new status code to indicate this state. |
246 & 446 Status Codes

Now, while instantiating your Portkey client or while sending headers, just pass the Config ID.
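For instance, with the Python SDK, attaching the Config is roughly a one-liner; the sketch below assumes a saved Config ID like `pc-guardrails-xxx` (a placeholder) and a Config that already points at your provider:

```python
from portkey_ai import Portkey

# Sketch: attach the Guardrail-enabled Config while instantiating the client.
# "pc-guardrails-xxx" is a placeholder Config ID from the Portkey UI.
portkey = Portkey(
    api_key="PORTKEY_API_KEY",
    config="pc-guardrails-xxx",
)

response = portkey.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello!"}],
)
# With deny = FALSE, a failed guardrail surfaces as a 246 response you can
# inspect in logs; with deny = TRUE, the request is rejected with a 446 status.
```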
Portkey Logs will show you detailed information about Guardrail results for each request.
Feedback & Guardrails
tab on the log drawer, you can seeOverview: How many checks passed
and how many failed
Verdict: Guardrail verdict for each of the checks in your Guardrail
Latency: Round trip time for each check in your Guardrail
Portkey will also show the feedback object logged for each request:

- Value: the numerical feedback value you passed
- Weight: the numerical feedback weight
- Metadata Key & Value: any custom metadata sent with the feedback
- successfulChecks: which checks associated with this request passed
- failedChecks: which checks associated with this request failed
- erroredChecks: whether any checks errored out along the way
On Portkey, you can also create the Guardrails in code and add them to your Configs. Read more about this here:
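As a rough, illustrative sketch of what a Guardrail defined directly inside a Config might look like: the check ID (`default.regexMatch`) and its parameter names below are assumptions for illustration only, so confirm the exact schema in the reference linked above.

```python
# Illustrative only: a Guardrail declared inline in a Config rather than
# referenced by a saved Guardrail ID. Check IDs and parameter names here
# are placeholders; the deny/async/feedback fields mirror the actions
# described in the table earlier.
raw_guardrail_config = {
    "before_request_hooks": [
        {
            "type": "guardrail",
            "checks": [
                {"id": "default.regexMatch", "parameters": {"rule": "order-\\d+"}}
            ],
            "deny": False,   # log the result (246 on failure) instead of blocking (446)
            "async": False,  # run the check synchronously, before the LLM call
            "on_success": {"feedback": {"value": 1, "weight": 1}},
            "on_fail": {"feedback": {"value": -1, "weight": 1}},
        }
    ]
}
```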
If you already have a custom guardrail pipeline where you send your inputs/outputs for evaluation, you can also integrate it with Portkey using a modular, custom webhook! Read more here:
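To give a feel for the shape of such an integration, here is a minimal sketch of a webhook endpoint; the exact payload Portkey sends and the expected response shape (a boolean "verdict" field is assumed here) should be confirmed in the webhook reference linked above.

```python
# Illustrative sketch of a custom guardrail webhook endpoint (FastAPI).
# Portkey calls your URL with the request/response data; your service
# runs its own evaluation and returns a verdict.
from fastapi import FastAPI, Request

app = FastAPI()

@app.post("/guardrail-webhook")
async def guardrail_webhook(request: Request):
    payload = await request.json()
    # Run your own evaluation over the incoming input/output text.
    text = str(payload)
    verdict = "forbidden-phrase" not in text.lower()
    # Assumed response contract: a JSON body with a boolean "verdict".
    return {"verdict": verdict}
```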
Typical checks you can enforce with Guardrails include:

Prompt Injection Checks: Preventing inputs that could alter the behavior of the AI model or manipulate its responses.
Moderation Checks: Ensuring responses do not contain offensive, harmful, or inappropriate content.
Compliance Checks: Verifying that inputs and outputs comply with regulatory requirements or organizational policies.
Security Checks: Blocking requests that contain potentially harmful content, such as SQL injection attempts or cross-site scripting (XSS) payloads.
By appropriately configuring Guardrail Actions, you can maintain the integrity and reliability of your AI app, ensuring that only safe and compliant requests are processed.
Portkey also integrates with popular third-party Guardrail platforms. Just add their API keys to Portkey and you can enable their guardrail policies on your Portkey calls!
You can create these Configs in the Portkey UI, save them, and get an associated Config ID that you can attach to your requests.
For more, refer to the Configs documentation.