Retrieve Prompts
See how to use Portkey's prompt templates with OpenAI (or any other provider) SDKs
This feature is available on all Portkey plans.
You can retrieve your saved prompts on Portkey using the /prompts/$PROMPT_ID/render
endpoint. Portkey returns JSON containing your prompt or messages body along with all the saved parameters, which you can use directly in any request.
This is helpful if you are required to use provider SDKs and cannot use the Portkey SDK in production. (See the example below of using Portkey prompt templates with the OpenAI SDK.)
Using the Render Endpoint/Method
1. Make a request to https://api.portkey.ai/v1/prompts/$PROMPT_ID/render with your prompt ID
2. Pass your Portkey API key with x-portkey-api-key in the header
3. Send the variables in your payload with { "variables": { "VARIABLE_NAME": "VARIABLE_VALUE" } }
That's it! See it in action:
curl -X POST "https://api.portkey.ai/v1/prompts/$PROMPT_ID/render" \
-H "Content-Type: application/json" \
-H "x-portkey-api-key: $PORTKEY_API_KEY" \
-d '{
"variables": {"movie":"Dune 2"}
}'
The Output:
{
  "success": true,
  "data": {
    "model": "gpt-4",
    "n": 1,
    "top_p": 1,
    "max_tokens": 256,
    "temperature": 0,
    "presence_penalty": 0,
    "frequency_penalty": 0,
    "messages": [
      {
        "role": "system",
        "content": "You're a helpful assistant."
      },
      {
        "role": "user",
        "content": "Who directed Dune 2?"
      }
    ]
  }
}
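The data object in the render response is already shaped like a chat-completions request body, so it can be passed to an SDK without restructuring. A minimal sketch of unpacking the response (the object below mirrors the example output above, hard-coded here instead of fetched over the network):

```javascript
// Minimal sketch: unpack a render response. The `data` object is shaped
// like a chat-completions request body and can be used as-is.
// This response object is hard-coded to mirror the example output above.
const renderResponse = {
  success: true,
  data: {
    model: "gpt-4",
    temperature: 0,
    max_tokens: 256,
    messages: [
      { role: "system", content: "You're a helpful assistant." },
      { role: "user", content: "Who directed Dune 2?" }
    ]
  }
};

if (!renderResponse.success) throw new Error("Prompt render failed");

// This object can go straight into a chat.completions.create(...) call
const requestBody = renderResponse.data;
console.log(requestBody.model);           // "gpt-4"
console.log(requestBody.messages.length); // 2
```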
Updating Prompt Params While Retrieving the Prompt
If you want to change any model params (like temperature, messages body, etc.) while retrieving your prompt from Portkey, you can send the override params in your render payload.
Portkey will send back your prompt with the overridden params, without making any changes to the saved prompt on Portkey.
curl -X POST "https://api.portkey.ai/v1/prompts/$PROMPT_ID/render" \
-H "Content-Type: application/json" \
-H "x-portkey-api-key: $PORTKEY_API_KEY" \
-d '{
"variables": {"movie":"Dune 2"},
"model": "gpt-3.5-turbo",
"temperature": 2
}'
Based on the above snippet, the model and temperature params in the retrieved prompt will be overridden with the newly passed values.
The New Output:
{
  "success": true,
  "data": {
    "model": "gpt-3.5-turbo",
    "n": 1,
    "top_p": 1,
    "max_tokens": 256,
    "temperature": 2,
    "presence_penalty": 0,
    "frequency_penalty": 0,
    "messages": [
      {
        "role": "system",
        "content": "You're a helpful assistant."
      },
      {
        "role": "user",
        "content": "Who directed Dune 2?"
      }
    ]
  }
}
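Conceptually, the override behaves like a shallow merge of the params in your render payload over the saved prompt's params, with the saved prompt left untouched. An illustrative sketch of that behavior (not Portkey's actual implementation):

```javascript
// Illustrative sketch of the override semantics: params sent in the render
// payload take precedence over the saved prompt's params. This is a model
// of the observed behavior, not Portkey's actual implementation.
const savedPromptParams = { model: "gpt-4", temperature: 0, max_tokens: 256 };
const overrides         = { model: "gpt-3.5-turbo", temperature: 2 };

// Shallow merge: override values win; untouched params are kept as saved.
const rendered = { ...savedPromptParams, ...overrides };

console.log(rendered.model);       // "gpt-3.5-turbo"
console.log(rendered.temperature); // 2
console.log(rendered.max_tokens);  // 256 (unchanged)
```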
Using the render Output in a New Request
Here's how you can take the output from the render API and use it to make a call. We'll take the OpenAI SDK as an example, but you can use the output similarly with any other provider's SDK.
import Portkey from 'portkey-ai';
import OpenAI from 'openai';

// Retrieve the prompt from Portkey
const portkey = new Portkey({
  apiKey: "PORTKEY_API_KEY"
});

async function getPromptTemplate() {
  const render_response = await portkey.prompts.render({
    promptID: "PROMPT_ID",
    variables: { "movie": "Dune 2" }
  });
  return render_response.data;
}

// Make a call to OpenAI with the retrieved prompt
const openai = new OpenAI({
  apiKey: 'OPENAI_API_KEY',
  baseURL: 'https://api.portkey.ai/v1',
  defaultHeaders: {
    'x-portkey-provider': 'openai',
    'x-portkey-api-key': 'PORTKEY_API_KEY',
    'Content-Type': 'application/json',
  }
});

async function main() {
  const PROMPT_TEMPLATE = await getPromptTemplate();
  const chatCompletion = await openai.chat.completions.create(PROMPT_TEMPLATE);
  console.log(chatCompletion.choices[0]);
}

main();