How to use OpenAI SDK with Portkey Prompt Templates

Portkey's Prompt Playground allows you to test and tinker with various hyperparameters without any external dependencies and deploy them to production seamlessly. Moreover, all team members can use the same prompt template, ensuring that everyone works from the same source of truth.

With Portkey's APIs, you can use these prompt templates right from within the OpenAI SDK.

1. Creating a Prompt Template

Portkey's Prompt Playground enables you to experiment with various LLM providers. It acts as a definitive source of truth for your team and versions each snapshot of model parameters, allowing for easy rollback. We want to create a chat completion prompt with gpt-4 that tells a story about any user-desired topic.

To do this:

  1. Go to www.portkey.ai

  2. Open the Dashboard.

  3. Click on Prompts and then the Create button.

  4. You are now in the Prompt Playground.

Spend some time playing around with different prompt inputs and changing the hyperparameters. The following settings seemed most suitable and generated a story that met expectations.

The list of parameters in my prompt template:

System: You are a very good storyteller who covers various topics for the kids. You narrate them in very intriguing and interesting ways. You tell the story in less than 3 paragraphs.

User: Tell me a story about {{topic}}

Max Tokens: 512

Temperature: 0.9

Frequency Penalty: -0.2

When you look closely at the content for the User role, you find {{topic}}. Portkey treats this as a dynamic variable, so a string can be passed to the prompt at runtime. This makes the prompt far more useful, since it can generate stories on any topic.
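To build an intuition for what the {{topic}} placeholder does, here is a minimal sketch of mustache-style substitution. Portkey performs this substitution server-side when it renders the template; the `renderTemplate` helper below is purely illustrative, not part of any Portkey API.

```javascript
// Illustrative only: fill {{name}} placeholders in a template string
// from a variables object, the way a render step conceptually works.
function renderTemplate(template, variables) {
  return template.replace(/\{\{(\w+)\}\}/g, (_, name) =>
    name in variables ? String(variables[name]) : ''
  );
}

const rendered = renderTemplate('Tell me a story about {{topic}}', { topic: 'Tom and Jerry' });
console.log(rendered); // → Tell me a story about Tom and Jerry
```

Because the variable is resolved at runtime, the same saved template serves every topic your users ask for.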

Once you are happy with the Prompt Template, hit Save Prompt. The Prompts page displays saved prompt templates and their corresponding prompt ID, serving as a reference point in our code.

Next up, let’s see how to use the created prompt template to generate chat completions through OpenAI SDK.

2. Retrieving the prompt template

Fire up your code editor and import the request client, axios. This will let you POST to Portkey's render endpoint and retrieve prompt details that can be used with the OpenAI SDK.

import axios from 'axios';

const PROMPT_ID = '<prompt-id>';
const PORTKEYAI_API_KEY = '<api_key>';

const url = `https://api.portkey.ai/v1/prompts/${PROMPT_ID}/render`;

const headers = {
  'Content-Type': 'application/json',
  'x-portkey-api-key': PORTKEYAI_API_KEY
};

const data = {
  variables: { topic: 'Tom and Jerry' }
};

let {
  data: { data: promptDetail }
} = await axios.post(url, data, { headers });

console.log(promptDetail);

We get prompt details as a JS object logged to the console:

{
  model: 'gpt-4',
  n: 1,
  top_p: 1,
  max_tokens: 512,
  temperature: 0.9,
  presence_penalty: 0,
  frequency_penalty: -0.2,
  messages: [
    {
      role: 'system',
      content: 'You are a very good storyteller who covers various topics for the kids. You narrate them in very intriguing and interesting ways.  You tell the story in less than 3 paragraphs.'
    },
    { role: 'user', content: 'Tell me a story about Tom and Jerry' }
  ]
}
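Before handing this object to the OpenAI SDK, a quick sanity check can save a confusing error later. The helper below is a defensive sketch of my own (not part of Portkey's API): it confirms the rendered prompt carries the fields a chat completions call needs.

```javascript
// Hypothetical helper (not provided by Portkey): verify a rendered prompt
// detail object has a model name and a non-empty, well-formed messages array.
function isUsablePromptDetail(detail) {
  return Boolean(
    detail &&
    typeof detail.model === 'string' &&
    Array.isArray(detail.messages) &&
    detail.messages.length > 0 &&
    detail.messages.every((m) => m.role && typeof m.content === 'string')
  );
}
```

If the check fails, log the raw render response rather than calling the model, so a misconfigured template or wrong prompt ID surfaces immediately.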

3. Sending requests through OpenAI SDK

In this section, you will take the prompt details object retrieved earlier and pass it as an argument to the OpenAI SDK instance when making the chat completions call.

Let’s import the necessary libraries and create a client instance from the OpenAI SDK.

import OpenAI from 'openai';
import { createHeaders, PORTKEY_GATEWAY_URL } from 'portkey-ai';

const client = new OpenAI({
  apiKey: 'USES_VIRTUAL_KEY',
  baseURL: PORTKEY_GATEWAY_URL,
  defaultHeaders: createHeaders({
    provider: 'openai',
    apiKey: `${PORTKEYAI_API_KEY}`,
    virtualKey: `${OPENAI_VIRTUAL_KEY}`
  })
});

The prompt details we retrieved are passed as an argument to the chat completions creation method.

let TomAndJerryStory = await generateStory('Tom and Jerry');
console.log(TomAndJerryStory);

async function generateStory(topic) {
  const data = {
    variables: { topic: String(topic) }
  };

  let {
    data: { data: promptDetail }
  } = await axios.post(url, data, { headers });

  const chatCompletion = await client.chat.completions.create(promptDetail);

  return chatCompletion.choices[0].message.content;
}

This time, run your code and see the story we set out to generate logged to the console!

In the heart of a bustling city, lived an eccentric cat named Tom and a witty little mouse named Jerry. Tom, always trying to catch Jerry, maneuvered himself th...(truncated)
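If the template and its variables rarely change between requests, the extra round trip to the render endpoint can be skipped. Here is a small optimization sketch (an assumption on my part, not something the guide requires) that caches the render result per topic:

```javascript
// Cache the render result per topic so repeated calls for the same topic
// reuse one request instead of hitting the render endpoint every time.
// Storing the value returned by fetchDetail (e.g. a promise) also
// deduplicates concurrent requests for the same topic.
const renderCache = new Map();

function getPromptDetail(topic, fetchDetail) {
  if (!renderCache.has(topic)) {
    renderCache.set(topic, fetchDetail(topic));
  }
  return renderCache.get(topic);
}
```

Here `fetchDetail` would wrap the axios.post call to the render endpoint shown earlier. Remember to clear the cache (or key it by prompt version) whenever you publish a new version of the template.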

Bonus: Using Portkey SDK

The official Portkey Client SDK has a prompts completions method whose signature is similar to OpenAI's chat completions. You can invoke a prompt template simply by passing the promptID and variables parameters.

const promptCompletion = await portkey.prompts.completions.create({
  promptID: 'Your Prompt ID',
  variables: {
    topic: 'Tom and Jerry'
  }
});

Conclusion

We’ve now written a Node.js program that retrieves prompt details from the Prompt Playground using a prompt ID, then made a chat completion call through the OpenAI SDK to generate a story on the desired topic.

We can use this approach to focus on improving prompt quality across all supported LLMs, simply referencing the templates at runtime.

Here is the entire code:
import axios from 'axios';
import OpenAI from 'openai';
import { createHeaders, PORTKEY_GATEWAY_URL } from 'portkey-ai';

const PROMPT_ID = 'xxxxxx';
const PORTKEYAI_API_KEY = 'xxxxx';
const OPENAI_VIRTUAL_KEY = 'xxxx';

const url = `https://api.portkey.ai/v1/prompts/${PROMPT_ID}/render`;

const headers = {
  'Content-Type': 'application/json',
  'x-portkey-api-key': PORTKEYAI_API_KEY
};

const client = new OpenAI({
  apiKey: 'USES_VIRTUAL_KEY',
  baseURL: PORTKEY_GATEWAY_URL,
  defaultHeaders: createHeaders({
    provider: 'openai',
    apiKey: `${PORTKEYAI_API_KEY}`,
    virtualKey: `${OPENAI_VIRTUAL_KEY}`
  })
});

let TomAndJerryStory = await generateStory('Tom and Jerry');
console.log(TomAndJerryStory);

async function generateStory(topic) {
  const data = {
    variables: { topic: String(topic) }
  };

  let {
    data: { data: promptDetail }
  } = await axios.post(url, data, { headers });

  const chatCompletion = await client.chat.completions.create(promptDetail);

  return chatCompletion.choices[0].message.content;
}

We use axios to make a POST call to the /prompts/${PROMPT_ID}/render endpoint, along with headers (which include the Portkey API Key) and a body containing the variables required by the prompt template.

For more information about the Render API, refer to the docs.

We import portkey-ai to use its utilities to change the base URL and the default headers. If you are wondering what virtual keys are, refer to the Portkey Vault documentation.