Learn to integrate OpenAI with Portkey, enabling seamless completions, prompt management, and advanced functionalities like streaming, function calling and fine-tuning.
Portkey has native integrations with the OpenAI SDKs for Node.js and Python, as well as its REST API. For OpenAI integration using other frameworks, explore Portkey's partner integrations.
Provider Slug: openai
Using the Portkey Gateway
To integrate the Portkey gateway with OpenAI:
Set the baseURL to the Portkey Gateway URL
Include Portkey-specific headers such as provider, apiKey, and others.
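The Portkey-specific headers follow the x-portkey-* naming convention visible in the cURL example later in this guide. As a plain-Python illustration of what a header helper like createHeaders produces (a sketch, not the SDK's actual implementation):

```python
# Sketch of the headers a Portkey header helper builds; the x-portkey-*
# names mirror the cURL example in this guide, not the SDK source.
def build_portkey_headers(provider: str, api_key: str) -> dict:
    """Prefix each Portkey option with 'x-portkey-' as the gateway expects."""
    return {
        "x-portkey-provider": provider,
        "x-portkey-api-key": api_key,
    }

headers = build_portkey_headers("openai", "PORTKEY_API_KEY")
print(headers["x-portkey-provider"])  # openai
```

In the SDKs, createHeaders does this bookkeeping for you, so you only pass plain option names like provider and apiKey.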
Here's how to apply it to a chat completion request:
Install the Portkey SDK in your application
npm i --save portkey-ai
Next, insert the Portkey-specific code as shown in the highlighted lines into your OpenAI completion calls. PORTKEY_GATEWAY_URL is Portkey's gateway URL for routing your requests, and createHeaders is a convenience function that generates the headers object.
import OpenAI from 'openai'; // We're using the v4 SDK
import { PORTKEY_GATEWAY_URL, createHeaders } from 'portkey-ai'
const openai = new OpenAI({
apiKey: 'OPENAI_API_KEY', // defaults to process.env["OPENAI_API_KEY"],
baseURL: PORTKEY_GATEWAY_URL,
defaultHeaders: createHeaders({
provider: "openai",
apiKey: "PORTKEY_API_KEY" // defaults to process.env["PORTKEY_API_KEY"]
})
});
async function main() {
const chatCompletion = await openai.chat.completions.create({
messages: [{ role: 'user', content: 'Say this is a test' }],
model: 'gpt-4-turbo',
});
console.log(chatCompletion.choices);
}
main();
Install the Portkey SDK in your application
pip install portkey-ai
from openai import OpenAI
from portkey_ai import PORTKEY_GATEWAY_URL, createHeaders
client = OpenAI(
api_key="OPENAI_API_KEY", # defaults to os.environ.get("OPENAI_API_KEY")
base_url=PORTKEY_GATEWAY_URL,
default_headers=createHeaders(
provider="openai",
api_key="PORTKEY_API_KEY" # defaults to os.environ.get("PORTKEY_API_KEY")
)
)
chat_complete = client.chat.completions.create(
model="gpt-4-turbo",
messages=[{"role": "user", "content": "Say this is a test"}],
)
print(chat_complete.choices[0].message.content)
This request will be automatically logged by Portkey. You can view this in your logs dashboard. Portkey logs the tokens utilized, execution time, and cost for each request. Additionally, you can delve into the details to review the precise request and response data.
Track End-User IDs
Portkey allows you to track user IDs passed with the user parameter in OpenAI requests, enabling you to monitor user-level costs, requests, and more.
const chatCompletion = await portkey.chat.completions.create({
messages: [{ role: "user", content: "Say this is a test" }],
model: "gpt-4o",
user: "user_12345",
});
response = portkey.chat.completions.create(
model="gpt-4o",
messages=[{"role": "user", "content": "Say this is a test"}],
user="user_123456"
)
When you include the user parameter in your requests, Portkey logs will display the associated user ID, as shown in the image below:
In addition to the user parameter, Portkey allows you to send arbitrary custom metadata with your requests. This powerful feature enables you to associate additional context or information with each request, which can be useful for analysis, debugging, or other custom use cases.
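Like the other Portkey options, metadata travels in an x-portkey-* header as a JSON object. A minimal sketch (the x-portkey-metadata header name follows the same convention as the other headers in this guide; confirm the exact key and any reserved fields such as _user in Portkey's metadata docs):

```python
import json

# Sketch: arbitrary metadata serialized into a Portkey request header.
# Header name and the "_user" field are assumptions based on the
# x-portkey-* pattern used elsewhere in this guide.
metadata = {
    "_user": "user_12345",        # ties the request to an end user
    "environment": "staging",     # any custom context you want logged
    "request_source": "docs-example",
}
metadata_header = {"x-portkey-metadata": json.dumps(metadata)}
print(metadata_header)
```

Each key-value pair then shows up alongside the request in your Portkey logs, where it can be filtered and analyzed.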
Using the Prompts API
Create a prompt template with variables and set the hyperparameters.
Use this prompt in your codebase using the Portkey SDK.
import Portkey from 'portkey-ai'
const portkey = new Portkey({
apiKey: "PORTKEY_API_KEY",
})
// Make the prompt creation call with the variables
const promptCompletion = await portkey.prompts.completions.create({
promptID: "Your Prompt ID",
variables: {
// The variables specified in the prompt
}
})
// We can also override the hyperparameters
const promptCompletion = await portkey.prompts.completions.create({
promptID: "Your Prompt ID",
variables: {
// The variables specified in the prompt
},
max_tokens: 250,
presence_penalty: 0.2
})
from portkey_ai import Portkey
client = Portkey(
api_key="PORTKEY_API_KEY", # defaults to os.environ.get("PORTKEY_API_KEY")
)
prompt_completion = client.prompts.completions.create(
prompt_id="Your Prompt ID",
variables={
# The variables specified in the prompt
}
)
print(prompt_completion)
# We can also override the hyperparameters
prompt_completion = client.prompts.completions.create(
prompt_id="Your Prompt ID",
variables={
# The variables specified in the prompt
},
max_tokens=250,
presence_penalty=0.2
)
print(prompt_completion)
curl -X POST "https://api.portkey.ai/v1/prompts/:PROMPT_ID/completions" \
-H "Content-Type: application/json" \
-H "x-portkey-api-key: $PORTKEY_API_KEY" \
-d '{
"variables": {
# The variables to use
},
"max_tokens": 250, # Optional
"presence_penalty": 0.2 # Optional
}'
This keeps your code readable and lets you update prompts from the UI without touching the codebase.
Advanced Use Cases
Streaming Responses
Portkey supports streaming responses via Server-Sent Events (SSE).
import OpenAI from 'openai';
import { PORTKEY_GATEWAY_URL, createHeaders } from 'portkey-ai'
const openai = new OpenAI({
baseURL: PORTKEY_GATEWAY_URL,
defaultHeaders: createHeaders({
provider: "openai",
apiKey: "PORTKEY_API_KEY" // defaults to process.env["PORTKEY_API_KEY"]
})
});
async function main() {
const stream = await openai.chat.completions.create({
model: 'gpt-4',
messages: [{ role: 'user', content: 'Say this is a test' }],
stream: true,
});
for await (const chunk of stream) {
process.stdout.write(chunk.choices[0]?.delta?.content || '');
}
}
main();
from openai import OpenAI
from portkey_ai import PORTKEY_GATEWAY_URL, createHeaders
client = OpenAI(
api_key="OPENAI_API_KEY", # defaults to os.environ.get("OPENAI_API_KEY")
base_url=PORTKEY_GATEWAY_URL,
default_headers=createHeaders(
provider="openai",
api_key="PORTKEY_API_KEY" # defaults to os.environ.get("PORTKEY_API_KEY")
)
)
chat_complete = client.chat.completions.create(
model="gpt-4",
messages=[{"role": "user", "content": "Say this is a test"}],
stream=True
)
for chunk in chat_complete:
    print(chunk.choices[0].delta.content or "", end="", flush=True)
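Since each SSE chunk carries only a delta, assembling the full reply is just string concatenation. A self-contained sketch using stubbed chunks (no network call; the objects mirror the shape of the SDK's streaming chunks, where delta.content may be None on the final chunk):

```python
from types import SimpleNamespace

# Stub chunks shaped like the OpenAI SDK's streaming objects;
# delta.content can be None, so we coalesce it to "".
chunks = [
    SimpleNamespace(choices=[SimpleNamespace(delta=SimpleNamespace(content=c))])
    for c in ["This ", "is ", "a ", "test", None]
]

full_reply = "".join(chunk.choices[0].delta.content or "" for chunk in chunks)
print(full_reply)  # This is a test
```

The same accumulation pattern applies unchanged when the chunks come from a real streamed chat.completions.create call through the gateway.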
Using Vision Models
Portkey's multimodal Gateway fully supports OpenAI vision models as well. See this guide for more info:
Function Calling
Function calls within your OpenAI or Portkey SDK operations remain standard. These logs will appear in Portkey, highlighting the utilized functions and their outputs.
Additionally, you can define functions within your prompts and invoke the portkey.prompts.completions.create method as above.
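Because function calling stays standard, a tool definition is just OpenAI's usual schema passed through the gateway untouched. A hypothetical get_weather tool, shown as the plain dict you would pass via the tools parameter (the tool name and fields here are illustrative, not from the original guide):

```python
import json

# Hypothetical tool definition in OpenAI's standard function-calling schema.
get_weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name, e.g. Paris"},
            },
            "required": ["city"],
        },
    },
}

# Passed as: client.chat.completions.create(model=..., messages=...,
#                                           tools=[get_weather_tool])
print(json.dumps(get_weather_tool, indent=2))
```

Portkey logs the tool definitions sent with the request and any tool_calls returned in the response.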
Fine-Tuning
Please refer to our fine-tuning guides to take advantage of Portkey's advanced fine-tuning capabilities.
Image Generation
Portkey supports multiple modalities for OpenAI, and you can make image generation requests through Portkey's AI Gateway the same way as completion calls.
// Define the OpenAI client as shown above
const image = await openai.images.generate({
model:"dall-e-3",
prompt:"Lucy in the sky with diamonds",
size:"1024x1024"
})
# Define the OpenAI client as shown above
image = openai.images.generate(
model="dall-e-3",
prompt="Lucy in the sky with diamonds",
size="1024x1024"
)
Portkey's AI gateway captures information about each request on your Portkey dashboard. On the logs screen, you can view this request along with its full request and response payloads.
Audio - Transcription, Translation, and Text-to-Speech
Portkey's multimodal Gateway also supports the audio methods of the OpenAI API. Check out the guides below for more info:
Managing OpenAI Projects & Organizations in Portkey
When integrating OpenAI with Portkey, you can specify your OpenAI organization and project IDs along with your API key. This is particularly useful if you belong to multiple organizations or are accessing projects through a legacy user API key.
Specifying the organization and project IDs helps you maintain better control over your access rules, usage, and costs.
In Portkey, you can add your organization and project details by:
Creating your Virtual Key
Defining a Gateway Config
Passing Details in a Request
Let's explore each method in more detail.
Using Virtual Keys
When selecting OpenAI from the dropdown menu while creating a virtual key, Portkey automatically displays optional fields for the organization ID and project ID alongside the API key field.
Portkey takes budget management a step further than OpenAI. While OpenAI allows setting budget limits per project, Portkey enables you to set budget limits for each virtual key you create. For more information on budget limits, refer to this documentation:
Using The Gateway Config
You can also specify the organization and project details in the gateway config, either at the root level or within a specific target.
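For example, a root-level config might look like the following sketch (the openai_organization and openai_project key names mirror the SDK parameters shown below; verify the exact schema in Portkey's config reference):

```json
{
  "provider": "openai",
  "api_key": "OPENAI_API_KEY",
  "openai_organization": "org-xxxxxxxxx",
  "openai_project": "proj_xxxxxxxxx"
}
```

Placing the same keys inside a specific target instead scopes the organization and project to just that target's requests.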
from portkey_ai import Portkey
portkey = Portkey(
api_key="PORTKEY_API_KEY",
provider="openai",
authorization="Bearer OPENAI_API_KEY",
openai_organization="org-xxxxxxxxx",
openai_project="proj_xxxxxxxxx",
)
chat_complete = portkey.chat.completions.create(
model="gpt-4o",
messages=[{"role": "user", "content": "Say this is a test"}],
)
print(chat_complete.choices[0].message.content)
import Portkey from "portkey-ai";
const portkey = new Portkey({
apiKey: "PORTKEY_API_KEY",
provider: "openai",
Authorization: "Bearer OPENAI_API_KEY",
openaiOrganization: "org-xxxxxxxxxxx",
openaiProject: "proj_xxxxxxxxxxxxx",
});
async function main() {
const chatCompletion = await portkey.chat.completions.create({
messages: [{ role: "user", content: "Say this is a test" }],
model: "gpt-4o",
});
console.log(chatCompletion.choices);
}
main();
Portkey Features
Portkey supports its full range of features via the OpenAI SDK, so you don't need to migrate away from it.
Please find more information in the relevant sections: