Portkey provides a robust and secure gateway to facilitate the integration of various Large Language Models (LLMs) into your applications, including Anthropic's Claude APIs.
With Portkey, you can take advantage of features like fast AI gateway access, observability, prompt management, and more, all while ensuring the secure management of your LLM API keys through a virtual key system.
Provider Slug: anthropic
Portkey SDK Integration with Anthropic
Portkey provides a consistent API to interact with models from various providers. To integrate Anthropic with Portkey:
1. Install the Portkey SDK
Add the Portkey SDK to your application to interact with Anthropic's API through Portkey's gateway.
npm install --save portkey-ai
pip install portkey-ai
2. Initialize Portkey with the Virtual Key
To use Anthropic with Portkey, get your API key from the Anthropic console, then add it to Portkey to create your Anthropic virtual key.
import Portkey from 'portkey-ai'
const portkey = new Portkey({
  apiKey: "PORTKEY_API_KEY", // defaults to process.env["PORTKEY_API_KEY"]
  virtualKey: "VIRTUAL_KEY" // Your Anthropic Virtual Key
})
from portkey_ai import Portkey
portkey = Portkey(
    api_key="PORTKEY_API_KEY",  # Replace with your Portkey API key
    virtual_key="VIRTUAL_KEY"   # Replace with your virtual key for Anthropic
)
from openai import OpenAI
from portkey_ai import PORTKEY_GATEWAY_URL, createHeaders
client = OpenAI(
    api_key="ANTHROPIC_API_KEY",
    base_url=PORTKEY_GATEWAY_URL,
    default_headers=createHeaders(
        api_key="PORTKEY_API_KEY",
        provider="anthropic"
    )
)
import OpenAI from "openai";
import { PORTKEY_GATEWAY_URL, createHeaders } from "portkey-ai";
const client = new OpenAI({
  apiKey: "ANTHROPIC_API_KEY",
  baseURL: PORTKEY_GATEWAY_URL,
  defaultHeaders: createHeaders({
    provider: "anthropic",
    apiKey: "PORTKEY_API_KEY",
  }),
});
3. Invoke Chat Completions with Anthropic
Use the Portkey instance to send requests to Anthropic. You can also override the virtual key directly in the API call if needed (see the sketch after the examples below).
const chatCompletion = await portkey.chat.completions.create({
  messages: [{ role: 'user', content: 'Say this is a test' }],
  model: 'claude-3-opus-20240229',
  max_tokens: 250 // Required field for Anthropic
});
console.log(chatCompletion.choices[0].message.content);
chat_completion = portkey.chat.completions.create(
    messages=[{"role": "user", "content": "Say this is a test"}],
    model="claude-3-opus-20240229",
    max_tokens=250  # Required field for Anthropic
)
print(chat_completion.choices[0].message.content)
chat_completion = client.chat.completions.create(
    messages=[{"role": "user", "content": "Say this is a test"}],
    model="claude-3-opus-20240229",
    max_tokens=250  # Required field for Anthropic
)
print(chat_completion.choices[0].message.content)
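For instance, here's a minimal sketch of overriding the virtual key for a single request, assuming your SDK version exposes the with_options helper (the alternate key name below is a placeholder):

# A sketch of a per-request virtual key override; "ANOTHER_VIRTUAL_KEY" is a
# placeholder for a second Anthropic virtual key from your Portkey dashboard.
chat_completion = portkey.with_options(
    virtual_key="ANOTHER_VIRTUAL_KEY"
).chat.completions.create(
    messages=[{"role": "user", "content": "Say this is a test"}],
    model="claude-3-opus-20240229",
    max_tokens=250
)
print(chat_completion.choices[0].message.content)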
With Portkey, we make Anthropic models interoperable with the OpenAI schema and SDK methods. So, instead of passing the system prompt separately (as Anthropic's native API requires), you can pass it as part of the messages array, just as you would with OpenAI:
const chatCompletion = await portkey.chat.completions.create({
  messages: [
    { role: 'system', content: 'Your system prompt' },
    { role: 'user', content: 'Say this is a test' }
  ],
  model: 'claude-3-opus-20240229',
  max_tokens: 250
});
console.log(chatCompletion.choices);
completion = portkey.chat.completions.create(
    messages=[
        {"role": "system", "content": "Your system prompt"},
        {"role": "user", "content": "Say this is a test"}
    ],
    model="claude-3-opus-20240229",
    max_tokens=250  # Required field for Anthropic
)
print(completion.choices)
Vision Chat Completion Usage
Portkey's multimodal Gateway fully supports Anthropic's vision models claude-3-sonnet, claude-3-haiku, claude-3-opus, and the latest, claude-3.5-sonnet.
Portkey follows the OpenAI schema, which means you can send your image data to Anthropic in the same format as OpenAI.
Anthropic ONLY accepts base64-encoded images. Unlike OpenAI, it does not support image URLs.
With Portkey, you can use the same format to send base64-encoded images to both Anthropic and OpenAI models.
Here's an example using Anthropic's claude-3.5-sonnet model:
import base64
import httpx
from portkey_ai import Portkey
# Fetch and base64-encode the image
image_url = "https://upload.wikimedia.org/wikipedia/commons/a/a7/Camponotus_flavomarginatus_ant.jpg"
image_data = base64.b64encode(httpx.get(image_url).content).decode("utf-8")

# Initialize the Portkey client
portkey = Portkey(
    api_key="PORTKEY_API_KEY",   # Replace with your Portkey API key
    virtual_key="VIRTUAL_KEY"    # Add your Anthropic virtual key
)

# Create the request
response = portkey.chat.completions.create(
    model="claude-3-5-sonnet-20240620",
    messages=[
        {
            "role": "system",
            "content": "You are a helpful assistant who describes images."
        },
        {
            "role": "user",
            "content": [
                {
                    "type": "image_url",
                    "image_url": {
                        "url": f"data:image/jpeg;base64,{image_data}"
                    }
                }
            ]
        }
    ],
    max_tokens=1400,
)
print(response)
import Portkey from 'portkey-ai';
const portkey = new Portkey({
  apiKey: "PORTKEY_API_KEY",
  virtualKey: "VIRTUAL_KEY"
});

async function getChatCompletionFunctions() {
  const response = await portkey.chat.completions.create({
    model: "claude-3-5-sonnet-20240620",
    messages: [
      {
        role: "system",
        content: "You are a helpful assistant who describes images."
      },
      {
        role: "user",
        content: [
          { type: "text", text: "What's in this image?" },
          {
            type: "image_url",
            image_url: {
              url: "data:image/jpeg;base64,BASE64_IMAGE_DATA"
            }
          }
        ]
      }
    ],
    max_tokens: 300
  });
  console.log(response);
}

// Call the function
getChatCompletionFunctions();
On completion, the request is logged in Portkey, where any image inputs or outputs can be viewed. Portkey automatically renders base64 images to help you debug any issues quickly.
For more info, check out this guide:
Prompt Caching
Portkey also works with Anthropic's prompt caching feature, helping you save time and money on your Anthropic requests. Refer to this guide to learn how to enable it:
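As a quick illustration of the request shape, here's a minimal sketch: caching is driven by Anthropic's cache_control attribute on a content block, so the large, stable part of a prompt can be reused across requests (the model name and prompt text below are placeholders):

from portkey_ai import Portkey

portkey = Portkey(
    api_key="PORTKEY_API_KEY",
    virtual_key="VIRTUAL_KEY"  # Your Anthropic virtual key
)

# Mark the large, reusable part of the prompt with cache_control so
# Anthropic can cache it across requests; note that Anthropic enforces
# a minimum cacheable prompt size per model.
response = portkey.chat.completions.create(
    model="claude-3-5-sonnet-20240620",
    messages=[
        {
            "role": "system",
            "content": [
                {
                    "type": "text",
                    "text": "<your long, reusable system prompt>",  # placeholder
                    "cache_control": {"type": "ephemeral"}
                }
            ]
        },
        {"role": "user", "content": "Say this is a test"}
    ],
    max_tokens=250
)
print(response.choices[0].message.content)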
Managing Anthropic Prompts
You can manage all prompts to Anthropic in the Prompt Library. All of Anthropic's current models are supported, and you can easily start testing different prompts.
Once you're ready with your prompt, you can use the portkey.prompts.completions.create interface to run it in your application.
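For example, here's a minimal sketch of running a saved prompt, where the prompt ID and variable name are placeholders for values from your own prompt template:

# "YOUR_PROMPT_ID" and the "user_input" variable are placeholders; use the
# ID and variables defined in your own saved Portkey prompt.
prompt_completion = portkey.prompts.completions.create(
    prompt_id="YOUR_PROMPT_ID",
    variables={"user_input": "Say this is a test"}
)
print(prompt_completion)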
Next Steps
The complete list of features supported in the SDK is available at the link below.
You'll find more information in the relevant sections: