Portkey provides a robust and secure gateway to seamlessly integrate open-source and fine-tuned LLMs from Predibase into your applications. With Portkey, you can leverage powerful features like a fast AI gateway, caching, observability, prompt management, and more, while securely managing your LLM API keys through a virtual key system.
Provider Slug: predibase
Portkey SDK Integration with Predibase
Using Portkey, you can call your Predibase models in the familiar OpenAI spec and try out your existing pipelines on Predibase fine-tuned models with just a 2-line code change.
1. Install the Portkey SDK
Install the Portkey SDK in your project using npm or pip:
npm install --save portkey-ai
pip install portkey-ai
2. Initialize Portkey with the Virtual Key
To use Predibase with Portkey, get your Predibase API key, then add it to Portkey to create the virtual key.
import Portkey from 'portkey-ai'
const portkey = new Portkey({
apiKey: "PORTKEY_API_KEY", // defaults to process.env["PORTKEY_API_KEY"]
virtualKey: "VIRTUAL_KEY" // Your Predibase Virtual Key
})
from portkey_ai import Portkey
portkey = Portkey(
api_key="PORTKEY_API_KEY", # Replace with your Portkey API key
virtual_key="VIRTUAL_KEY" # Replace with your virtual key for Predibase
)
import OpenAI from "openai";
import { PORTKEY_GATEWAY_URL, createHeaders } from "portkey-ai";
const portkey = new OpenAI({
baseURL: PORTKEY_GATEWAY_URL,
defaultHeaders: createHeaders({
apiKey: "PORTKEY_API_KEY",
virtualKey: "PREDIBASE_VIRTUAL_KEY",
}),
});
from openai import OpenAI
from portkey_ai import PORTKEY_GATEWAY_URL, createHeaders
portkey = OpenAI(
base_url=PORTKEY_GATEWAY_URL,
default_headers=createHeaders(
api_key="PORTKEY_API_KEY",
virtual_key="PREDIBASE_VIRTUAL_KEY"
)
)
3. Invoke Chat Completions on Predibase Serverless Endpoints
Sending Predibase Tenant ID
const chatCompletion = await portkey.chat.completions.create({
messages: [{ role: 'user', content: 'Say this is a test' }],
model: 'llama-3-8b',
user: 'PREDIBASE_TENANT_ID'
});
console.log(chatCompletion.choices);
completion = portkey.chat.completions.create(
    messages=[{"role": "user", "content": "Say this is a test"}],
    model="llama-3-8b",
    user="PREDIBASE_TENANT_ID"
)
print(completion)
Using Portkey, you can easily route to your dedicated deployments as well. Just pass the dedicated deployment name in the model param, as in the sketch below:
model = "my-dedicated-mistral-deployment-name"
JSON Schema Mode
You can enforce a JSON schema for all Predibase models: set the response_format type to json_object and pass the relevant schema while making your request. Portkey logs will show your JSON output separately.
# Using Pydantic to define the schema
from pydantic import BaseModel, constr
# Define JSON Schema
class Character(BaseModel):
name: constr(max_length=10)
age: int
strength: int
completion = portkey.chat.completions.create(
    messages=[{"role": "user", "content": "Say this is a test"}],
    model="llama-3-8b",
    user="PREDIBASE_TENANT_ID",
    response_format={
        "type": "json_object",
        "schema": Character.schema(),
    },
)
print(completion)
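Since the model returns the JSON as a string in the message content, you can parse it and validate it against the same Pydantic model. A minimal sketch, assuming the standard OpenAI-style response shape shown above:

import json

# Parse the JSON string returned in the message content and
# validate it against the Character schema defined earlier
raw = completion.choices[0].message.content
character = Character(**json.loads(raw))
print(character)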
The complete list of features supported in the SDK is available at the link below.
You'll find more information in the relevant sections:
Predibase offers LLMs like Llama 3, Mistral, and Gemma on its serverless endpoints that you can query instantly.
Predibase expects your account tenant ID along with the API key in each request. With Portkey, you can send your tenant ID with the user param while making your request.