Portkey provides a robust and secure gateway for integrating various Large Language Models (LLMs) into your applications, including the text generation models supported by Huggingface.
With Portkey, you get features like fast AI gateway access, observability, prompt management, and more, all while securely managing your LLM API keys through a virtual key system.
Provider Slug: huggingface
Portkey SDK Integration with Huggingface
Portkey provides a consistent API to interact with models from various providers. To integrate Huggingface with Portkey:
1. Install the Portkey SDK
Add the Portkey SDK to your application to interact with Huggingface's API through Portkey's gateway.
npm install --save portkey-ai
pip install portkey-ai
2. Initialize Portkey with the Virtual Key
To use Huggingface with Portkey, get your Huggingface Access Token, then add it to Portkey to create your Huggingface virtual key.
import Portkey from 'portkey-ai'
const portkey = new Portkey({
apiKey: "PORTKEY_API_KEY", // defaults to process.env["PORTKEY_API_KEY"]
virtualKey: "VIRTUAL_KEY", // Your Huggingface Access Token
huggingfaceBaseUrl: "HUGGINGFACE_DEDICATED_URL" // Optional: Use this if you have a dedicated server hosted on Huggingface
})
from portkey_ai import Portkey
portkey = Portkey(
api_key="PORTKEY_API_KEY", # Replace with your Portkey API key
virtual_key="VIRTUAL_KEY", # Replace with your virtual key for Huggingface
huggingface_base_url="HUGGINGFACE_DEDICATED_URL" # Optional: Use this if you have a dedicated server hosted on Huggingface
)
from openai import OpenAI
from portkey_ai import PORTKEY_GATEWAY_URL, createHeaders
client = OpenAI(
api_key="HUGGINGFACE_ACCESS_TOKEN",
base_url=PORTKEY_GATEWAY_URL,
default_headers=createHeaders(
api_key="PORTKEY_API_KEY",
provider="huggingface",
huggingface_base_url="HUGGINGFACE_DEDICATED_URL"
)
)
import OpenAI from "openai";
import { PORTKEY_GATEWAY_URL, createHeaders } from "portkey-ai";
const client = new OpenAI({
apiKey: "HUGGINGFACE_ACCESS_TOKEN",
baseURL: PORTKEY_GATEWAY_URL,
defaultHeaders: createHeaders({
provider: "huggingface",
apiKey: "PORTKEY_API_KEY",
huggingfaceBaseUrl: "HUGGINGFACE_DEDICATED_URL"
}),
});
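Whichever client you use, the configuration above reaches the gateway as `x-portkey-*` request headers. The sketch below (plain Python, placeholder values) shows roughly what `createHeaders` assembles; the header names follow Portkey's `x-portkey-api-key` / `x-portkey-provider` / `x-portkey-virtual-key` convention, and the exact mapping of other options is an assumption here, so treat this as illustrative rather than exhaustive:

```python
# Sketch of the headers the createHeaders helper assembles for the gateway.
# Values are placeholders; replace them with your real keys.

def build_portkey_headers(api_key, provider=None, virtual_key=None):
    """Assemble Portkey gateway headers from keyword settings."""
    headers = {"x-portkey-api-key": api_key}
    if provider:
        headers["x-portkey-provider"] = provider
    if virtual_key:
        headers["x-portkey-virtual-key"] = virtual_key
    return headers

headers = build_portkey_headers(
    "PORTKEY_API_KEY",
    provider="huggingface",
    virtual_key="VIRTUAL_KEY",
)
```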
3. Invoke Chat Completions with Huggingface
Use the Portkey instance to send requests to Huggingface. You can also override the virtual key directly in the API call if needed.
const chatCompletion = await portkey.chat.completions.create({
messages: [{ role: 'user', content: 'Say this is a test' }],
model: 'meta-llama/Meta-Llama-3.1-8B-Instruct', // make sure your model is hot
});
console.log(chatCompletion.choices[0].message.content);
chat_completion = portkey.chat.completions.create(
    messages=[{"role": "user", "content": "Say this is a test"}],
    model="meta-llama/Meta-Llama-3.1-8B-Instruct",  # make sure your model is hot
)
print(chat_completion.choices[0].message.content)
chat_completion = client.chat.completions.create(
    messages=[{"role": "user", "content": "Say this is a test"}],
    model="meta-llama/Meta-Llama-3.1-8B-Instruct",  # make sure your model is hot
)
print(chat_completion.choices[0].message.content)
async function main() {
const chatCompletion = await client.chat.completions.create({
model: "meta-llama/Meta-Llama-3.1-8B-Instruct", // make sure your model is hot
messages: [{ role: "user", content: "How many points to Gryffindor?" }],
});
console.log(chatCompletion.choices[0].message.content);
}
main();
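All of the snippets above send the same OpenAI-compatible request body to the gateway's chat completions route. A minimal sketch of that payload (plain Python; the model and message are just the example values used above):

```python
import json

# The OpenAI-compatible body that each snippet above ultimately POSTs
# to the gateway's chat completions endpoint.
payload = {
    "model": "meta-llama/Meta-Llama-3.1-8B-Instruct",
    "messages": [{"role": "user", "content": "Say this is a test"}],
}

body = json.dumps(payload)  # serialized request body
```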
Next Steps
The complete list of features supported in the SDK is available at the link below.
You'll find more information in the relevant sections: