Portkey's suite of features (AI gateway, observability, prompt management, and continuous fine-tuning) is enabled for the OSS models (Llama2, Mistral, Zephyr, and more) available on Anyscale endpoints.
Provider Slug: anyscale
Portkey SDK Integration with Anyscale
1. Install the Portkey SDK
npm install --save portkey-ai
pip install portkey-ai
2. Initialize Portkey with Anyscale Virtual Key
To use Anyscale with Portkey, get your API key from Anyscale, then add it to Portkey to create the virtual key.
import Portkey from 'portkey-ai'
const portkey = new Portkey({
    apiKey: "PORTKEY_API_KEY", // defaults to process.env["PORTKEY_API_KEY"]
    virtualKey: "ANYSCALE_VIRTUAL_KEY" // Your Anyscale Virtual Key
})
from portkey_ai import Portkey
portkey = Portkey(
    api_key="PORTKEY_API_KEY",  # Replace with your Portkey API key
    virtual_key="ANYSCALE_VIRTUAL_KEY"  # Replace with your virtual key for Anyscale
)
3. Invoke Chat Completions with Anyscale
const chatCompletion = await portkey.chat.completions.create({
    messages: [{ role: 'user', content: 'Say this is a test' }],
    model: 'mistralai/Mistral-7B-Instruct-v0.1',
});
console.log(chatCompletion.choices);
completion = portkey.chat.completions.create(
    messages=[{"role": "user", "content": "Say this is a test"}],
    model="mistralai/Mistral-7B-Instruct-v0.1"
)
print(completion.choices)
Directly Using Portkey's REST API
Alternatively, you can call Anyscale models directly through Portkey's REST API. It works exactly like the OpenAI API, with two differences (see the example after this list):
You send your requests to Portkey's complete Gateway URL https://api.portkey.ai/v1/chat/completions
You have to add Portkey-specific headers:
x-portkey-api-key for sending your Portkey API key
x-portkey-virtual-key for sending your provider's virtual key (alternatively, if you are not using virtual keys, you can send your provider's Authorization header and pass the x-portkey-provider header along with it)
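Putting those two pieces together, here is a minimal sketch of the same chat completion request sent straight to the gateway with fetch (available natively in Node 18+; run it inside an async context or an ES module). The URL, headers, model, and messages are the ones shown above:

// Minimal sketch: calling Portkey's gateway directly over REST with fetch.
// Uses the gateway URL and the Portkey-specific headers described above.
const response = await fetch('https://api.portkey.ai/v1/chat/completions', {
    method: 'POST',
    headers: {
        'Content-Type': 'application/json',
        'x-portkey-api-key': 'PORTKEY_API_KEY',          // your Portkey API key
        'x-portkey-virtual-key': 'ANYSCALE_VIRTUAL_KEY'  // your Anyscale virtual key
    },
    body: JSON.stringify({
        model: 'mistralai/Mistral-7B-Instruct-v0.1',
        messages: [{ role: 'user', content: 'Say this is a test' }]
    })
});

const data = await response.json();
console.log(data.choices);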
You can also use the baseURL param in the standard OpenAI SDKs and make calls to Portkey + Anyscale directly from there. As in the REST API example, you only need to change the baseURL and add defaultHeaders to your instance. The Portkey SDK provides helpers (PORTKEY_GATEWAY_URL and createHeaders) to make this simpler:
import OpenAI from 'openai'; // We're using the v4 SDK
import { PORTKEY_GATEWAY_URL, createHeaders } from 'portkey-ai'
const anyscale = new OpenAI({
    apiKey: 'ANYSCALE_API_KEY',
    baseURL: PORTKEY_GATEWAY_URL,
    defaultHeaders: createHeaders({
        provider: "anyscale",
        apiKey: "PORTKEY_API_KEY" // defaults to process.env["PORTKEY_API_KEY"]
    })
});

async function main() {
    const chatCompletion = await anyscale.chat.completions.create({
        messages: [{ role: 'user', content: 'Say this is a test' }],
        model: 'mistralai/Mistral-7B-Instruct-v0.1',
    });
    console.log(chatCompletion.choices);
}

main();
from openai import OpenAI
from portkey_ai import PORTKEY_GATEWAY_URL, createHeaders
anyscale = OpenAI(
    api_key="ANYSCALE_API_KEY",  # defaults to os.environ.get("OPENAI_API_KEY")
    base_url=PORTKEY_GATEWAY_URL,
    default_headers=createHeaders(
        provider="anyscale",
        api_key="PORTKEY_API_KEY"  # defaults to os.environ.get("PORTKEY_API_KEY")
    )
)

chat_complete = anyscale.chat.completions.create(
    model="mistralai/Mistral-7B-Instruct-v0.1",
    messages=[{"role": "user", "content": "Say this is a test"}],
)
print(chat_complete.choices[0].message.content)
This request will be automatically logged by Portkey. You can view this in your logs dashboard. Portkey logs the tokens utilized, execution time, and cost for each request. Additionally, you can delve into the details to review the precise request and response data.
Managing Anyscale Prompts
Creating Prompts
Use the Portkey prompt playground to set variables and try out various model params to get the right output.
Using Prompts
Deploy the prompts using the Portkey SDK or REST API
import Portkey from 'portkey-ai'
const portkey = new Portkey({
    apiKey: "PORTKEY_API_KEY", // defaults to process.env["PORTKEY_API_KEY"]
})

// Make the prompt completion call with the variables
const promptCompletion = await portkey.prompts.completions.create({
    promptID: "YOUR_PROMPT_ID",
    variables: {
        // Required variables for prompt
    }
})
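The same call can also be made over the REST API. Below is a minimal sketch with fetch, assuming the prompt completions route /prompts/{promptId}/completions on the gateway (the route is drawn from Portkey's API reference; substitute your own prompt ID):

// Sketch: running a saved prompt over REST, under the route assumption above.
// Replace YOUR_PROMPT_ID with the ID of the prompt you created in the playground.
const response = await fetch('https://api.portkey.ai/v1/prompts/YOUR_PROMPT_ID/completions', {
    method: 'POST',
    headers: {
        'Content-Type': 'application/json',
        'x-portkey-api-key': 'PORTKEY_API_KEY'
    },
    body: JSON.stringify({
        variables: {
            // Required variables for prompt
        }
    })
});

console.log(await response.json());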