Make Your First Request
Integrate Portkey and analyze your first LLM call in 2 minutes!
Sign up or log in to your Portkey account. Grab your account's API key from the "Settings" page.
Based on your access level, you might see the relevant permissions on the API key modal - tick the ones you'd like, name your API key, and save it.
Portkey offers a variety of integration options, including SDKs, REST APIs, and native connections with platforms like OpenAI, Langchain, and LlamaIndex, among others.
If you're using the OpenAI SDK, import the Portkey SDK and configure it within your OpenAI client object:
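Here's a minimal sketch in Python. The `portkey_ai` package exports `PORTKEY_GATEWAY_URL` and a `createHeaders` helper that route your OpenAI SDK traffic through Portkey; the placeholder keys are assumptions, so substitute your own:

```python
from openai import OpenAI
from portkey_ai import PORTKEY_GATEWAY_URL, createHeaders

# Point the OpenAI client at Portkey's gateway and attach Portkey headers.
client = OpenAI(
    api_key="YOUR_OPENAI_API_KEY",        # placeholder: your provider key
    base_url=PORTKEY_GATEWAY_URL,         # Portkey's gateway endpoint
    default_headers=createHeaders(
        provider="openai",
        api_key="YOUR_PORTKEY_API_KEY",   # placeholder: from the "Settings" page
    ),
)

completion = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Say hello!"}],
)
print(completion.choices[0].message.content)
```

Your existing OpenAI calls stay unchanged; only the client construction differs, so every request now flows through Portkey and shows up in your dashboard.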
You can also use the Portkey SDK / REST APIs directly to make the chat completion calls. This is a more versatile way to make LLM calls across any provider:
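Below is a sketch using the Portkey Python SDK directly (the REST API mirrors the same OpenAI-compatible chat-completions shape). The `virtual_key` value is an assumption here; it refers to a provider key you've stored in your Portkey account, and you could instead pass the provider name and its key explicitly:

```python
from portkey_ai import Portkey

# The Portkey client exposes an OpenAI-compatible chat completions API
# and can target any supported provider.
portkey = Portkey(
    api_key="YOUR_PORTKEY_API_KEY",   # placeholder: from the "Settings" page
    virtual_key="YOUR_VIRTUAL_KEY",   # placeholder: a provider key saved in Portkey
)

completion = portkey.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Say hello!"}],
)
print(completion.choices[0].message.content)
```

Because the call shape is provider-agnostic, switching providers is typically a matter of pointing at a different key rather than rewriting your request code.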
Once the integration is ready, you can see your requests reflected on your Portkey dashboard.
Now that you're up and running with Portkey, you can dive into the various Portkey features to learn about all of the supported functionalities:
Portkey is hosted on edge workers around the world, ensuring minimal-latency roundtrips. Our benchmarks estimate a total added latency of 20-40ms.
Our edge worker locations:
Portkey is ISO 27001 and SOC 2 certified, and GDPR compliant. These certifications reflect the best practices we follow for securing our services and for data storage and retrieval. All your data is encrypted in transit and at rest.
If you're still concerned about your data passing through Portkey, we recommend one of the options below:
On request, we can enable a feature that does NOT store any of your request and response body objects in the Portkey datastores or our logs.
For enterprises, we offer managed hosting to deploy Portkey inside private clouds.
If you need to talk through these options, feel free to drop us a note at hello@portkey.ai
Portkey has been tested to handle millions of requests per second. We serve over 10M requests every day with 99.99% uptime. We're built on top of scalable infrastructure and can handle huge loads without breaking a sweat.
We do not currently impose any explicit timeout on our free or paid plans. Some users have experienced timeouts originating from other frameworks, but Portkey does not time out requests on our end.
While you're here, why not give us a star on GitHub? It helps us a lot!
Yes! We support registrations with Microsoft accounts - this is currently in beta. Please reach out at support@portkey.ai for access to MS login.
We're available all the time on our support email - support@portkey.ai
Azure OpenAI
Anthropic
Langchain
LlamaIndex
Ollama
Others
Observability
AI Gateway
Prompt Library
Autonomous Fine-Tuning
Guardrails
Enterprise