Vercel

Portkey natively integrates with the Vercel AI SDK to make your apps production-ready and reliable. Just import Portkey's Vercel package and use it as a provider in your Vercel AI app to enable all of Portkey's features:

  • Full-stack observability and tracing for all requests

  • Interoperability across 250+ LLMs

  • 50+ built-in SOTA guardrails

  • Simple & semantic caching to save costs & time

  • Route requests conditionally and make them robust with fallbacks, load-balancing, automatic retries, and more

  • Continuous improvement based on user feedback

Getting Started

1. Installation

npm install @portkey-ai/vercel-provider

2. Import & Configure Portkey Object

Sign up for Portkey and get your API key, then configure the Portkey provider in your Vercel app:

import { createPortkey } from '@portkey-ai/vercel-provider';

const portkeyConfig = {
      "provider": "openai", // Choose your provider (e.g., 'anthropic')
      "api_key": "OPENAI_API_KEY",
      "override_params": {
          "model": "gpt-4o" // Select from 250+ models
        }
};

const portkey = createPortkey({
  apiKey: 'YOUR_PORTKEY_API_KEY',
  config: portkeyConfig,
});

Using Vercel Functions

The Portkey provider works with all of Vercel's AI SDK functions, including generateText and streamText.

Here's how to use generateText with Portkey:

import { createPortkey } from '@portkey-ai/vercel-provider';
import { generateText } from 'ai';

const portkeyConfig = {
      "provider": "openai", // Choose your provider (e.g., 'anthropic')
      "api_key": "OPENAI_API_KEY",
      "override_params": {
          "model": "gpt-4o"
        }
};

const portkey = createPortkey({
  apiKey: 'YOUR_PORTKEY_API_KEY',
  config: portkeyConfig,
});

const { text } = await generateText({
  model: portkey.chatModel(''), // Provide an empty string, we defined the model in the config
  prompt: 'What is Portkey?',
});

console.log(text);

And here's streamText:

import { createPortkey } from '@portkey-ai/vercel-provider';
import { streamText } from 'ai';

const portkeyConfig = {
      "provider": "openai", // Choose your provider (e.g., 'anthropic')
      "api_key": "OPENAI_API_KEY",
      "override_params": {
        "model": "gpt-4o" // Select from 250+ models
  } 
};

const portkey = createPortkey({
  apiKey: 'YOUR_PORTKEY_API_KEY',
  config: portkeyConfig,
});

const result = await streamText({
  model: portkey('gpt-4-turbo'), // This gets overwritten by config
  prompt: 'Invent a new holiday and describe its traditions.',
});

for await (const chunk of result.textStream) { // stream the response chunk by chunk
  console.log(chunk);
}

Portkey supports both chatModel and completionModel so you can target chat models or text-completion models. In the examples above we used portkey.chatModel; portkey.completionModel works the same way for completion-style models, as sketched below.
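
A minimal sketch of completionModel, assuming an OpenAI completion-style model is set in the config (gpt-3.5-turbo-instruct here is purely illustrative):

import { createPortkey } from '@portkey-ai/vercel-provider';
import { generateText } from 'ai';

// Illustrative config pointing at a completion-style (non-chat) model.
const portkeyConfig = {
  "provider": "openai",
  "api_key": "OPENAI_API_KEY",
  "override_params": {
    "model": "gpt-3.5-turbo-instruct"
  }
};

const portkey = createPortkey({
  apiKey: 'YOUR_PORTKEY_API_KEY',
  config: portkeyConfig,
});

const { text } = await generateText({
  model: portkey.completionModel(''), // Empty string: the model comes from the config above
  prompt: 'Write a one-line description of Portkey.',
});

console.log(text);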

Tool Calling with Portkey

Portkey supports tool calling with the Vercel AI SDK. Here's how:

import { z } from 'zod';
import { generateText, tool } from 'ai';

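// 'portkey' is the provider instance created earlier with createPortkey(...)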
const result = await generateText({
  model: portkey.chatModel('gpt-4-turbo'),
  tools: {
    weather: tool({
      description: 'Get the weather in a location',
      parameters: z.object({
        location: z.string().describe('The location to get the weather for'),
      }),
      execute: async ({ location }) => ({
        location,
        temperature: 72 + Math.floor(Math.random() * 21) - 10,
      }),
    }),
  },
  prompt: 'What is the weather in San Francisco?',
});

Portkey Features

Portkey helps you make your Vercel app more robust and reliable. The Portkey config is a modular way to enable exactly the features you need.

Interoperability

Portkey allows you to easily switch between 250+ AI models by simply changing the model name in your configuration. This flexibility enables you to adapt to the evolving AI landscape without significant code changes.

Here's how you'd use OpenAI with Portkey's Vercel integration:

const portkeyConfig = {
      "provider": "openai",
      "api_key": "OPENAI_API_KEY",
      "override_params": {
          model: "gpt-4o"
        }
};

Now, to switch to Anthropic, just change your provider slug to anthropic and enter your Anthropic API key along with the model of choice:

const portkeyConfig = {
      "provider": "anthropic",
      "api_key": "ANTHROPIC_API_KEY",
      "override_params": {
          "model": "claude-3-5-sonnet-20240620"
        }
};

Observability

Portkey's OpenTelemetry-compliant observability suite gives you complete control over all your requests, and its analytics dashboards quickly surface the 40+ key insights you're looking for, including cost, tokens, and latency.

Reliability

Here is how you can modify your config to use these Portkey features.

Fallbacks:

import { createPortkey } from '@portkey-ai/vercel-provider';
import { generateText } from 'ai';

const portkeyConfig = {
  "strategy": {
    "mode": "fallback"
  },
  "targets": [
    {
      "provider": "anthropic",
      "api_key": "ANTHROPIC_API_KEY",
      "override_params": {
        "model": "claude-3-5-sonnet-20240620"
      }
    },
    {
      "provider": "openai",
      "api_key": "OPENAI_API_KEY",
      "override_params": {
        "model": "gpt-4o"
      }
    }
  ]
};

const portkey = createPortkey({
  apiKey: 'YOUR_PORTKEY_API_KEY',
  config: portkeyConfig,
});

const { text } = await generateText({
  model: portkey.chatModel(''),
  prompt: 'What is Portkey?',
});

console.log(text);

Caching (simple and semantic):

import { createPortkey } from '@portkey-ai/vercel-provider';
import { generateText } from 'ai';

const portkeyConfig = { "cache": { "mode": "semantic" } }

const portkey = createPortkey({
  apiKey: 'YOUR_PORTKEY_API_KEY',
  config: portkeyConfig,
});

const { text } = await generateText({
  model: portkey.chatModel(''),
  prompt: 'What is Portkey?',
});

console.log(text);

Conditional routing:

const portkeyConfig = {
  "strategy": {
    "mode": "conditional",
    "conditions": [
      ...conditions
    ],
    "default": "target_1"
  },
  "targets": [
    {
      "name": "target_1",
      "provider": "anthropic",
      "api_key": "ANTHROPIC_API_KEY",
      "override_params": {
        "model": "claude-3-5-sonnet-20240620"
      }
    },
    {
      "name": "target_2",
      "provider": "openai",
      "api_key": "OPENAI_API_KEY",
      "override_params": {
        "model": "gpt-4o"
      }
    }
  ]
};
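
The same Config object can also express load balancing and automatic retries. A minimal sketch using Portkey's loadbalance strategy and retry settings (the weights and attempt count below are illustrative):

const portkeyConfig = {
  "retry": {
    "attempts": 3 // retry failed requests up to 3 times
  },
  "strategy": {
    "mode": "loadbalance"
  },
  "targets": [
    {
      "provider": "openai",
      "api_key": "OPENAI_API_KEY",
      "weight": 0.7, // roughly 70% of traffic
      "override_params": { "model": "gpt-4o" }
    },
    {
      "provider": "anthropic",
      "api_key": "ANTHROPIC_API_KEY",
      "weight": 0.3, // roughly 30% of traffic
      "override_params": { "model": "claude-3-5-sonnet-20240620" }
    }
  ]
};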

Guardrails

Portkey Guardrails allow you to enforce LLM behavior in real time, verifying both inputs and outputs against the checks you specify.

You can create Guardrail checks in the Portkey UI and then reference them in your Portkey Configs as before-request or after-request hooks.
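
As a rough sketch, a saved guardrail can be referenced from the Config by its ID in these hooks (the hook field names follow Portkey's Config schema; the IDs below are placeholders):

const portkeyConfig = {
  "before_request_hooks": [
    { "id": "your-input-guardrail-id" } // checks run on the incoming request
  ],
  "after_request_hooks": [
    { "id": "your-output-guardrail-id" } // checks run on the model's response
  ],
  "provider": "openai",
  "api_key": "OPENAI_API_KEY",
  "override_params": { "model": "gpt-4o" }
};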


Portkey Config

Many of these features are driven by Portkey's Config architecture. The Portkey app simplifies creating, managing, and versioning your Configs.

Portkey's Configs are a powerful way to manage and govern your app's behaviour. Learn more about Configs in the Portkey documentation.
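
Once a Config is saved in the Portkey app, you may be able to reference it by ID instead of inlining the JSON. A hedged sketch, assuming createPortkey accepts a saved Config ID string the same way the Portkey SDKs do (the ID below is a placeholder):

import { createPortkey } from '@portkey-ai/vercel-provider';

const portkey = createPortkey({
  apiKey: 'YOUR_PORTKEY_API_KEY',
  config: 'pc-your-saved-config-id', // placeholder for a Config ID from the Portkey app
});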

Portkey enhances the robustness of your AI applications with built-in features such as caching, fallback mechanisms, load balancing, conditional routing, request timeouts, and more.
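
For instance, request timeouts can be set directly in the same config (a minimal sketch; the value is in milliseconds and purely illustrative):

const portkeyConfig = {
  "request_timeout": 10000, // abort any request that takes longer than 10 seconds
  "provider": "openai",
  "api_key": "OPENAI_API_KEY",
  "override_params": { "model": "gpt-4o" }
};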

Learn more about Portkey's AI Gateway features in detail in the Portkey documentation.


For more information on using these features and setting up your Config, please refer to the Portkey documentation.
