Submit Tool Outputs to Run

Supported Providers
  • OpenAI

When a run has `status: "requires_action"` and `required_action.type` is `submit_tool_outputs`, use this endpoint to submit the outputs from the tool calls once they're all completed. All outputs must be submitted in a single request.
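
Below is a minimal sketch of this flow in Python, assuming the OpenAI Python SDK (openai >= 1.x, Assistants v2 beta namespace) pointed at Portkey's base URL with the headers shown in the example request below; execute_tool is a hypothetical local dispatcher, not part of either SDK.

import os

from openai import OpenAI

client = OpenAI(
    api_key="unused",  # provider auth is handled by the Portkey headers below
    base_url="https://api.portkey.ai/v1",
    default_headers={
        "x-portkey-api-key": os.environ["PORTKEY_API_KEY"],
        "x-portkey-virtual-key": os.environ["PORTKEY_PROVIDER_VIRTUAL_KEY"],
        "OpenAI-Beta": "assistants=v2",
    },
)

def execute_tool(fn):
    # Hypothetical dispatcher: map fn.name / fn.arguments to your own tool code
    # and return the result as a string.
    return "70 degrees and sunny."

run = client.beta.threads.runs.retrieve(thread_id="thread_123", run_id="run_123")

if run.status == "requires_action" and run.required_action.type == "submit_tool_outputs":
    # One output per pending tool call -- all outputs go in a single request.
    outputs = [
        {"tool_call_id": call.id, "output": execute_tool(call.function)}
        for call in run.required_action.submit_tool_outputs.tool_calls
    ]
    run = client.beta.threads.runs.submit_tool_outputs(
        thread_id="thread_123",
        run_id="run_123",
        tool_outputs=outputs,
    )
    print(run.status)  # typically "queued" while the run resumes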

POST /threads/{thread_id}/runs/{run_id}/submit_tool_outputs

Authorizations

x-portkey-api-key (string, required)

Your Portkey API key, sent as the x-portkey-api-key header.

x-portkey-virtual-key (string)

The virtual key that identifies the underlying OpenAI provider, sent as the x-portkey-virtual-key header (see the example request below).

Path parameters
thread_id (string, required)

The ID of the thread to which this run belongs.

run_id (string, required)

The ID of the run that requires the tool output submission.

Body

tool_outputs (array of objects, required)

The outputs from the tool calls, one entry per pending call. Each entry contains a tool_call_id (the ID of the tool call in the run's required_action object) and an output string with the tool's result.

stream (boolean | null, optional)

If true, returns a stream of events that happen during the Run as server-sent events, terminating when the Run enters a terminal state with a `data: [DONE]` message.
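
A hedged sketch of the stream option, reusing the client object from the earlier sketch: with stream=True the SDK consumes the server-sent events for you (including the `data: [DONE]` terminator) and yields typed event objects until the run reaches a terminal state.

stream = client.beta.threads.runs.submit_tool_outputs(
    thread_id="thread_123",
    run_id="run_123",
    tool_outputs=[{"tool_call_id": "call_001", "output": "70 degrees and sunny."}],
    stream=True,
)

for event in stream:
    # Event names follow the Assistants streaming format, e.g.
    # thread.run.step.delta, thread.message.delta, thread.run.completed.
    print(event.event)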

Responses

200 OK (application/json)

Returns the updated run object.
Example request
curl https://api.portkey.ai/v1/threads/thread_123/runs/run_123/submit_tool_outputs \
  -H "x-portkey-api-key: $PORTKEY_API_KEY" \
  -H "x-portkey-virtual-key: $PORTKEY_PROVIDER_VIRTUAL_KEY" \
  -H "Content-Type: application/json" \
  -H "OpenAI-Beta: assistants=v2" \
  -d '{
    "tool_outputs": [
      {
        "tool_call_id": "call_001",
        "output": "70 degrees and sunny."
      }
    ]
  }'
Example response (200 OK)

{
  "id": "text",
  "object": "thread.run",
  "created_at": 1,
  "thread_id": "text",
  "assistant_id": "text",
  "status": "queued",
  "required_action": {
    "type": "submit_tool_outputs",
    "submit_tool_outputs": {
      "tool_calls": [
        {
          "id": "text",
          "type": "function",
          "function": {
            "name": "text",
            "arguments": "text"
          }
        }
      ]
    }
  },
  "last_error": {
    "code": "server_error",
    "message": "text"
  },
  "expires_at": 1,
  "started_at": 1,
  "cancelled_at": 1,
  "failed_at": 1,
  "completed_at": 1,
  "incomplete_details": {
    "reason": "max_completion_tokens"
  },
  "model": "text",
  "instructions": "text",
  "tools": [
    {
      "type": "code_interpreter"
    }
  ],
  "metadata": {},
  "usage": {
    "completion_tokens": 1,
    "prompt_tokens": 1,
    "total_tokens": 1
  },
  "temperature": 1,
  "top_p": 1,
  "max_prompt_tokens": 1,
  "max_completion_tokens": 1,
  "truncation_strategy": {
    "type": "auto",
    "last_messages": 1
  },
  "tool_choice": "none",
  "parallel_tool_calls": true,
  "response_format": "none"
}
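
If you submit without stream, the 200 response above is a snapshot of the run (often still "queued"); a common follow-up is to poll until the run reaches a terminal status and then inspect fields such as usage and last_error. A small sketch, again assuming the client from the first example:

import time

run = client.beta.threads.runs.retrieve(thread_id="thread_123", run_id="run_123")
terminal = {"completed", "failed", "cancelled", "expired", "incomplete"}
while run.status not in terminal:
    time.sleep(1)
    run = client.beta.threads.runs.retrieve(thread_id="thread_123", run_id="run_123")

print(run.status, run.usage, run.last_error)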
