Moderations
Create Moderation
Identify potentially harmful content in text and images. Currently, this **only** works with [OpenAI's Moderations endpoint](https://platform.openai.com/docs/guides/moderation).
POST /v1/moderations
Authorizations
Body
`input` (one of, Required)
The input text to classify, either as a single string or as an array of strings.
- string. Default: `""`. Example: `I want to kill them.`
- string[]. Example: `["I want to kill them."]`
`model` (any of, Optional)
Default: `text-moderation-latest`. Example: `text-moderation-stable`.
Two content moderation models are available: `text-moderation-stable` and `text-moderation-latest`. The default is `text-moderation-latest`, which will be automatically upgraded over time. This ensures you are always using the most accurate model. If you use `text-moderation-stable`, advance notice will be provided before the model is updated. Accuracy of `text-moderation-stable` may be slightly lower than for `text-moderation-latest`.
May be passed either as a free-form string or as one of the enum values above.
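For example, a request body that uses the array form of `input` and pins the model might look like this (the input strings are illustrative):

```json
{
  "input": ["I want to kill them.", "Have a nice day."],
  "model": "text-moderation-stable"
}
```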
Responses
200
OK
application/json
curl https://api.portkey.ai/v1/moderations \
-H "Content-Type: application/json" \
-H "x-portkey-api-key: $PORTKEY_API_KEY" \
-H "x-portkey-virtual-key: $PORTKEY_PROVIDER_VIRTUAL_KEY" \
-d '{
"input": "I want to kill them."
}'
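The same request can be made from Python. Below is a minimal sketch using only the standard library; the `build_moderation_request` helper is illustrative and not part of any official SDK:

```python
import json
import os
import urllib.request

PORTKEY_BASE_URL = "https://api.portkey.ai/v1"

def build_moderation_request(text, api_key, virtual_key, model=None):
    """Assemble the URL, headers, and JSON body for a moderation call.
    (Illustrative helper, not part of the Portkey SDK.)"""
    headers = {
        "Content-Type": "application/json",
        "x-portkey-api-key": api_key,
        "x-portkey-virtual-key": virtual_key,
    }
    body = {"input": text}
    if model is not None:
        body["model"] = model  # e.g. "text-moderation-stable"
    return f"{PORTKEY_BASE_URL}/moderations", headers, body

if __name__ == "__main__":
    url, headers, body = build_moderation_request(
        "I want to kill them.",
        api_key=os.environ["PORTKEY_API_KEY"],
        virtual_key=os.environ["PORTKEY_PROVIDER_VIRTUAL_KEY"],
    )
    req = urllib.request.Request(
        url, data=json.dumps(body).encode(), headers=headers, method="POST"
    )
    with urllib.request.urlopen(req) as resp:
        print(json.load(resp))
```

The headers mirror the curl example above: `x-portkey-api-key` authenticates with Portkey, and `x-portkey-virtual-key` selects the upstream provider credentials.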
Example response (200 OK):
{
"id": "text",
"model": "text",
"results": [
{
"flagged": true,
"categories": {
"hate": true,
"hate/threatening": true,
"harassment": true,
"harassment/threatening": true,
"self-harm": true,
"self-harm/intent": true,
"self-harm/instructions": true,
"sexual": true,
"sexual/minors": true,
"violence": true,
"violence/graphic": true
},
"category_scores": {
"hate": 1,
"hate/threatening": 1,
"harassment": 1,
"harassment/threatening": 1,
"self-harm": 1,
"self-harm/intent": 1,
"self-harm/instructions": 1,
"sexual": 1,
"sexual/minors": 1,
"violence": 1,
"violence/graphic": 1
}
}
]
}
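A response like the one above can be post-processed by collecting the category names that were actually flagged. A small sketch (the function name is illustrative, not part of the API):

```python
def flagged_categories(moderation_response):
    """Return the sorted category names flagged in any result.
    (Illustrative helper, not part of the Moderations API.)"""
    flagged = []
    for result in moderation_response.get("results", []):
        if result.get("flagged"):
            flagged.extend(
                name
                for name, hit in result.get("categories", {}).items()
                if hit
            )
    return sorted(set(flagged))
```

With the sample response above, where every category is `true`, this would return all eleven category names. Note that real `category_scores` values are floats between 0 and 1, not the placeholder `1` values shown in the example.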