Use Cases

Want your Convai characters to run on your own LLM (for control, compliance, or cost reasons)? With Custom LLM integration on the Enterprise plan, you can register private, OpenAI-compatible endpoints and use them across your characters—right from the Core AI Settings dropdown in the Playground.
This guide shows you how to register, update, deregister, and list private models, and how to select them for your characters.
Watch the tutorial below to get a head start.

Convai expects your endpoint to speak the OpenAI API dialect—same style of REST paths and JSON payloads (e.g., base_url like https://…/v1, a model name, and chat/completions semantics).
Good news: many hosted providers and open-source stacks (e.g., vLLM, SGLang) expose OpenAI-compatible endpoints. Check your provider’s docs for “OpenAI-compatible API” or a compatibility mode.
Tip: If your endpoint isn’t OpenAI-compatible yet, deploy it behind a compatible gateway (many inference hosts offer this) before registering in Convai.
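As a sanity check, this is roughly the request shape an OpenAI-compatible chat/completions endpoint must accept. The base URL and model name below are placeholders for illustration, not values Convai requires:

```python
import json

# Placeholder values -- substitute your own endpoint and model name.
BASE_URL = "https://your-host.example.com/v1"  # OpenAI-style base_url ending in /v1
MODEL = "gpt-4o-mini"

# The OpenAI-style chat/completions payload a compatible endpoint must accept.
payload = {
    "model": MODEL,
    "messages": [
        {"role": "system", "content": "You are a helpful game character."},
        {"role": "user", "content": "Hello!"},
    ],
}

# Requests are POSTed to {base_url}/chat/completions as JSON.
url = f"{BASE_URL}/chat/completions"
body = json.dumps(payload)
```

If your endpoint accepts this shape and returns OpenAI-style completion JSON, it should slot into Convai as a private model.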
Prerequisites
You need an Enterprise plan (Custom LLM integration is Enterprise-only) and your Convai API key.
All requests are POST with a JSON body and must include the header:
CONVAI-API-KEY: YOUR_API_KEY
Each response includes a status field and a transactionID you can use for tracing and support.
If you’re stuck at any point, please refer to the API Documentation.
Register a Model
Path: /llm-models/register
Required body: model_group_name, model_name, api_key, is_uncensored
Optional: display_name, base_url (defaults to https://api.openai.com/v1)
cURL (macOS/Linux):
curl -X POST https://api.convai.com/llm-models/register \
-H "Content-Type: application/json" \
-H "CONVAI-API-KEY: YOUR_API_KEY" \
-d '{
"model_group_name": "my-turbo",
"model_name": "gpt-4o-mini",
"api_key": "sk-proxy-123",
"is_uncensored": false,
"display_name": "Turbo (Private)",
"base_url": "https://api.openai.com/v1"
}'

Windows CMD:
curl -X POST "https://api.convai.com/llm-models/register" ^
-H "Content-Type: application/json" ^
-H "CONVAI-API-KEY: YOUR_API_KEY" ^
-d "{\"model_group_name\":\"my-turbo\",\"model_name\":\"gpt-4o-mini\",\"api_key\":\"sk-proxy-123\",\"is_uncensored\":false,\"display_name\":\"Turbo (Private)\",\"base_url\":\"https://api.openai.com/v1\"}"

Success (200):
{
"status": "success",
"model_group_name": "my-turbo",
"model_name": "gpt-4o-mini",
"display_name": "Turbo (Private)",
"message": "Model 'my-turbo' registered successfully",
"transactionID": "14b0cf96-5230-4b0f-a971-2f4f4f6d5e6a"
}

Common errors follow the standard error format shown at the end of this guide.
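As a sketch, the same register call can be prepared with Python's standard library. The request is built but not sent here; the API key and body values are the placeholders from the cURL example above:

```python
import json
import urllib.request

API_KEY = "YOUR_API_KEY"  # placeholder -- use your real Convai API key

body = {
    "model_group_name": "my-turbo",
    "model_name": "gpt-4o-mini",
    "api_key": "sk-proxy-123",  # the key for YOUR endpoint, not your Convai key
    "is_uncensored": False,
    "display_name": "Turbo (Private)",
    "base_url": "https://api.openai.com/v1",
}

# Build the POST request with the required headers.
req = urllib.request.Request(
    "https://api.convai.com/llm-models/register",
    data=json.dumps(body).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "CONVAI-API-KEY": API_KEY,
    },
    method="POST",
)

# To actually send it, uncomment:
# with urllib.request.urlopen(req) as resp:
#     print(json.loads(resp.read()))
```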
Update a Model
Path: /llm-models/update
Required: model_group_name
Optional (≥1 required): display_name, base_url, api_key, is_uncensored
cURL:
curl -X POST https://api.convai.com/llm-models/update \
-H "Content-Type: application/json" \
-H "CONVAI-API-KEY: YOUR_API_KEY" \
-d '{
"model_group_name": "my-turbo",
"display_name": "Turbo v2",
"api_key": "sk-proxy-456"
}'

Windows CMD:
curl -X POST "https://api.convai.com/llm-models/update" ^
-H "Content-Type: application/json" ^
-H "CONVAI-API-KEY: YOUR_API_KEY" ^
-d "{\"model_group_name\":\"my-turbo\",\"display_name\":\"Turbo v2\",\"api_key\":\"sk-proxy-456\"}"

Success (200):
{
"status": "success",
"message": "Model 'my-turbo' updated successfully",
"updated_fields": ["display_name", "api_key"],
"transactionID": "5c0b6c67-97e4-4d78-9d5c-2a3de9d9c8ee"
}

Common errors follow the standard error format shown at the end of this guide.
Deregister a Model
Path: /llm-models/deregister
Required: model_group_name
cURL:
curl -X POST https://api.convai.com/llm-models/deregister \
-H "Content-Type: application/json" \
-H "CONVAI-API-KEY: YOUR_API_KEY" \
-d '{ "model_group_name": "my-turbo" }'

Windows CMD:
curl -X POST "https://api.convai.com/llm-models/deregister" ^
-H "Content-Type: application/json" ^
-H "CONVAI-API-KEY: YOUR_API_KEY" ^
-d "{\"model_group_name\":\"my-turbo\"}"

Success (200):
{
"status": "success",
"message": "Model 'my-turbo' deregistered successfully",
"transactionID": "b11ac4f7-4cd3-4f43-8a79-3c942a14a8c9"
}

Important: Before deregistering, switch all characters using this model to another model—or those characters will stop working.
List Models
Path: /llm-models/list
Body: (none)
cURL:
curl -X POST https://api.convai.com/llm-models/list \
-H "Content-Type: application/json" \
-H "CONVAI-API-KEY: YOUR_API_KEY"Windows CMD:
curl -X POST "https://api.convai.com/llm-models/list" ^
-H "Content-Type: application/json" ^
-H "CONVAI-API-KEY: YOUR_API_KEY"

Success (200):
{
"status": "success",
"models": [
{
"model_group_name": "my-turbo",
"model_name": "gpt-4o-mini",
"display_name": "Turbo (Private)",
"base_url": "https://api.openai.com/v1",
"is_uncensored": false,
"category": "Private",
"created_at": "2025-07-08T09:41:38.123Z"
}
],
"count": 1,
"transactionID": "2caa4b99-eda9-46e2-a9ee-b4f251afcb1f"
}

Error Format
All errors follow:
{ "status": "error", "message": "Explanation", "transactionID": "…" }
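Because every error shares this shape, a client can branch on the status field. A minimal sketch, assuming raw JSON text from any of the endpoints above (the helper name is ours, not part of any Convai SDK):

```python
import json

def check_convai_response(raw):
    """Parse a Convai llm-models response and raise on the documented
    error shape: {"status": "error", "message": ..., "transactionID": ...}."""
    data = json.loads(raw)
    if data.get("status") == "error":
        raise RuntimeError(
            "Convai error: {} (transactionID: {})".format(
                data.get("message"), data.get("transactionID")
            )
        )
    return data

# Success responses pass through unchanged.
ok = check_convai_response(
    '{"status": "success", "message": "ok", "transactionID": "t-1"}'
)
```

Logging the transactionID from failed calls makes support requests much easier to trace.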
FAQ

Does this work with any LLM?
It works with endpoints that are OpenAI-compatible. Many hosted providers and OSS stacks support this. Confirm compatibility in your provider’s docs.
Can I have multiple private models?
Yes. Register as many as you need; use /llm-models/list to see them and pick the right one in Core AI Settings.
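For example, you could pick a registered model out of the /llm-models/list response by its model_group_name. The listing below mirrors the documented response shape, trimmed to two fields; the second model is a hypothetical example:

```python
import json

# Trimmed sample of a /llm-models/list response (shape from the docs above).
listing = json.loads("""{
  "status": "success",
  "models": [
    {"model_group_name": "my-turbo", "display_name": "Turbo (Private)"},
    {"model_group_name": "my-safe", "display_name": "Safe (Private)"}
  ],
  "count": 2
}""")

def find_model(listing, group_name):
    """Return the registered model with the given model_group_name, or None."""
    return next(
        (m for m in listing["models"] if m["model_group_name"] == group_name),
        None,
    )

match = find_model(listing, "my-turbo")
```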
Will my private model appear in the UI?
Yes—after a successful register, refresh the Playground and check Core AI Settings → Model.
What happens if my remote endpoint is down?
Your characters using that model will fail to respond. Monitor your host and keep a fallback model ready.