Bring Your Own LLM to Convai: How to Integrate Custom Models

By Convai Team
November 28, 2025

Want your Convai characters to run on your LLM (for control, compliance, or cost reasons)? With Custom LLM integration on the Enterprise plan, you can register private, OpenAI-compatible endpoints and use them across your characters—right from the Core AI Settings dropdown in the Playground.

This guide shows you how to:

  • Register a private LLM
  • Update it
  • Deregister it
  • List everything linked to your key
  • And switch your avatar to the new model

Watch the tutorial below to get a head start:

What “OpenAI-compatible” means (and why it matters)

Convai expects your endpoint to speak the OpenAI API dialect—same style of REST paths and JSON payloads (e.g., base_url like https://…/v1, a model name, and chat/completions semantics).
Good news: many hosted providers and open-source stacks (e.g., vLLM, SGLang) expose OpenAI-compatible endpoints. Check your provider’s docs for “OpenAI-compatible API” or a compatibility mode.

Tip: If your endpoint isn’t OpenAI-compatible yet, deploy it behind a compatible gateway (many inference hosts offer this) before registering in Convai.
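Before registering, you can sanity-check compatibility by sending a minimal chat completion request straight to your endpoint. Below is a small Python sketch assuming the requests library; the base URL, key, and model name are placeholders for your own values:

import requests

BASE_URL = "https://your-llm-host.example.com/v1"  # placeholder: your OpenAI-compatible base_url
API_KEY = "sk-proxy-123"                           # placeholder: the secret your endpoint expects

# An OpenAI-style chat completion request; a compatible endpoint should
# accept this payload shape and return a "choices" array in the response.
resp = requests.post(
    BASE_URL + "/chat/completions",
    headers={
        "Authorization": "Bearer " + API_KEY,
        "Content-Type": "application/json",
    },
    json={
        "model": "gpt-4o-mini",  # the model_name your endpoint expects
        "messages": [{"role": "user", "content": "Say hello."}],
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])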

Quick start (Enterprise plan)

Prerequisites

  • Your Convai API key (from Playground).
  • Your LLM endpoint base URL (OpenAI-compatible).
  • The secret API key for that endpoint (Convai stores it and uses it to call your model).
  • A unique model_group_name (immutable after registration).
  • The model_name your endpoint expects (e.g., gpt-4o-mini, mixtral-8x22b-base, gemini-2.5-flash).
  • Optional: display_name (friendly label shown in UI).
  • is_uncensored flag (boolean) describing the endpoint’s moderation posture.

Endpoints (Base: https://api.convai.com)

All requests are POST with JSON and must include the header:
CONVAI-API-KEY: YOUR_API_KEY

Each response includes:

  • status: "success" | "error"
  • message: string
  • transactionID: string
  • Plus endpoint-specific fields

If you’re stuck at any point, please refer to the API Documentation.
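If you prefer scripting these calls over cURL, a single helper covers all four endpoints. This is a minimal sketch assuming the Python requests library and a CONVAI_API_KEY environment variable; the paths, header, and response fields are the ones documented in this guide:

import os
import requests

CONVAI_BASE = "https://api.convai.com"

def convai_post(path, payload=None):
    """POST JSON to a Convai llm-models endpoint with the required header."""
    resp = requests.post(
        CONVAI_BASE + path,
        headers={
            "Content-Type": "application/json",
            "CONVAI-API-KEY": os.environ["CONVAI_API_KEY"],  # keep the key out of source code
        },
        json=payload,  # None means no body (used by /llm-models/list)
        timeout=30,
    )
    data = resp.json()
    # Every response carries status, message, and transactionID;
    # logging the transactionID makes support requests easy to trace.
    print(data.get("status"), "|", data.get("message"), "| txn:", data.get("transactionID"))
    if data.get("status") != "success":
        raise RuntimeError(data.get("message", "Convai request failed"))
    return data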

1) Register a model

Path: /llm-models/register
Required body: model_group_name, model_name, api_key, is_uncensored
Optional: display_name, base_url (defaults to https://api.openai.com/v1)

cURL (macOS/Linux):

curl -X POST https://api.convai.com/llm-models/register \
  -H "Content-Type: application/json" \
  -H "CONVAI-API-KEY: YOUR_API_KEY" \
  -d '{
        "model_group_name": "my-turbo",
        "model_name": "gpt-4o-mini",
        "api_key": "sk-proxy-123",
        "is_uncensored": false,
        "display_name": "Turbo (Private)",
        "base_url": "https://api.openai.com/v1"
      }'

Windows CMD:

curl -X POST "https://api.convai.com/llm-models/register" ^  
-H "Content-Type: application/json" ^  
-H "CONVAI-API-KEY: YOUR_API_KEY" ^  
-d "{\"model_group_name\":\"my-turbo\",\"model_name\":\"gpt-4o-mini\",\"api_key\":\"sk-proxy-123\",\"is_uncensored\":false,\"display_name\":\"Turbo (Private)\",\"base_url\":\"https://api.openai.com/v1\"}"

Success (200):

{
  "status": "success",
  "model_group_name": "my-turbo",
  "model_name": "gpt-4o-mini",
  "display_name": "Turbo (Private)",
  "message": "Model 'my-turbo' registered successfully",
  "transactionID": "14b0cf96-5230-4b0f-a971-2f4f4f6d5e6a"
}

Common errors:

  • 400 Missing required field
  • 409 Duplicate model_group_name
  • 500 Internal error
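
The same registration from Python, reusing the convai_post helper sketched in the Endpoints section (the values mirror the cURL example above):

# Register a private model group; model_group_name is immutable after this call.
convai_post("/llm-models/register", {
    "model_group_name": "my-turbo",
    "model_name": "gpt-4o-mini",
    "api_key": "sk-proxy-123",               # the secret for your endpoint, not your Convai key
    "is_uncensored": False,
    "display_name": "Turbo (Private)",
    "base_url": "https://api.openai.com/v1", # optional; this is the default
})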

2) Update a model

Path: /llm-models/update
Required: model_group_name
Optional (≥1 required): display_name, base_url, api_key, is_uncensored

cURL:

curl -X POST https://api.convai.com/llm-models/update \
  -H "Content-Type: application/json" \
  -H "CONVAI-API-KEY: YOUR_API_KEY" \
  -d '{
        "model_group_name": "my-turbo",
        "display_name": "Turbo v2",
        "api_key": "sk-proxy-456"
      }'

Windows CMD:

curl -X POST "https://api.convai.com/llm-models/update" ^  
-H "Content-Type: application/json" ^  
-H "CONVAI-API-KEY: YOUR_API_KEY" ^  
-d "{\"model_group_name\":\"my-turbo\",\"display_name\":\"Turbo v2\",\"api_key\":\"sk-proxy-456\"}"

Success (200):

"status": "success",  
"message": "Model 'my-turbo' updated successfully",  
"updated_fields": ["display_name", "api_key"],  
"transactionID": "5c0b6c67-97e4-4d78-9d5c-2a3de9d9c8ee"
}

Common errors:

  • 404 Not found / not owned
  • 409 display_name already exists
  • 400 No valid fields to update
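
The equivalent update from Python, again reusing the convai_post sketch (remember that at least one optional field must be supplied):

# Update mutable fields on an existing model group; model_group_name itself cannot change.
convai_post("/llm-models/update", {
    "model_group_name": "my-turbo",
    "display_name": "Turbo v2",
    "api_key": "sk-proxy-456",  # e.g., after rotating the key on your endpoint
})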

3) Deregister a model

Path: /llm-models/deregister
Required: model_group_name

cURL:

curl -X POST https://api.convai.com/llm-models/deregister \
  -H "Content-Type: application/json" \
  -H "CONVAI-API-KEY: YOUR_API_KEY" \
  -d '{ "model_group_name": "my-turbo" }'

Windows CMD:

curl -X POST "https://api.convai.com/llm-models/deregister" ^  
-H "Content-Type: application/json" ^  
-H "CONVAI-API-KEY: YOUR_API_KEY"-d "{\"model_group_name\":\"my-turbo\"}"

Success (200):

{  
"status": "success",  
"message": "Model 'my-turbo' deregistered successfully",  
"transactionID": "b11ac4f7-4cd3-4f43-8a79-3c942a14a8c9"
}

Important: Before deregistering, switch all characters using this model to another model—or those characters will stop working.
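
And from Python, using the same convai_post sketch:

# Deregister the model group. Migrate any characters still using it first,
# or they will stop responding (see the note above).
convai_post("/llm-models/deregister", {"model_group_name": "my-turbo"})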

4) List all models

Path: /llm-models/list
Body: (none)

cURL:

curl -X POST https://api.convai.com/llm-models/list \
  -H "Content-Type: application/json" \
  -H "CONVAI-API-KEY: YOUR_API_KEY"

Windows CMD:

curl -X POST "https://api.convai.com/llm-models/list"-H "Content-Type: application/json"-H "CONVAI-API-KEY: YOUR_API_KEY"

Success (200):

{
  "status": "success",
  "models": [
    {
      "model_group_name": "my-turbo",
      "model_name": "gpt-4o-mini",
      "display_name": "Turbo (Private)",
      "base_url": "https://api.openai.com/v1",
      "is_uncensored": false,
      "category": "Private",
      "created_at": "2025-07-08T09:41:38.123Z"
    }
  ],
  "count": 1,
  "transactionID": "2caa4b99-eda9-46e2-a9ee-b4f251afcb1f"
}
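
From Python, the listing can be printed with the convai_post sketch from earlier; the fields below are the ones shown in the sample response:

# /llm-models/list takes no body, so no payload is passed.
data = convai_post("/llm-models/list")
for model in data["models"]:
    print(model["model_group_name"], "->", model["display_name"], "(", model["model_name"], ")")
print("total private models:", data["count"])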

Switch your avatar to the private model (Playground)

  1. Open any character in Playground.
  2. Go to Core AI Settings → Model dropdown.
  3. Pick your display_name (e.g., Turbo v2).
  4. Save and test—your character now runs on your private LLM.

Error reference (all endpoints)

HTTP status                  Typical reason
400 Bad Request              Missing/malformed parameters; wrong types
401 Unauthorized             CONVAI-API-KEY header missing/invalid
404 Not Found                Model doesn't exist or isn't owned by you
409 Conflict                 Duplicate model_group_name or display_name
429 Too Many Requests        Rate limit exceeded
500 Internal Server Error    Unhandled exception/database failure

All errors follow:

{ "status": "error", "message": "Explanation", "transactionID": "…" }

Best practices (naming, security, safety)

  • Choose model_group_name carefully (immutable). Prefer a unique, org-scoped ID (e.g., llama3.1-70b-acme-prod), not a generic model name.

  • Use environment variables for secrets (CONVAI_API_KEY, PRIVATE_LLM_KEY); avoid hard-coding (see the sketch after this list).

  • Rotate the remote api_key periodically and update via /llm-models/update.

  • Set is_uncensored truthfully. This informs Convai’s safety posture; mislabeling may lead to policy misalignment.

  • Before /deregister, migrate characters to a different model.

  • Validate OpenAI-compatibility on your endpoint (paths, payloads, auth) before registering to avoid 4xx errors.

  • Log transactionID from responses to trace and debug support requests.
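
As an example of the secrets handling above, here is a sketch in which both keys come from environment variables and nothing sensitive is hard-coded; the model names and base URL are placeholders, not real services:

import os
import requests

def register_private_model():
    """Register a private model using secrets pulled from the environment."""
    payload = {
        "model_group_name": "llama3.1-70b-acme-prod",  # unique, org-scoped, immutable
        "model_name": "llama-3.1-70b-instruct",        # placeholder: whatever your endpoint expects
        "api_key": os.environ["PRIVATE_LLM_KEY"],      # rotate periodically, then re-send via /llm-models/update
        "is_uncensored": False,                        # describe your endpoint's real moderation posture
        "display_name": "Llama 70B (Acme Prod)",
        "base_url": "https://llm.acme.example.com/v1", # placeholder OpenAI-compatible base_url
    }
    resp = requests.post(
        "https://api.convai.com/llm-models/register",
        headers={
            "Content-Type": "application/json",
            "CONVAI-API-KEY": os.environ["CONVAI_API_KEY"],
        },
        json=payload,
        timeout=30,
    )
    data = resp.json()
    print("txn:", data.get("transactionID"))  # log for traceable support requests
    return data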

FAQ

Does this work with any LLM?
It works with endpoints that are OpenAI-compatible. Many hosted providers and OSS stacks support this. Confirm compatibility in your provider’s docs.

Can I have multiple private models?
Yes. Register as many as you need; use /llm-models/list to see them and pick the right one in Core AI Settings.

Will my private model appear in the UI?
Yes—after a successful register, refresh the Playground and check Core AI Settings → Model.

What happens if my remote endpoint is down?
Your characters using that model will fail to respond. Monitor your host and keep a fallback model ready.