
Quick Start

Welcome to ZenMux! This guide helps you get started quickly and shows three different ways to call the ZenMux API.

💡 Get started in 4 steps

You can start using ZenMux in just four simple steps:

  1. Sign in to ZenMux: Visit the ZenMux login page and sign in using any of the following:

    • Email
    • GitHub account
    • Google account
  2. Get an API key: After signing in, go to your User Console > API Keys page and create a new API key.

  3. Choose an integration method: We recommend using the OpenAI SDK or Anthropic SDK in compatibility mode, or you can call the ZenMux API directly.

  4. Make your first request: Copy one of the code examples below, replace the placeholder with your API key, and run it. (A short sketch of loading the key from an environment variable follows this list.)
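
The code examples in this guide hardcode a placeholder API key for readability. A minimal sketch of reading the key from the ZENMUX_API_KEY environment variable instead (the same variable name the curl example at the end of this guide uses), shown here with the OpenAI client:

python
import os

from openai import OpenAI

# Read the key from the environment (set it first, e.g. `export ZENMUX_API_KEY=...`)
# so it never needs to be committed to source code.
api_key = os.environ["ZENMUX_API_KEY"]

client = OpenAI(
    base_url="https://zenmux.ai/api/v1",
    api_key=api_key,
)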


How to Obtain Model Slugs

Each model on the ZenMux platform has a unique slug in the form "provider/model-name" (for example, openai/gpt-5). You can find a model's slug on the Models page or on its model detail page.
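
If you prefer to look slugs up programmatically, ZenMux's OpenAI-compatible endpoint (described in Method 1 below) should also serve the standard model-listing route; this is a sketch under that assumption rather than a documented guarantee:

python
from openai import OpenAI

client = OpenAI(
    base_url="https://zenmux.ai/api/v1",
    api_key="<your ZENMUX_API_KEY>",
)

# Assumes ZenMux implements the OpenAI-compatible GET /models route;
# each entry's id is then a usable "provider/model-name" slug.
for model in client.models.list():
    print(model.id)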

Method 1: Using the OpenAI SDK

Compatibility

ZenMux endpoints are fully compatible with the OpenAI API. You only need to change two parameters, the base URL and the API key, to switch seamlessly.

Code Examples

python
from openai import OpenAI

# 1. Initialize the OpenAI client
client = OpenAI(
    # 2. Point the base URL to the ZenMux endpoint
    base_url="https://zenmux.ai/api/v1", 
    # 3. Replace with the API key from your ZenMux console
    api_key="<your ZENMUX_API_KEY>", 
)

# 4. Make the request
completion = client.chat.completions.create(
    # 5. Specify the model you want to use in the format "provider/model-name"
    model="openai/gpt-5", 
    messages=[
        {
            "role": "user",
            "content": "What is the meaning of life?"
        }
    ]
)

print(completion.choices[0].message.content)
ts
import OpenAI from "openai";

// 1. Initialize the OpenAI client
const openai = new OpenAI({
  // 2. Point the base URL to the ZenMux endpoint
  baseURL: "https://zenmux.ai/api/v1", 
  // 3. Replace with the API key from your ZenMux console
  apiKey: "<your ZENMUX_API_KEY>", 
});

async function main() {
  // 4. Make the request
  const completion = await openai.chat.completions.create({
    // 5. Specify the model you want to use in the format "provider/model-name"
    model: "openai/gpt-5", 
    messages: [
      {
        role: "user",
        content: "What is the meaning of life?", 
      },
    ],
  });

  console.log(completion.choices[0].message.content);
}

main();

Method 2: Using the Anthropic SDK

Compatibility

ZenMux fully supports the Anthropic API protocol and integrates seamlessly with tools like Claude Code and Cursor. Again, you only need to change two parameters: the base URL and the API key.

Note: For the Anthropic protocol, use base_url="https://zenmux.ai/api/anthropic".

Anthropic Protocol Model Support

Models compatible with the Anthropic protocol are being adapted in batches. You can view the currently supported models by filtering for Anthropic API Compatible on the official model list, or check the model detail page.

Code Examples

python
from anthropic import Anthropic

# 1. Initialize the Anthropic client
client = Anthropic(
    # 2. Point the base URL to the ZenMux endpoint
    base_url="https://zenmux.ai/api/anthropic", 
    # 3. Replace with the API key from your ZenMux console
    api_key="<your ZENMUX_API_KEY>", 
)

# 4. Make the request
message = client.messages.create(
    # 5. Specify the model you want to use in the format "provider/model-name"
    model="anthropic/claude-sonnet-4.5", 
    max_tokens=1024,
    messages=[
        {
            "role": "user",
            "content": "What is the meaning of life?"
        }
    ]
)

print(message.content[0].text)
ts
import Anthropic from "@anthropic-ai/sdk";

// 1. Initialize the Anthropic client
const client = new Anthropic({
  // 2. Point the base URL to the ZenMux endpoint
  baseURL: "https://zenmux.ai/api/anthropic", 
  // 3. Replace with the API key from your ZenMux console
  apiKey: "<your ZENMUX_API_KEY>", 
});

async function main() {
  // 4. Make the request
  const message = await client.messages.create({
    // 5. Specify the model you want to use in the format "provider/model-name"
    model: "anthropic/claude-sonnet-4.5", 
    max_tokens: 1024,
    messages: [
      {
        role: "user",
        content: "What is the meaning of life?", 
      },
    ],
  });

  console.log(message.content[0].text);
}

main();

Method 3: Calling the ZenMux API Directly

python
import httpx

# Prepare request data
api_key = "<your ZENMUX_API_KEY>"
headers = {
    "Authorization": f"Bearer {api_key}", 
}
payload = {
    "model": "openai/gpt-5", 
    "messages": [
        {
            "role": "user",
            "content": "What is the meaning of life?"
        }
    ]
}

# Send a POST request
response = httpx.post(
    "https://zenmux.ai/api/v1/chat/completions", 
    headers=headers,
    json=payload,
    timeout=httpx.Timeout(60.0)
)

# Optionally check whether the request succeeded
response.raise_for_status()

# Print the JSON response returned by the server
print(response.json())
ts
fetch("https://zenmux.ai/api/v1/chat/completions", {
  method: "POST",
  headers: {
    Authorization: "Bearer <your ZENMUX_API_KEY>", 
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    model: "openai/gpt-5", 
    messages: [
      {
        role: "user",
        content: "What is the meaning of life?", 
      },
    ],
  }),
})
  .then((response) => response.json())
  .then((data) => console.log(data))
  .catch((error) => console.error("Error:", error));
bash
curl https://zenmux.ai/api/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $ZENMUX_API_KEY" \
  -d '{
    "model": "openai/gpt-5",
    "messages": [
      {
        "role": "user",
        "content": "What is the meaning of life?"
      }
    ]
  }'
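
Because the endpoint returns the OpenAI chat completions response format, you can extract the assistant's reply from the raw JSON the same way the SDK examples do. A minimal sketch, continuing from the httpx example above:

python
data = response.json()

# The reply text sits under choices[0].message.content, matching the
# completion.choices[0].message.content access in the SDK examples.
print(data["choices"][0]["message"]["content"])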

Advanced Usage

For more details on advanced usage, refer to the Advanced Usage section.

Contact Us

If you encounter any issues or have suggestions or feedback, feel free to contact us.

For more contact options and details, please visit our Contact Us page.