Quick Start
ZenMux provides a unified API that is compatible with the OpenAI API, so existing OpenAI-based code can switch over with minimal changes.
💡 Get Started in Three Steps
Just three simple steps to start using ZenMux:
- Get an API Key: Go to your User Console > API Keys page and create a new API Key.
- Choose an integration method: We recommend using the OpenAI SDK in compatibility mode, or you can call the ZenMux API directly.
- Make your first request: Copy the code sample below, replace your API Key, and then run it.
Method 1: Use the OpenAI SDK (Recommended)
Compatibility Notes
ZenMux’s API endpoints are fully compatible with the OpenAI API. You can switch seamlessly by changing just two parameters: the base URL and the API key.
Code Examples
```python
from openai import OpenAI

# 1. Initialize the OpenAI client
client = OpenAI(
    # 2. Point the base URL to the ZenMux endpoint
    base_url="https://zenmux.ai/api/v1",
    # 3. Replace with the API Key from your ZenMux user console
    api_key="<your ZENMUX_API_KEY>",
)

# 4. Make a request
completion = client.chat.completions.create(
    # 5. Specify the model to use in the format "provider/model-name"
    model="openai/gpt-5",
    messages=[
        {
            "role": "user",
            "content": "What is the meaning of life?"
        }
    ]
)

print(completion.choices[0].message.content)
```
```ts
import OpenAI from "openai";

// 1. Initialize the OpenAI client
const openai = new OpenAI({
  // 2. Point the base URL to the ZenMux endpoint
  baseURL: "https://zenmux.ai/api/v1",
  // 3. Replace with the API Key from your ZenMux user console
  apiKey: "<your ZENMUX_API_KEY>",
});

async function main() {
  // 4. Make a request
  const completion = await openai.chat.completions.create({
    // 5. Specify the model to use in the format "provider/model-name"
    model: "openai/gpt-5",
    messages: [
      {
        role: "user",
        content: "What is the meaning of life?",
      },
    ],
  });

  console.log(completion.choices[0].message);
}

main();
```
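If you prefer not to hard-code the key, you can read it from an environment variable instead. Below is a minimal sketch of the Python client initialization, assuming you have exported the key as ZENMUX_API_KEY (the same variable name the curl example in Method 2 uses):

```python
import os

from openai import OpenAI

# Assumes the key has been exported beforehand, e.g.:
#   export ZENMUX_API_KEY="<your ZENMUX_API_KEY>"
client = OpenAI(
    base_url="https://zenmux.ai/api/v1",
    api_key=os.environ["ZENMUX_API_KEY"],
)
```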
Method 2: Call the ZenMux API Directly
```python
import httpx

# Prepare request data
api_key = "<your ZENMUX_API_KEY>"

headers = {
    "Authorization": f"Bearer {api_key}",
}

payload = {
    "model": "openai/gpt-5",
    "messages": [
        {
            "role": "user",
            "content": "What is the meaning of life?"
        }
    ]
}

# Send the POST request
response = httpx.post(
    "https://zenmux.ai/api/v1/chat/completions",
    headers=headers,
    json=payload,
    timeout=httpx.Timeout(60.0)
)

# Check whether the request succeeded (optional)
response.raise_for_status()

# Print the JSON response returned by the server
print(response.json())
```
```typescript
fetch("https://zenmux.ai/api/v1/chat/completions", {
  method: "POST",
  headers: {
    Authorization: "Bearer <your ZENMUX_API_KEY>",
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    model: "openai/gpt-5",
    messages: [
      {
        role: "user",
        content: "What is the meaning of life?",
      },
    ],
  }),
})
  .then((response) => response.json())
  .then((data) => console.log(data))
  .catch((error) => console.error("Error:", error));
```
```bash
curl https://zenmux.ai/api/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $ZENMUX_API_KEY" \
  -d '{
    "model": "openai/gpt-5",
    "messages": [
      {
        "role": "user",
        "content": "What is the meaning of life?"
      }
    ]
  }'
```
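The direct-call examples above print the full JSON response. Because the response follows the OpenAI chat completion schema (as the compatibility note implies), the assistant's reply itself sits at choices[0].message.content. Here is a brief Python sketch that repeats the httpx request and extracts just the reply text:

```python
import httpx

# Same request as the httpx example above.
response = httpx.post(
    "https://zenmux.ai/api/v1/chat/completions",
    headers={"Authorization": "Bearer <your ZENMUX_API_KEY>"},
    json={
        "model": "openai/gpt-5",
        "messages": [{"role": "user", "content": "What is the meaning of life?"}],
    },
    timeout=httpx.Timeout(60.0),
)
response.raise_for_status()

# The body follows the OpenAI chat completion schema, so the reply text
# is nested under choices[0].message.content.
data = response.json()
print(data["choices"][0]["message"]["content"])
```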
Model Selection
All models supported by ZenMux can be found in the official model list.
You can set the value of the model parameter by copying the exact model slug from the model list, for example openai/gpt-5.
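If you want to discover model slugs programmatically, one option is to reuse the SDK client from Method 1. This sketch assumes ZenMux's OpenAI compatibility also covers the model listing endpoint, which this guide does not explicitly confirm:

```python
from openai import OpenAI

client = OpenAI(
    base_url="https://zenmux.ai/api/v1",
    api_key="<your ZENMUX_API_KEY>",
)

# Print the model slugs the endpoint reports.
# Relies on the OpenAI-compatible model listing route being available,
# which is an assumption rather than something this guide states.
for model in client.models.list():
    print(model.id)
```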
Advanced Usage
For more details on advanced usage, see the Advanced Calls section.