Direct API Call Tutorial

Call the XiDao API directly with HTTP requests; curl, Python, JavaScript, and other clients are all supported.

Overview

The XiDao API is fully compatible with the OpenAI API format, so you can call it directly from any HTTP client. Here is the basic information:

Base URL
# China (domestic) endpoint
https://api.xidao.online/v1

# Global accelerated endpoint
https://global.xidao.online/v1
🔑
Authentication: add the following HTTP header to every request:
Authorization: Bearer sk-YOUR_API_KEY

Supported API Endpoints

| Endpoint | Method | Description |
| --- | --- | --- |
| /v1/chat/completions | POST | Chat Completions (most common) |
| /v1/models | GET | Get available model list |
| /v1/embeddings | POST | Text vectorization |
| /v1/images/generations | POST | Image generation |
| /v1/audio/speech | POST | Text-to-Speech (TTS) |
| /v1/audio/transcriptions | POST | Speech-to-Text (STT) |
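Since every endpoint shares the same base URL and authentication header, request construction can be factored out. A minimal sketch (the `build_request` helper below is illustrative, not part of any SDK):

```python
BASE_URL = "https://api.xidao.online/v1"

def build_request(endpoint: str, api_key: str) -> dict:
    """Return the full URL and headers shared by all XiDao endpoints."""
    return {
        "url": f"{BASE_URL}{endpoint}",
        "headers": {
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
    }

req = build_request("/chat/completions", "sk-YOUR_API_KEY")
print(req["url"])  # https://api.xidao.online/v1/chat/completions
```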

cURL Examples

Basic Chat

bash
curl https://api.xidao.online/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer sk-YOUR_API_KEY" \
  -d '{
    "model": "claude-sonnet-4-5-20250929",
    "messages": [
      {"role": "system", "content": "You are a helpful assistant."},
      {"role": "user", "content": "Hello, please introduce yourself."}
    ],
    "temperature": 0.7,
    "max_tokens": 1024
  }'
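The response body follows the OpenAI chat-completion format: the reply text lives under `choices[0].message.content`, and token counts under `usage`. A minimal parsing sketch over an abridged, illustrative response (real responses contain more fields):

```python
import json

# Abridged, illustrative response body.
raw = '''{
  "id": "chatcmpl-123",
  "choices": [
    {"index": 0,
     "message": {"role": "assistant", "content": "Hello! I am an AI assistant."},
     "finish_reason": "stop"}
  ],
  "usage": {"prompt_tokens": 25, "completion_tokens": 12, "total_tokens": 37}
}'''

data = json.loads(raw)
print(data["choices"][0]["message"]["content"])  # the assistant's reply
print(data["usage"]["total_tokens"])             # billed token count
```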

Calling a GPT Model

bash
curl https://api.xidao.online/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer sk-YOUR_API_KEY" \
  -d '{
    "model": "gpt-5",
    "messages": [
      {"role": "user", "content": "Write a quicksort in Python"}
    ]
  }'

Get Model List

bash
curl https://api.xidao.online/v1/models \
  -H "Authorization: Bearer sk-YOUR_API_KEY"

Python Examples

Using openai Library (Recommended)

python
from openai import OpenAI

client = OpenAI(
    api_key="sk-YOUR_API_KEY",
    base_url="https://api.xidao.online/v1"
)

# Basic chat
response = client.chat.completions.create(
    model="claude-sonnet-4-5-20250929",
    messages=[
        {"role": "system", "content": "You are a helpful programming assistant"},
        {"role": "user", "content": "Explain what recursion is"}
    ]
)

print(response.choices[0].message.content)

Streaming Output

python
from openai import OpenAI

client = OpenAI(
    api_key="sk-YOUR_API_KEY",
    base_url="https://api.xidao.online/v1"
)

stream = client.chat.completions.create(
    model="gpt-5",
    messages=[{"role": "user", "content": "Write a poem about programming"}],
    stream=True
)

for chunk in stream:
    if chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)

print()

Using requests Library (Native HTTP)

python
import requests

response = requests.post(
    "https://api.xidao.online/v1/chat/completions",
    headers={
        "Content-Type": "application/json",
        "Authorization": "Bearer sk-YOUR_API_KEY"
    },
    json={
        "model": "gemini-2.5-flash",
        "messages": [{"role": "user", "content": "Hello!"}]
    }
)

data = response.json()
print(data["choices"][0]["message"]["content"])

Node.js / JavaScript Examples

Using openai Library

javascript
import OpenAI from 'openai';

const client = new OpenAI({
    apiKey: 'sk-YOUR_API_KEY',
    baseURL: 'https://api.xidao.online/v1'
});

const response = await client.chat.completions.create({
    model: 'claude-sonnet-4-5-20250929',
    messages: [
        { role: 'system', content: 'You are a helpful assistant' },
        { role: 'user', content: 'What is a REST API?' }
    ]
});

console.log(response.choices[0].message.content);

Using fetch (No Dependencies)

javascript
const response = await fetch('https://api.xidao.online/v1/chat/completions', {
    method: 'POST',
    headers: {
        'Content-Type': 'application/json',
        'Authorization': 'Bearer sk-YOUR_API_KEY'
    },
    body: JSON.stringify({
        model: 'gpt-5',
        messages: [{ role: 'user', content: 'Hello!' }]
    })
});

const data = await response.json();
console.log(data.choices[0].message.content);

Streaming Output (SSE)

Streaming output lets the model's reply appear token by token as it is generated, improving perceived responsiveness.

Python Streaming Example

python
from openai import OpenAI

client = OpenAI(
    api_key="sk-YOUR_API_KEY",
    base_url="https://api.xidao.online/v1"
)

stream = client.chat.completions.create(
    model="claude-sonnet-4-5-20250929",
    messages=[{"role": "user", "content": "Explain machine learning in detail"}],
    stream=True
)

for chunk in stream:
    if chunk.choices[0].delta.content is not None:
        print(chunk.choices[0].delta.content, end="", flush=True)
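If you cannot use the SDK, the stream itself is standard Server-Sent Events: each event is a line starting with `data: ` carrying one JSON chunk, and the stream ends with `data: [DONE]`. A hedged sketch of extracting the incremental text from such lines (the sample chunks below are abridged; real payloads contain more fields):

```python
import json

def delta_text(line: str) -> str:
    """Extract the incremental text from one SSE 'data:' line, or '' if none."""
    if not line.startswith("data: "):
        return ""
    payload = line[len("data: "):].strip()
    if payload == "[DONE]":
        return ""
    chunk = json.loads(payload)
    return chunk["choices"][0]["delta"].get("content") or ""

# Abridged sample events, for illustration only.
sample = [
    'data: {"choices": [{"delta": {"content": "Hel"}}]}',
    'data: {"choices": [{"delta": {"content": "lo"}}]}',
    "data: [DONE]",
]
print("".join(delta_text(line) for line in sample))  # Hello
```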

Node.js Streaming Example

javascript
import OpenAI from 'openai';

const client = new OpenAI({
    apiKey: 'sk-YOUR_API_KEY',
    baseURL: 'https://api.xidao.online/v1'
});

const stream = await client.chat.completions.create({
    model: 'gpt-5',
    messages: [{ role: 'user', content: 'Tell me a story' }],
    stream: true
});

for await (const chunk of stream) {
    process.stdout.write(chunk.choices[0]?.delta?.content || '');
}

Embeddings (Text Vectorization)

Convert text to vector representation for semantic search, RAG, clustering, etc.

python
from openai import OpenAI

client = OpenAI(
    api_key="sk-YOUR_API_KEY",
    base_url="https://api.xidao.online/v1"
)

response = client.embeddings.create(
    model="text-embedding-3-small",
    input="Artificial intelligence is changing the world"
)

vector = response.data[0].embedding
print(f"Vector dimension: {len(vector)}")
print(f"First 5 values: {vector[:5]}")
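Embedding vectors are typically compared with cosine similarity: values closer to 1 mean the texts are more semantically similar. A minimal sketch using only the standard library (toy 3-dimensional vectors; real embeddings have hundreds of dimensions):

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine of the angle between two vectors: 1.0 = identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy vectors for illustration.
v1 = [0.1, 0.3, 0.5]
v2 = [0.1, 0.29, 0.52]
print(round(cosine_similarity(v1, v2), 4))
```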

Image Generation

Generate images through the API using DALL-E 3 or Midjourney.

python
from openai import OpenAI

client = OpenAI(
    api_key="sk-YOUR_API_KEY",
    base_url="https://api.xidao.online/v1"
)

response = client.images.generate(
    model="dall-e-3",
    prompt="A cat in a spacesuit watching Earth from the Moon",
    size="1024x1024",
    quality="standard"
)

image_url = response.data[0].url
print(f"Image URL: {image_url}")

Error Handling

python
from openai import OpenAI, AuthenticationError, RateLimitError, APIError

client = OpenAI(
    api_key="sk-YOUR_API_KEY",
    base_url="https://api.xidao.online/v1"
)

try:
    response = client.chat.completions.create(
        model="claude-sonnet-4-5-20250929",
        messages=[{"role": "user", "content": "Hello"}]
    )
    print(response.choices[0].message.content)
    
except AuthenticationError:
    print("Error: invalid API key")
except RateLimitError:
    print("Error: too many requests, please retry later")
except APIError as e:
    print(f"API error: {e.message}")

Common Error Codes

| Status Code | Meaning | Solution |
| --- | --- | --- |
| 401 | Authentication failed | Check that the API key is correct |
| 402 | Insufficient balance | Top up and retry |
| 404 | Model not found | Check that the model ID is correct |
| 429 | Rate limit exceeded | Reduce request frequency |
| 500 | Server error | Retry later or contact support |
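For 429 (and transient 5xx) responses, the usual remedy is retrying with exponential backoff. A hedged sketch of the pattern; the `call_api` function below is an illustrative stand-in for your actual request, not a library API:

```python
import time

def with_retries(call, max_attempts=4, base_delay=1.0):
    """Run `call`, retrying on failure with a delay that doubles each attempt."""
    for attempt in range(max_attempts):
        try:
            return call()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of attempts: re-raise the last error
            time.sleep(base_delay * (2 ** attempt))

# Illustrative stand-in that fails twice, then succeeds.
attempts = {"n": 0}
def call_api():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RuntimeError("429: rate limited")
    return "ok"

print(with_retries(call_api, base_delay=0.01))  # ok
```

In production you would also honor the `Retry-After` header when the server sends one, and retry only on retryable errors rather than every exception.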