Language Models
Now that your OpenAI client is configured to point at TopRouter, you can start running inference with one of our open-source models.
For example, you can use one of our chat models, such as tp.claude-sonnet-4-5-20250929:
Python
import os

import openai

client = openai.OpenAI(
    api_key=os.environ.get("TOPROUTER_API_KEY"),
    base_url="https://api.toprouter.ai/api/openai/v1",
)

response = client.chat.completions.create(
    model="tp.claude-sonnet-4-5-20250929",
    messages=[
        {"role": "system", "content": "You are a travel agent. Be descriptive and helpful."},
        {"role": "user", "content": "Tell me the top 3 things to do in San Francisco"},
    ],
)

print(response.choices[0].message.content)
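The Chat Completions endpoint is stateless: to hold a multi-turn conversation, append the assistant's reply and your next user turn to the `messages` list before making the next call. A minimal sketch of that bookkeeping (the reply string here is a hypothetical stand-in for `response.choices[0].message.content`):

```python
messages = [
    {"role": "system", "content": "You are a travel agent. Be descriptive and helpful."},
    {"role": "user", "content": "Tell me the top 3 things to do in San Francisco"},
]

# After the first call returns, record the assistant turn so the model
# sees the full history on the next request.
assistant_reply = "1. Golden Gate Bridge ..."  # stand-in for response.choices[0].message.content
messages.append({"role": "assistant", "content": assistant_reply})

# Then add the follow-up question and call client.chat.completions.create again
# with the extended list.
messages.append({"role": "user", "content": "Which of those works best on a rainy day?"})
```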
TypeScript
import OpenAI from 'openai';

const client = new OpenAI({
  apiKey: process.env.TOPROUTER_API_KEY,
  baseURL: 'https://api.toprouter.ai/api/openai/v1',
});

const response = await client.chat.completions.create({
  model: 'tp.claude-sonnet-4-5-20250929',
  messages: [
    { role: 'user', content: 'What are some fun things to do in New York?' },
  ],
});

console.log(response.choices[0].message.content);
Streaming Responses
You can also stream your responses using OpenAI's streaming capabilities:
Python
import os

import openai

client = openai.OpenAI(
    api_key=os.environ.get("TOPROUTER_API_KEY"),
    base_url="https://api.toprouter.ai/api/openai/v1",
)

stream = client.chat.completions.create(
    model="tp.claude-sonnet-4-5-20250929",
    messages=[
        {"role": "system", "content": "You are a travel agent. Be descriptive and helpful."},
        {"role": "user", "content": "Tell me everything about San Francisco"},
    ],
    stream=True,
)

for chunk in stream:
    print(chunk.choices[0].delta.content or "", end="", flush=True)
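If you also need the complete text once streaming finishes, you can accumulate the deltas as they arrive instead of only printing them. A minimal sketch, assuming the same chunk shape as above (`collect` is a hypothetical helper, not part of the SDK):

```python
def collect(stream):
    """Join the incremental text deltas from a chat-completions stream."""
    parts = []
    for chunk in stream:
        # delta.content can be None on role-only or final chunks.
        delta = chunk.choices[0].delta.content
        if delta:
            parts.append(delta)
    return "".join(parts)
```

You would call it as `full_text = collect(stream)` in place of the `for` loop above; note that a stream can only be consumed once.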
TypeScript
import OpenAI from 'openai';

const client = new OpenAI({
  apiKey: process.env.TOPROUTER_API_KEY,
  baseURL: 'https://api.toprouter.ai/api/openai/v1',
});

async function run() {
  const stream = await client.chat.completions.create({
    model: 'tp.claude-sonnet-4-5-20250929',
    messages: [
      { role: 'system', content: 'You are an AI assistant' },
      { role: 'user', content: 'Who won the 2020 World Series?' },
    ],
    stream: true,
  });

  for await (const chunk of stream) {
    // Use process.stdout.write instead of console.log to avoid extra newlines
    process.stdout.write(chunk.choices[0]?.delta?.content || '');
  }
}

run();