Initialize the LLM API proxy system

Features:
- Multi-provider support: configure multiple upstream LLM providers
- Priority scheduling: automatically select an available provider by priority
- OpenAI API compatibility: fully compatible with the OpenAI API format
- Failover: automatic switching to backup providers
- Streaming support: both streaming and non-streaming responses
- Model aliases: model alias mapping
- Health checks: automatic health checks and circuit breaking

Upstream configuration:
1. [High priority] Local Qwen: http://192.168.2.5:1234/v1 (qwen3.5-4b)
2. [Low priority] SiliconFlow: https://api.siliconflow.cn/v1 (DeepSeek-V3.2)

Supported models:
- auto: automatically select an available model
- qwen3.5-4b, qwen3.5, qwen
- deepseek-v3, deepseek-v3.2, deepseek

Port: 19007
# LLM API Proxy System

> An OpenAI API-compatible multi-provider proxy with automatic priority-based failover

## Features

### 🔄 Multi-Provider Support

- Configure multiple upstream LLM providers
- An available provider is selected automatically by priority
- Automatic failover to backup providers

### 📡 OpenAI API Compatibility

- Fully compatible with the OpenAI API format
- Supports the Chat Completions API
- Supports the Embeddings API
- Supports streaming and non-streaming responses

### 🎯 Smart Routing

- The `auto` model automatically selects an available provider
- Model alias mapping
- Automatic adaptation of request parameters

### 🛡️ High Availability

- Automatic health checks
- Error counting and circuit breaking
- Automatic retries

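The error-counting and circuit-breaking behavior can be sketched as a small per-provider state machine. This is a minimal illustration, not the project's actual implementation; the class name, threshold, and cooldown are assumptions.

```python
import time

class CircuitBreaker:
    """Minimal per-provider circuit breaker: trip after N consecutive
    failures, then refuse requests until a cooldown elapses."""

    def __init__(self, max_failures=3, cooldown=60.0):
        self.max_failures = max_failures
        self.cooldown = cooldown
        self.failures = 0
        self.tripped_at = None

    def record_success(self):
        # Any success resets the breaker.
        self.failures = 0
        self.tripped_at = None

    def record_failure(self):
        self.failures += 1
        if self.failures >= self.max_failures:
            self.tripped_at = time.monotonic()

    def available(self):
        if self.tripped_at is None:
            return True
        if time.monotonic() - self.tripped_at >= self.cooldown:
            # Cooldown over: allow a trial request through.
            self.tripped_at = None
            self.failures = 0
            return True
        return False
```

The router would consult `available()` before sending a request to a provider and call `record_success`/`record_failure` afterwards.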
## Quick Start

### Install dependencies

```bash
pip install -r requirements.txt
```

### Start the service

```bash
python app.py
```

### Access URL

```
http://localhost:19007
```

## API Usage

### Chat Completions

```bash
curl http://localhost:19007/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer any-key" \
  -d '{
    "model": "auto",
    "messages": [{"role": "user", "content": "Hello!"}],
    "stream": false
  }'
```

### List Models

```bash
curl http://localhost:19007/v1/models
```

### Streaming Responses

```bash
curl http://localhost:19007/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "qwen3.5-4b",
    "messages": [{"role": "user", "content": "Hello!"}],
    "stream": true
  }'
```

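With `stream: true`, an OpenAI-compatible endpoint emits Server-Sent Events: each chunk is a `data: {...}` line and the stream ends with `data: [DONE]`. A minimal client-side parser might look like the following; the event framing assumed here is the standard OpenAI format, not something verified against this proxy's code.

```python
import json

def parse_sse_content(lines):
    """Yield the content deltas from OpenAI-style SSE lines."""
    for line in lines:
        line = line.strip()
        if not line.startswith("data: "):
            continue
        payload = line[len("data: "):]
        if payload == "[DONE]":
            break  # end-of-stream sentinel
        chunk = json.loads(payload)
        delta = chunk["choices"][0]["delta"]
        if delta.get("content"):
            yield delta["content"]

# Synthetic chunks standing in for a real streamed response:
sample = [
    'data: {"choices": [{"delta": {"content": "Hel"}}]}',
    'data: {"choices": [{"delta": {"content": "lo!"}}]}',
    'data: [DONE]',
]
print("".join(parse_sse_content(sample)))  # Hello!
```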
## Configuration

Edit `config/settings.py`:

```python
UPSTREAM_PROVIDERS = [
    {
        "name": "provider-name",
        "priority": 1,  # priority: smaller numbers rank higher
        "base_url": "https://api.example.com/v1",
        "api_key": "sk-xxx",
        "models": ["model-1", "model-2"],
        "default_model": "model-1",
        "timeout": 120,
        "enabled": True,
    },
]
```

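Given a list shaped like the one above, provider selection reduces to filtering enabled entries and taking the smallest `priority`. This is a sketch with hypothetical names; the actual scheduler also tracks health state.

```python
def pick_provider(providers, exclude=()):
    """Return the enabled provider with the lowest priority number,
    skipping any names listed in `exclude`."""
    candidates = [
        p for p in providers
        if p["enabled"] and p["name"] not in exclude
    ]
    if not candidates:
        return None
    return min(candidates, key=lambda p: p["priority"])

providers = [
    {"name": "local-qwen", "priority": 1, "enabled": True},
    {"name": "siliconflow", "priority": 2, "enabled": True},
]
print(pick_provider(providers)["name"])                          # local-qwen
print(pick_provider(providers, exclude={"local-qwen"})["name"])  # siliconflow
```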
### Model Aliases

```python
MODEL_ALIASES = {
    "auto": "auto",           # automatic selection
    "gpt-4": "actual-model",  # alias mapping
}
```

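Resolving a requested model name against this table is a dictionary lookup with a pass-through fallback for unknown names. The function below is illustrative only.

```python
MODEL_ALIASES = {
    "auto": "auto",
    "gpt-4": "actual-model",
}

def resolve_model(name):
    """Map a requested model name through the alias table;
    unknown names pass through unchanged."""
    return MODEL_ALIASES.get(name, name)

print(resolve_model("gpt-4"))       # actual-model
print(resolve_model("qwen3.5-4b"))  # qwen3.5-4b
```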
## Endpoints

| Endpoint | Method | Description |
|----------|--------|-------------|
| `/` | GET | Service info |
| `/v1/chat/completions` | POST | Chat completions |
| `/v1/embeddings` | POST | Text embeddings |
| `/v1/models` | GET | Model list |
| `/health` | GET | Health check |
| `/status` | GET | Detailed status |

## Usage Examples

### Python (OpenAI SDK)

```python
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:19007/v1",
    api_key="any-key"
)

response = client.chat.completions.create(
    model="auto",
    messages=[
        {"role": "user", "content": "Hello!"}
    ]
)

print(response.choices[0].message.content)
```

### Streaming Responses

```python
stream = client.chat.completions.create(
    model="qwen3.5-4b",
    messages=[{"role": "user", "content": "Tell me a joke"}],
    stream=True
)

for chunk in stream:
    if chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="")
```

## Priority Mechanism

When `model="auto"` is used:

1. Providers are tried in configured priority order
2. Unavailable providers are skipped
3. A failed request automatically falls through to the next provider
4. A provider that fails 3 times in a row is temporarily marked unavailable

## Monitoring

### Health Check

```bash
curl http://localhost:19007/health
```

### Detailed Status

```bash
curl http://localhost:19007/status
```

## Project Structure

```
llm-proxy/
├── app.py              # main application
├── requirements.txt    # dependencies
├── config/
│   └── settings.py     # configuration
├── logs/               # log directory
└── README.md
```

## Version History

### v0.1.0 (2026-04-08)

- Initial release
- Multi-provider support
- OpenAI API compatibility
- Automatic priority-based failover
- Streaming response support

## License

MIT