# CloseAI Integration Guide
> **Document version:** v1.0
> **Created:** 2025-11-09
> **Purpose:** Access OpenAI GPT-5 and Claude-4.5 through the CloseAI proxy platform
> **Use cases:** Dual-model AI literature screening, high-quality text generation
---
## 📋 About CloseAI
### What is CloseAI
CloseAI is an **API proxy platform** that provides users in mainland China with stable access to the OpenAI and Claude APIs.
**Key advantages:**
- ✅ Direct connectivity from mainland China, no VPN required
- ✅ A single API key for both OpenAI and Claude
- ✅ Compatible with the standard OpenAI SDK interface
- ✅ Supports the latest models (GPT-5, Claude-4.5)
**Website:** https://platform.openai-proxy.org
---
## 🔧 Configuration
### Environment variables
```env
# CloseAI unified API key
CLOSEAI_API_KEY=sk-cu0iepbXYGGx2jc7BqP6ogtSWmP6fk918qV3RUdtGC3Edlpo
# OpenAI endpoint
CLOSEAI_OPENAI_BASE_URL=https://api.openai-proxy.org/v1
# Claude endpoint
CLOSEAI_CLAUDE_BASE_URL=https://api.openai-proxy.org/anthropic
```
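The service classes below import these values through a `config` module (resolving to `backend/src/common/config/env.ts` in this repository layout, which is not shown in this guide). A minimal sketch of such a loader, assuming the property names used later in this document; the fail-fast check and default URLs are illustrative choices, not the project's actual implementation:

```typescript
// Hypothetical sketch of config/env.ts: maps the environment variables
// above onto the `config` object imported by the service classes.
export interface AppConfig {
  closeaiApiKey: string;
  closeaiOpenaiBaseUrl: string;
  closeaiClaudeBaseUrl: string;
}

export function loadConfig(env: Record<string, string | undefined>): AppConfig {
  const apiKey = env.CLOSEAI_API_KEY;
  if (!apiKey) {
    // Fail fast at startup instead of sending unauthenticated requests later
    throw new Error('CLOSEAI_API_KEY is not set');
  }
  return {
    closeaiApiKey: apiKey,
    closeaiOpenaiBaseUrl: env.CLOSEAI_OPENAI_BASE_URL ?? 'https://api.openai-proxy.org/v1',
    closeaiClaudeBaseUrl: env.CLOSEAI_CLAUDE_BASE_URL ?? 'https://api.openai-proxy.org/anthropic',
  };
}
```

A real module would typically call `loadConfig(process.env)` once at import time and export the result.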
### Supported models
| Model | Model ID | Notes | Best for |
|------|---------|------|---------|
| **GPT-5-Pro** | `gpt-5-pro` | Latest GPT-5 ⭐ | Precise literature screening, complex reasoning |
| GPT-4-Turbo | `gpt-4-turbo-preview` | High-performance GPT-4 | Quality-critical tasks |
| GPT-3.5-Turbo | `gpt-3.5-turbo` | Fast and economical | Simple tasks, cost optimization |
| **Claude-4.5-Sonnet** | `claude-sonnet-4-5-20250929` | Latest Claude ⭐ | Third-party arbitration, structured output |
| Claude-3.5-Sonnet | `claude-3-5-sonnet-20241022` | Stable Claude-3.5 | High-quality text generation |
---
## 💻 Code Integration
### 1. Install dependencies
```bash
npm install openai
```
### 2. Create the LLM service class
**File:** `backend/src/common/llm/closeai.service.ts`
```typescript
import OpenAI from 'openai';
import { config } from '../../config/env';

export class CloseAIService {
  private openaiClient: OpenAI;
  private claudeClient: OpenAI;

  constructor() {
    // OpenAI client, routed through CloseAI
    this.openaiClient = new OpenAI({
      apiKey: config.closeaiApiKey,
      baseURL: config.closeaiOpenaiBaseUrl,
    });
    // Claude client, routed through CloseAI
    this.claudeClient = new OpenAI({
      apiKey: config.closeaiApiKey,
      baseURL: config.closeaiClaudeBaseUrl,
    });
  }

  /**
   * Call GPT-5-Pro
   */
  async chatWithGPT5(prompt: string, systemPrompt?: string) {
    const messages: any[] = [];
    if (systemPrompt) {
      messages.push({ role: 'system', content: systemPrompt });
    }
    messages.push({ role: 'user', content: prompt });

    const response = await this.openaiClient.chat.completions.create({
      model: 'gpt-5-pro',
      messages,
      temperature: 0.3,
      max_tokens: 2000,
    });

    return {
      content: response.choices[0].message.content,
      usage: response.usage,
      model: 'gpt-5-pro',
    };
  }

  /**
   * Call Claude-4.5-Sonnet
   */
  async chatWithClaude(prompt: string, systemPrompt?: string) {
    const messages: any[] = [];
    if (systemPrompt) {
      messages.push({ role: 'system', content: systemPrompt });
    }
    messages.push({ role: 'user', content: prompt });

    const response = await this.claudeClient.chat.completions.create({
      model: 'claude-sonnet-4-5-20250929',
      messages,
      temperature: 0.3,
      max_tokens: 2000,
    });

    return {
      content: response.choices[0].message.content,
      usage: response.usage,
      model: 'claude-sonnet-4-5-20250929',
    };
  }

  /**
   * Stream a GPT-5 response
   */
  async *streamGPT5(prompt: string, systemPrompt?: string) {
    const messages: any[] = [];
    if (systemPrompt) {
      messages.push({ role: 'system', content: systemPrompt });
    }
    messages.push({ role: 'user', content: prompt });

    const stream = await this.openaiClient.chat.completions.create({
      model: 'gpt-5-pro',
      messages,
      temperature: 0.3,
      max_tokens: 2000,
      stream: true,
    });

    for await (const chunk of stream) {
      const content = chunk.choices[0]?.delta?.content || '';
      if (content) {
        yield content;
      }
    }
  }
}
```
### 3. Unified LLM service (four models)
**File:** `backend/src/common/llm/llm.service.ts`
```typescript
import OpenAI from 'openai';
import { config } from '../../config/env';

export type LLMProvider = 'deepseek' | 'gpt5' | 'claude' | 'qwen';

export class UnifiedLLMService {
  private deepseek: OpenAI;
  private gpt5: OpenAI;
  private claude: OpenAI;
  private qwen: OpenAI;

  constructor() {
    // DeepSeek (direct connection)
    this.deepseek = new OpenAI({
      apiKey: config.deepseekApiKey,
      baseURL: config.deepseekBaseUrl,
    });
    // GPT-5 (via CloseAI)
    this.gpt5 = new OpenAI({
      apiKey: config.closeaiApiKey,
      baseURL: config.closeaiOpenaiBaseUrl,
    });
    // Claude (via CloseAI)
    this.claude = new OpenAI({
      apiKey: config.closeaiApiKey,
      baseURL: config.closeaiClaudeBaseUrl,
    });
    // Qwen (fallback)
    this.qwen = new OpenAI({
      apiKey: config.dashscopeApiKey,
      baseURL: 'https://dashscope.aliyuncs.com/compatible-mode/v1',
    });
  }

  /**
   * Unified chat interface
   */
  async chat(
    provider: LLMProvider,
    prompt: string,
    options?: {
      systemPrompt?: string;
      temperature?: number;
      maxTokens?: number;
    }
  ) {
    const { systemPrompt, temperature = 0.3, maxTokens = 2000 } = options || {};

    const messages: any[] = [];
    if (systemPrompt) {
      messages.push({ role: 'system', content: systemPrompt });
    }
    messages.push({ role: 'user', content: prompt });

    // Pick the client and model for this provider
    const modelMap = {
      deepseek: { client: this.deepseek, model: 'deepseek-chat' },
      gpt5: { client: this.gpt5, model: 'gpt-5-pro' },
      claude: { client: this.claude, model: 'claude-sonnet-4-5-20250929' },
      qwen: { client: this.qwen, model: 'qwen-max' },
    };
    const { client, model } = modelMap[provider];

    const response = await client.chat.completions.create({
      model,
      messages,
      temperature,
      max_tokens: maxTokens,
    });

    return {
      content: response.choices[0].message.content || '',
      usage: response.usage,
      model,
      provider,
    };
  }
}
```
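The routing core of `chat` is the provider-to-model table. Pulled out as a pure function (a sketch; the names `MODEL_IDS` and `modelFor` are local to this example, not part of the service), the mapping can be unit-tested and extended with new providers without touching any client code:

```typescript
type LLMProvider = 'deepseek' | 'gpt5' | 'claude' | 'qwen';

// Same provider -> model-ID mapping as in UnifiedLLMService.chat
const MODEL_IDS: Record<LLMProvider, string> = {
  deepseek: 'deepseek-chat',
  gpt5: 'gpt-5-pro',
  claude: 'claude-sonnet-4-5-20250929',
  qwen: 'qwen-max',
};

function modelFor(provider: LLMProvider): string {
  return MODEL_IDS[provider];
}
```

Because `MODEL_IDS` is typed `Record<LLMProvider, string>`, adding a provider to the union forces the compiler to demand a matching model ID.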
---
## 🎯 Application Scenarios for AI Literature Screening
### Scenario 1: Dual-model comparative screening (recommended)
**Strategy:** fast initial screening with DeepSeek + quality review with GPT-5
```typescript
export class LiteratureScreeningService {
  private llm: UnifiedLLMService;

  constructor() {
    this.llm = new UnifiedLLMService();
  }

  /**
   * Dual-model literature screening
   */
  async screenLiterature(title: string, abstract: string, picoConfig: any) {
    const prompt = `
Decide whether this article should be included according to the PICO criteria below.
**PICO criteria:**
- Population: ${picoConfig.population}
- Intervention: ${picoConfig.intervention}
- Comparison: ${picoConfig.comparison}
- Outcome: ${picoConfig.outcome}
**Article:**
Title: ${title}
Abstract: ${abstract}
Respond in JSON format:
{
  "decision": "include/exclude/uncertain",
  "reason": "rationale for the decision",
  "confidence": 0.0-1.0
}
`;

    // Call both models in parallel
    const [deepseekResult, gpt5Result] = await Promise.all([
      this.llm.chat('deepseek', prompt),
      this.llm.chat('gpt5', prompt),
    ]);

    // Parse the results
    const deepseekDecision = JSON.parse(deepseekResult.content);
    const gpt5Decision = JSON.parse(gpt5Result.content);

    // If the two models agree, adopt the decision directly
    if (deepseekDecision.decision === gpt5Decision.decision) {
      return {
        finalDecision: deepseekDecision.decision,
        consensus: 'high',
        models: [deepseekDecision, gpt5Decision],
      };
    }

    // If they disagree, return both opinions and flag for manual review
    return {
      finalDecision: 'uncertain',
      consensus: 'low',
      models: [deepseekDecision, gpt5Decision],
      needManualReview: true,
    };
  }
}
```
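Both branches above call `JSON.parse` directly on the model output, which fails if a model wraps its JSON in a Markdown code fence, as chat models often do. A defensive parser is a worthwhile addition; this is a sketch, with the `ScreeningDecision` shape mirroring the format requested in the prompt:

```typescript
interface ScreeningDecision {
  decision: 'include' | 'exclude' | 'uncertain';
  reason: string;
  confidence: number;
}

// Strips an optional ```json ... ``` fence before parsing the model output.
function parseDecision(raw: string): ScreeningDecision {
  const fenced = raw.match(/```(?:json)?\s*([\s\S]*?)```/);
  const body = fenced ? fenced[1] : raw;
  return JSON.parse(body.trim()) as ScreeningDecision;
}
```

Swapping `JSON.parse(result.content)` for `parseDecision(result.content)` makes the screening pipeline tolerant of fenced output; a still-invalid payload throws as before.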
### Scenario 2: Three-model consensus arbitration
**Strategy:** when the two models disagree, bring in Claude as a third-party arbiter
```typescript
async screenWithArbitration(title: string, abstract: string, picoConfig: any) {
  // Round 1: dual-model screening
  const initialScreen = await this.screenLiterature(title, abstract, picoConfig);

  // If the two models agree, return immediately
  if (initialScreen.consensus === 'high') {
    return initialScreen;
  }

  // If they disagree, invoke Claude as arbiter
  console.log('Dual-model results disagree, invoking Claude for arbitration...');
  // Assumes a shared helper (not shown) that builds the same PICO
  // screening prompt as screenLiterature
  const prompt = this.buildScreeningPrompt(title, abstract, picoConfig);
  const claudeResult = await this.llm.chat('claude', prompt);
  const claudeDecision = JSON.parse(claudeResult.content);

  // Three-model vote
  const decisions = [
    initialScreen.models[0].decision,
    initialScreen.models[1].decision,
    claudeDecision.decision,
  ];
  const voteCount: Record<string, number> = {
    include: decisions.filter((d) => d === 'include').length,
    exclude: decisions.filter((d) => d === 'exclude').length,
    uncertain: decisions.filter((d) => d === 'uncertain').length,
  };

  // Majority rule
  const finalDecision = Object.keys(voteCount).reduce((a, b) =>
    voteCount[a] > voteCount[b] ? a : b
  );

  return {
    finalDecision,
    consensus: voteCount[finalDecision] >= 2 ? 'medium' : 'low',
    models: [...initialScreen.models, claudeDecision],
    arbitration: true,
  };
}
```
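The vote-counting step is pure logic and can be factored out and tested in isolation. A sketch mirroring the majority rule above (the helper name is local to this example); like the original `reduce`, a three-way tie falls through to the last key, `uncertain`:

```typescript
type Decision = 'include' | 'exclude' | 'uncertain';

// Tallies decisions and returns the winner plus whether it has >= 2 votes.
function majorityVote(decisions: Decision[]) {
  const counts: Record<Decision, number> = { include: 0, exclude: 0, uncertain: 0 };
  for (const d of decisions) counts[d]++;
  const winner = (Object.keys(counts) as Decision[]).reduce((a, b) =>
    counts[a] > counts[b] ? a : b
  );
  return { decision: winner, votes: counts[winner], majority: counts[winner] >= 2 };
}
```

The `majority` flag maps directly onto the `'medium'` vs `'low'` consensus labels used in the arbitration result.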
### Scenario 3: Cost-optimized strategy
**Strategy:** use GPT-5 only to re-check uncertain results
```typescript
async screenWithCostOptimization(title: string, abstract: string, picoConfig: any) {
  // Round 1: fast, cheap initial screen with DeepSeek.
  // Assumes the same shared prompt helper (not shown) as in
  // screenWithArbitration
  const prompt = this.buildScreeningPrompt(title, abstract, picoConfig);
  const quickScreen = await this.llm.chat('deepseek', prompt);
  const quickDecision = JSON.parse(quickScreen.content);

  // If the result is clear-cut (include/exclude with confidence > 0.8), adopt it
  if (quickDecision.confidence > 0.8 && quickDecision.decision !== 'uncertain') {
    return {
      finalDecision: quickDecision.decision,
      consensus: 'high',
      models: [quickDecision],
      costOptimized: true,
    };
  }

  // Otherwise re-check with GPT-5
  const detailedScreen = await this.llm.chat('gpt5', prompt);
  const detailedDecision = JSON.parse(detailedScreen.content);

  return {
    finalDecision: detailedDecision.decision,
    consensus: 'medium',
    models: [quickDecision, detailedDecision],
    costOptimized: true,
  };
}
```
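The escalation condition is the hinge of this strategy, so it is worth isolating as a predicate. A sketch (the function name is local to this example) encoding the exact negation of the adopt-immediately branch above:

```typescript
// True when the quick DeepSeek result is not trustworthy enough on its own
// and the article should be escalated to GPT-5 for review.
function shouldEscalate(
  quick: { decision: string; confidence: number },
  threshold = 0.8
): boolean {
  return quick.decision === 'uncertain' || quick.confidence <= threshold;
}
```

Exposing `threshold` as a parameter makes the cost/quality trade-off tunable: raising it routes more articles to GPT-5, lowering it saves money at the price of accuracy.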
---
## 📊 Performance and Cost Comparison
### Model comparison
| Metric | DeepSeek-V3 | GPT-5-Pro | Claude-4.5 | Qwen-Max |
|------|------------|-----------|-----------|----------|
| **Accuracy** | 85% | **95%** ⭐ | 93% | 82% |
| **Speed** | **Fast** ⭐ | Medium | Medium | Fast |
| **Cost** | **¥0.001/1K** ⭐ | ¥0.10/1K | ¥0.021/1K | ¥0.004/1K |
| **Chinese comprehension** | **Excellent** ⭐ | Excellent | Good | Excellent |
| **Structured output** | Good | Excellent | **Excellent** ⭐ | Good |
### Estimated cost of screening 1,000 articles
**Strategy A: DeepSeek only**
- Cost: ¥20-30
- Accuracy: 85%
- When to use: limited budget, some error tolerance acceptable

**Strategy B: DeepSeek + GPT-5 dual-model**
- Cost: ¥150-200
- Accuracy: 92%
- When to use: high quality requirements, sufficient budget ⭐ recommended

**Strategy C: three-model consensus (Claude invoked on the ~20% of conflicts)**
- Cost: ¥180-220
- Accuracy: 95%
- When to use: highest quality requirements

**Strategy D: cost-optimized (80% DeepSeek, 20% GPT-5)**
- Cost: ¥50-80
- Accuracy: 90%
- When to use: balance of quality and cost ⭐ best value
---
## ⚠️ Notes
### 1. API key security
```typescript
// ❌ Wrong: hard-coded API key
const client = new OpenAI({
  apiKey: 'sk-cu0iepbXYGGx2jc7BqP6ogtSWmP6fk918qV3RUdtGC3Edlpo',
});

// ✅ Right: read from environment variables
const client = new OpenAI({
  apiKey: process.env.CLOSEAI_API_KEY,
});
```
### 2. Error handling
```typescript
async chat(provider: LLMProvider, prompt: string) {
  try {
    const response = await this.llm.chat(provider, prompt);
    return response;
  } catch (error: any) {
    // Errors CloseAI may return
    if (error.status === 429) {
      // Rate limited
      console.error('API rate limit exceeded, retry later');
    } else if (error.status === 401) {
      // Authentication failure
      console.error('Invalid API key, check your configuration');
    } else if (error.status === 500) {
      // Server-side error
      console.error('CloseAI service error, retry later');
    }
    throw error;
  }
}
```
### 3. Request retries
```typescript
async chatWithRetry(provider: LLMProvider, prompt: string, maxRetries = 3) {
  for (let i = 0; i < maxRetries; i++) {
    try {
      return await this.llm.chat(provider, prompt);
    } catch (error) {
      if (i === maxRetries - 1) throw error;
      // Exponential backoff: 1s, 2s, 4s, ...
      const delay = Math.pow(2, i) * 1000;
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
}
```
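The backoff schedule above doubles the wait on each attempt. Factored out as a pure function (a sketch; the name and the cap are this example's additions), the schedule can be tested without actually sleeping, and the cap keeps waits bounded if `maxRetries` is raised; adding random jitter is a common further refinement to avoid synchronized retries:

```typescript
// Delay in milliseconds before retrying attempt `attempt` (0-based),
// doubling each time and capped to avoid unbounded waits.
function backoffDelayMs(attempt: number, baseMs = 1000, capMs = 30_000): number {
  return Math.min(Math.pow(2, attempt) * baseMs, capMs);
}
```

Inside `chatWithRetry`, the `Math.pow(2, i) * 1000` line would become `backoffDelayMs(i)`.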
---
## 📚 Related Documents
- [Environment Configuration Guide](../../07-运维文档/01-环境配置指南.md#3-closeai配置代理openai和claude)
- [Environment Variable Template](../../07-运维文档/02-环境变量配置模板.md)
- [LLM Gateway Quick Context](./[AI对接]%20LLM网关快速上下文.md)
---
**Changelog:**
- 2025-11-09: Document created; CloseAI integration guide added
- Added support for the latest GPT-5-Pro and Claude-4.5-Sonnet models