feat(admin): Add user management and upgrade to module permission system

Features - User Management (Phase 4.1):
- Database: Add user_modules table for fine-grained module permissions
- Database: Add 4 user permissions (view/create/edit/delete) to role_permissions
- Backend: UserService (780 lines) - CRUD with tenant isolation
- Backend: UserController + UserRoutes (648 lines) - 13 API endpoints
- Backend: Batch import users from Excel
- Frontend: UserListPage (412 lines) - list/filter/search/pagination
- Frontend: UserFormPage (341 lines) - create/edit with module config
- Frontend: UserDetailPage (393 lines) - details/tenant/module management
- Frontend: 3 modal components (592 lines) - import/assign/configure
- API: GET/POST/PUT/DELETE /api/admin/users/* endpoints

Architecture Upgrade - Module Permission System:
- Backend: Add getUserModules() method in auth.service
- Backend: Login API returns modules array in user object
- Frontend: AuthContext adds hasModule() method
- Frontend: Navigation filters modules based on user.modules
- Frontend: RouteGuard checks requiredModule instead of requiredVersion
- Frontend: Remove deprecated version-based permission system
- UX: Only show accessible modules in navigation (clean UI)
- UX: Smart redirect after login (avoid 403 for regular users)
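The module check described in the bullets above can be sketched as follows. Only `user.modules`, `hasModule()`, and `requiredModule` are named by this commit; the interfaces and function signatures here are illustrative assumptions:

```typescript
// Hypothetical shapes — the commit only names user.modules, hasModule(),
// and requiredModule; everything else is illustrative.
interface AuthUser {
  id: string;
  modules: string[]; // module codes returned in the login API's user object
}

// AuthContext helper: can the current user access a module?
function hasModule(user: AuthUser | null, moduleCode: string): boolean {
  return user !== null && user.modules.includes(moduleCode);
}

// RouteGuard-style check: routes declare requiredModule instead of requiredVersion.
function canActivate(user: AuthUser | null, requiredModule?: string): boolean {
  return requiredModule === undefined || hasModule(user, requiredModule);
}
```

Navigation filtering falls out of the same helper: render only the menu items whose module code passes `hasModule`.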

Fixes:
- Fix UTF-8 encoding corruption in ~100 docs files
- Fix pageSize type conversion in userService (String to Number)
- Fix authUser undefined error in TopNavigation
- Fix login redirect logic with role-based access check
- Update Git commit guidelines v1.2 with UTF-8 safety rules

Database Changes:
- CREATE TABLE user_modules (user_id, tenant_id, module_code, is_enabled)
- ADD UNIQUE CONSTRAINT (user_id, tenant_id, module_code)
- INSERT 4 permissions + role assignments
- UPDATE PUBLIC tenant with 8 module subscriptions

Technical:
- Backend: 5 new files (~2400 lines)
- Frontend: 10 new files (~2500 lines)
- Docs: 1 development record + 2 status updates + 1 guideline update
- Total: ~4900 lines of code

Status: User management 100% complete, module permission system operational
Commit: 66255368b7 (parent: 98d862dbd4)
Date: 2026-01-16 13:42:10 +08:00
560 changed files with 70424 additions and 52353 deletions


@@ -1,25 +1,25 @@
# CloseAI Integration Guide

> **Document version:** v1.0
> **Created:** 2025-11-09
> **Purpose:** Access OpenAI GPT-5 and Claude-4.5 through the CloseAI proxy platform
> **Use cases:** AI dual-model literature screening, high-quality text generation
---
## 📋 About CloseAI

### What is CloseAI?

CloseAI is an **API proxy platform** that gives users in mainland China stable access to the OpenAI and Claude APIs.

**Key advantages:**
- ✅ Direct connection from mainland China, no VPN required
- ✅ One API key for both OpenAI and Claude
- ✅ Compatible with the standard OpenAI SDK interface
- ✅ Supports the latest models (GPT-5, Claude-4.5)

**Website:** https://platform.openai-proxy.org
---
@@ -38,15 +38,15 @@ CLOSEAI_OPENAI_BASE_URL=https://api.openai-proxy.org/v1
CLOSEAI_CLAUDE_BASE_URL=https://api.openai-proxy.org/anthropic
```
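The service snippets later in this guide read these values through a `config` object. A minimal loader might look like this; the property names (`closeaiApiKey`, `closeaiOpenaiBaseUrl`, `closeaiClaudeBaseUrl`) are assumed to match that object, and the fallback URLs simply restate the values above:

```typescript
// Minimal config loader for the environment variables above.
// Property names are assumptions matching this guide's service snippets.
export const config = {
  closeaiApiKey: process.env.CLOSEAI_API_KEY ?? '',
  closeaiOpenaiBaseUrl:
    process.env.CLOSEAI_OPENAI_BASE_URL ?? 'https://api.openai-proxy.org/v1',
  closeaiClaudeBaseUrl:
    process.env.CLOSEAI_CLAUDE_BASE_URL ?? 'https://api.openai-proxy.org/anthropic',
};
```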
### Supported models

| Model | Model ID | Notes | Use cases |
|------|---------|------|---------|
| **GPT-5-Pro** | `gpt-5-pro` | Latest GPT-5 ⭐ | Precise literature screening, complex reasoning |
| GPT-4-Turbo | `gpt-4-turbo-preview` | High-performance GPT-4 | Quality-critical tasks |
| GPT-3.5-Turbo | `gpt-3.5-turbo` | Fast, economical | Simple tasks, cost optimization |
| **Claude-4.5-Sonnet** | `claude-sonnet-4-5-20250929` | Latest Claude ⭐ | Third-party arbitration, structured output |
| Claude-3.5-Sonnet | `claude-3-5-sonnet-20241022` | Stable Claude-3.5 | High-quality text generation |
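One way to apply the table in code is a small model-selection helper. The mapping below just restates the table's use-case column; the function name and task labels are illustrative, not part of the guide's API:

```typescript
type TaskKind = 'precise-screening' | 'arbitration' | 'quick-triage' | 'text-generation';

// Map a task to a Model ID from the table above (illustrative helper).
function pickModel(task: TaskKind): string {
  switch (task) {
    case 'precise-screening':
      return 'gpt-5-pro'; // precise screening, complex reasoning
    case 'arbitration':
      return 'claude-sonnet-4-5-20250929'; // arbitration, structured output
    case 'quick-triage':
      return 'gpt-3.5-turbo'; // simple tasks, cost optimization
    case 'text-generation':
      return 'claude-3-5-sonnet-20241022'; // high-quality text generation
    default:
      throw new Error(`unhandled task: ${task}`);
  }
}
```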
---
@@ -58,9 +58,9 @@ CLOSEAI_CLAUDE_BASE_URL=https://api.openai-proxy.org/anthropic
npm install openai
```
### 2. Create the LLM service class

**File location:** `backend/src/common/llm/closeai.service.ts`
```typescript
import OpenAI from 'openai';
@@ -71,13 +71,13 @@ export class CloseAIService {
private claudeClient: OpenAI;
constructor() {
// OpenAI client, routed through CloseAI
this.openaiClient = new OpenAI({
apiKey: config.closeaiApiKey,
baseURL: config.closeaiOpenaiBaseUrl,
});
// Claude client, routed through CloseAI
this.claudeClient = new OpenAI({
apiKey: config.closeaiApiKey,
baseURL: config.closeaiClaudeBaseUrl,
@@ -135,7 +135,7 @@ export class CloseAIService {
}
/**
* Streaming response (GPT-5)
*/
async *streamGPT5(prompt: string, systemPrompt?: string) {
const messages: any[] = [];
@@ -165,7 +165,7 @@ export class CloseAIService {
### 3. Unified LLM service (all 4 models)

**File location:** `backend/src/common/llm/llm.service.ts`
```typescript
import OpenAI from 'openai';
@@ -258,7 +258,7 @@ export class UnifiedLLMService {
### Scenario 1: Dual-model comparison screening (recommended)

**Strategy:** DeepSeek (fast first pass) + GPT-5 (quality review)
```typescript
export class LiteratureScreeningService {
@@ -269,23 +269,23 @@ export class LiteratureScreeningService {
}
/**
* Dual-model literature screening
*/
async screenLiterature(title: string, abstract: string, picoConfig: any) {
const prompt = `
Based on the PICO criteria below, decide whether this article should be included:
**PICO criteria:**
- Population: ${picoConfig.population}
- Intervention: ${picoConfig.intervention}
- Comparison: ${picoConfig.comparison}
- Outcome: ${picoConfig.outcome}
**Article information:**
Title: ${title}
Abstract: ${abstract}
Respond in JSON format:
{
"decision": "include/exclude/uncertain",
"reason": "判断理由",
@@ -325,11 +325,11 @@ export class LiteratureScreeningService {
### Scenario 2: Three-model consensus arbitration

**Strategy:** When the two models disagree, bring in Claude as a third-party arbiter
```typescript
async screenWithArbitration(title: string, abstract: string, picoConfig: any) {
// Round 1: dual-model screening
const initialScreen = await this.screenLiterature(title, abstract, picoConfig);
// If the two agree, return immediately
@@ -343,7 +343,7 @@ async screenWithArbitration(title: string, abstract: string, picoConfig: any) {
const claudeResult = await this.llm.chat('claude', prompt);
const claudeDecision = JSON.parse(claudeResult.content);
// Three-model vote
const decisions = [
initialScreen.models[0].decision,
initialScreen.models[1].decision,
@@ -356,7 +356,7 @@ async screenWithArbitration(title: string, abstract: string, picoConfig: any) {
uncertain: decisions.filter(d => d === 'uncertain').length,
};
// Majority vote
const finalDecision = Object.keys(voteCount).reduce((a, b) =>
voteCount[a] > voteCount[b] ? a : b
);
@@ -370,13 +370,13 @@ async screenWithArbitration(title: string, abstract: string, picoConfig: any) {
}
```
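The vote at the end of this hunk can be lifted into a standalone helper, mirroring the `reduce`-over-keys tally in the snippet above. Note one subtlety worth knowing: with the strict `>` comparison, a three-way tie resolves to whichever key the iteration reaches last:

```typescript
type Decision = 'include' | 'exclude' | 'uncertain';

// Tally screening decisions and return the majority, using the same
// reduce-over-keys pattern as the arbitration snippet above.
function majorityVote(decisions: Decision[]): Decision {
  const voteCount: Record<Decision, number> = { include: 0, exclude: 0, uncertain: 0 };
  for (const d of decisions) voteCount[d] += 1;
  return (Object.keys(voteCount) as Decision[]).reduce((a, b) =>
    voteCount[a] > voteCount[b] ? a : b
  );
}
```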
### Scenario 3: Cost-optimization strategy

**Strategy:** Re-check only the uncertain results with GPT-5
```typescript
async screenWithCostOptimization(title: string, abstract: string, picoConfig: any) {
// Round 1: fast, cheap first pass with DeepSeek
const quickScreen = await this.llm.chat('deepseek', prompt);
const quickDecision = JSON.parse(quickScreen.content);
@@ -405,39 +405,39 @@ async screenWithCostOptimization(title: string, abstract: string, picoConfig: an
---
## 📊 Performance and cost comparison

### Model performance

| Metric | DeepSeek-V3 | GPT-5-Pro | Claude-4.5 | Qwen-Max |
|------|------------|-----------|-----------|----------|
| **Accuracy** | 85% | **95%** ⭐ | 93% | 82% |
| **Speed** | **Fast** ⭐ | Medium | Medium | Fast |
| **Cost** | **¥0.001/1K** ⭐ | ¥0.10/1K | ¥0.021/1K | ¥0.004/1K |
| **Chinese comprehension** | **Excellent** ⭐ | Excellent | Good | Excellent |
| **Structured output** | Good | Excellent | **Excellent** ⭐ | Good |
### Cost estimate for screening 1,000 articles

**Strategy A: DeepSeek only**
- Cost: ¥20-30
- Accuracy: 85%
- Suited to: limited budgets where some error is acceptable

**Strategy B: DeepSeek + GPT-5 dual-model**
- Cost: ¥150-200
- Accuracy: 92%
- Suited to: high quality requirements with sufficient budget ⭐ Recommended

**Strategy C: Three-model consensus (Claude on the ~20% of conflicts)**
- Cost: ¥180-220
- Accuracy: 95%
- Suited to: the highest quality requirements

**Strategy D: Cost-optimized (80% DeepSeek, 20% GPT-5)**
- Cost: ¥50-80
- Accuracy: 90%
- Suited to: balancing quality and cost ⭐ Best value
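The Strategy D arithmetic can be sketched as a back-of-envelope cost model. The per-1K prices come from the comparison table; the tokens-per-article figure is purely an assumption, and real spend (as the ranges above reflect) also includes output tokens, prompt overhead, and retries:

```typescript
// ¥ per 1K tokens, taken from the performance/cost table.
const PRICE_PER_1K = { deepseek: 0.001, gpt5: 0.1 };

// Strategy D: every article through DeepSeek, ~20% re-checked with GPT-5.
// tokensPerArticle is an assumed figure for illustration only.
function estimateStrategyDCost(articles: number, tokensPerArticle = 1500): number {
  const kTokens = (articles * tokensPerArticle) / 1000;
  return kTokens * PRICE_PER_1K.deepseek + 0.2 * kTokens * PRICE_PER_1K.gpt5;
}
```

With these assumptions, input tokens alone for 1,000 articles come to roughly ¥31.5, and the GPT-5 re-check dominates the total even at a 20% share — which is why routing most traffic to the cheap model pays off.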
---
@@ -446,12 +446,12 @@ async screenWithCostOptimization(title: string, abstract: string, picoConfig: an
### 1. API key security
```typescript
// ❌ Wrong: hard-coding the API key
const client = new OpenAI({
apiKey: 'sk-********', // real key redacted — never commit one
});
// ✅ Correct: read it from an environment variable
const client = new OpenAI({
apiKey: process.env.CLOSEAI_API_KEY,
});
@@ -465,15 +465,15 @@ async chat(provider: LLMProvider, prompt: string) {
const response = await this.llm.chat(provider, prompt);
return response;
} catch (error) {
// Errors CloseAI may return
if (error.status === 429) {
// Rate limited
console.error('API rate limit exceeded; retry shortly');
} else if (error.status === 401) {
// Authentication failed
console.error('Invalid API key; check your configuration');
} else if (error.status === 500) {
// Server-side error
console.error('CloseAI service error; retry shortly');
}
throw error;
@@ -491,7 +491,7 @@ async chatWithRetry(provider: LLMProvider, prompt: string, maxRetries = 3) {
} catch (error) {
if (i === maxRetries - 1) throw error;
// Exponential backoff
const delay = Math.pow(2, i) * 1000;
await new Promise(resolve => setTimeout(resolve, delay));
}
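The hunk above shows only the loop body of `chatWithRetry`. A self-contained version of the same exponential-backoff pattern might look like this; the generic `fn` wrapper and the `baseDelayMs` parameter are additions for illustration (the guide's version applies this directly to `llm.chat`):

```typescript
// Retry an async operation with exponential backoff: baseDelayMs, 2x, 4x, ...
async function withRetry<T>(
  fn: () => Promise<T>,
  maxRetries = 3,
  baseDelayMs = 1000
): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < maxRetries; i++) {
    try {
      return await fn();
    } catch (error) {
      lastError = error;
      if (i === maxRetries - 1) break; // out of attempts
      const delay = Math.pow(2, i) * baseDelayMs; // exponential backoff
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
  throw lastError;
}
```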
@@ -509,9 +509,9 @@ async chatWithRetry(provider: LLMProvider, prompt: string, maxRetries = 3) {
---
**Changelog:**
- 2025-11-09: Document created; CloseAI integration guide added
- Supports the latest GPT-5-Pro and Claude-4.5-Sonnet models