feat(iit): Complete CRA Agent V3.0 P1 - ChatOrchestrator with LLM Function Calling

P1 Architecture: Lightweight ReAct (Function Calling loop, max 3 rounds)
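The lightweight ReAct loop described above can be sketched as follows. All type and function names here (`runChat`, `LLMResponse`, the `llm`/`execTool` callbacks) are illustrative assumptions, not the repository's actual API:

```typescript
// Sketch of a function-calling loop capped at 3 tool rounds.
interface ToolCall {
  name: string;
  arguments: Record<string, unknown>;
}

interface LLMResponse {
  content: string | null; // null when the model only emits tool calls
  toolCalls?: ToolCall[];
}

type Message = {
  role: "system" | "user" | "assistant" | "tool";
  content: string;
};

async function runChat(
  llm: (messages: Message[]) => Promise<LLMResponse>,
  execTool: (call: ToolCall) => Promise<string>,
  messages: Message[],
  maxRounds = 3,
): Promise<string> {
  for (let round = 0; round < maxRounds; round++) {
    const response = await llm(messages);
    if (!response.toolCalls?.length) {
      // Final answer; guard against nullable content
      return response.content ?? "";
    }
    // (A real implementation would also append the assistant
    // tool-call message before the tool results.)
    for (const call of response.toolCalls) {
      const result = await execTool(call);
      messages.push({ role: "tool", content: result });
    }
  }
  // Round budget exhausted: force a plain-text answer
  const final = await llm(messages);
  return final.content ?? "";
}
```

The round cap keeps worst-case latency bounded: the model gets at most three chances to call tools before it must answer in plain text.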

Core changes:
- Add ToolDefinition/ToolCall types to LLM adapters (DeepSeek + CloseAI + Claude)
- Replace 6 old tools with 4 semantic tools: read_report, look_up_data, check_quality, search_knowledge
- Create ChatOrchestrator (~160 lines) replacing ChatService (1,442 lines)
- Wire WechatCallbackController to ChatOrchestrator, deprecate ChatService
- Fix nullable content (string | null) across 12+ LLM consumer files
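The four semantic tools could be declared with OpenAI-style function schemas. A minimal sketch, assuming a hypothetical `ToolDefinition` shape (the repo's actual type may differ):

```typescript
// Illustrative OpenAI-style tool definition; descriptions are guesses
// based on the commit message, not the real prompts.
interface ToolDefinition {
  name: string;
  description: string;
  parameters: {
    type: "object";
    properties: Record<string, { type: string; description?: string }>;
    required?: string[];
  };
}

const tools: ToolDefinition[] = [
  {
    name: "read_report",
    description: "Read a generated QC report",
    parameters: { type: "object", properties: {} },
  },
  {
    name: "look_up_data",
    description: "Look up patient or visit data",
    parameters: { type: "object", properties: {} },
  },
  {
    name: "check_quality",
    description: "Run an on-demand quality check",
    parameters: { type: "object", properties: {} },
  },
  {
    name: "search_knowledge",
    description: "Search the knowledge base",
    parameters: { type: "object", properties: {} },
  },
];
```

Collapsing six narrow tools into four semantic ones gives the model fewer, broader choices, which tends to make tool selection more reliable.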

E2E test results: 8/8 scenarios passed (100%)
- QC report query, critical issues, patient data, trend, on-demand QC
- Knowledge base search, project overview, data modification refusal

Net code reduction: ~1,100 lines
Tested: E2E P1 chat test 8/8 passed with DeepSeek API

Made-with: Cursor
2026-02-26 14:27:09 +08:00
parent 203846968c
commit 7c3cc12b2e
32 changed files with 903 additions and 337 deletions


@@ -376,7 +376,7 @@ export class LLM12FieldsService {
}
);
- return response.content;
+ return response.content ?? '';
} catch (error) {
lastError = error as Error;
logger.error(`LLM call attempt ${attempt + 1} failed: ${(error as Error).message}`);


@@ -156,7 +156,7 @@ class ExtractionSingleWorkerImpl {
];
const response = await llm.chat(messages, { temperature: 0.1 });
- const content = response.content.trim();
+ const content = (response.content ?? '').trim();
const match = content.match(/\{[\s\S]*\}/);
if (!match) {


@@ -71,7 +71,7 @@ export class LLMScreeningService {
]);
// Parse JSON output
- const parseResult = parseJSON(response.content);
+ const parseResult = parseJSON(response.content ?? '');
if (!parseResult.success || !parseResult.data) {
logger.error('Failed to parse LLM output as JSON', {
error: parseResult.error,


@@ -91,7 +91,7 @@ class RequirementExpansionService {
maxTokens: rendered.modelConfig.maxTokens ?? 4096,
});
- const rawOutput = llmResponse.content;
+ const rawOutput = llmResponse.content ?? '';
const { requirement, intentSummary } = this.parseOutput(rawOutput);
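The recurring `?? ''` fix in these hunks guards against nullable assistant content: with function calling enabled, an assistant message that carries tool calls typically has `content: null`, so the field is typed `string | null` and any direct string operation fails under strict null checks. A minimal illustration with a hypothetical message type:

```typescript
// Illustrative type; the real LLM adapter response shape may differ.
type AssistantMessage = { content: string | null; toolCalls?: unknown[] };

// Nullish coalescing turns a tool-call-only response into a safe
// empty string before string methods like trim() are applied.
function safeContent(msg: AssistantMessage): string {
  return (msg.content ?? "").trim();
}
```

Note that `??` (not `||`) is the right operator here: it only substitutes for `null`/`undefined`, leaving a legitimately empty-but-present string untouched.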