Commit Graph

6 Commits

Author SHA1 Message Date
AI Clinical Dev Team
07704817ba debug: add detailed logging for userPromptContent 2025-10-11 21:42:38 +08:00
AI Clinical Dev Team
77249ad6f5 debug: add detailed logging for knowledge base context injection 2025-10-11 21:37:54 +08:00
AI Clinical Dev Team
60014b57b5 feat: complete Day 21-24 knowledge base features
- Day 21-22: fix CORS and file upload issues
- Day 23-24: implement @knowledge base search and conversation integration
- Backend: integrate Dify RAG into conversation system
- Frontend: load knowledge base list and @ reference feature
2025-10-11 17:08:12 +08:00
AI Clinical Dev Team
8b07a3f822 fix: increase conversation history from 10 to 100 messages
Previous limit was too conservative:
- Old: 10 messages (5 conversation turns) ❌ Too limited
- New: 100 messages (50 conversation turns) ✅ Reasonable

Context capacity comparison:
- DeepSeek-V3: 64K tokens ≈ 100-200 turns
- Qwen3-72B: 128K tokens ≈ 200-400 turns
- Previous 10 messages was only using ~1% of capacity

Real usage scenarios:
- Quick consultation: 5-10 turns
- In-depth discussion: 20-50 turns ✅ Common
- Complete research design: 50-100 turns

The new 100-message limit covers 99% of real use cases while
staying well within model token limits.
2025-10-10 22:16:30 +08:00
AI Clinical Dev Team
529cfe20da fix: prevent project background from being sent on every message
Critical bug fix: Project background was being sent with EVERY message,
causing AI to respond to background instead of follow-up questions.

Problem:
- First message: User asks A, AI answers A ✅
- Second message: User asks B, but prompt includes background again
- AI responds to background content, ignores question B ❌
Solution:
- Only send full prompt template (with project background) on FIRST message
- For follow-up messages, send ONLY user input (+ knowledge base if exists)
- Maintain conversation history properly

Updated: conversationService.assembleContext()
- Check if historyMessages.length === 0 (first message)
- First message: use renderUserPrompt() with all variables
- Follow-up: send userInput directly (optionally with knowledge base)

This ensures multi-turn conversations work correctly.
2025-10-10 22:07:53 +08:00
AI Clinical Dev Team
8afff23995 docs: Day 12-13 completion summary and milestone update 2025-10-10 20:33:18 +08:00