Sprint 1-3 Completed (Backend + Frontend)

Backend (Sprint 1-2):
- Implement 5-layer Agent framework (Query -> Planner -> Executor -> Tools -> Reflection)
- Create agent_schema with 6 tables (agent_definitions, stages, prompts, sessions, traces, reflexion_rules)
- Create protocol_schema with 2 tables (protocol_contexts, protocol_generations)
- Implement Protocol Agent core services (Orchestrator, ContextService, PromptBuilder)
- Integrate LLM service adapter (DeepSeek/Qwen/GPT-5/Claude)
- 6 API endpoints with full authentication
- 10/10 API tests passed

Frontend (Sprint 3):
- Add Protocol Agent entry in AgentHub (indigo theme card)
- Implement ProtocolAgentPage with 3-column layout
- Collapsible sidebar (Gemini style, 48px <-> 280px)
- StatePanel with 5 stage cards (scientific_question, pico, study_design, sample_size, endpoints)
- ChatArea with sync button and action cards integration
- 100% prototype design restoration (608 lines of CSS)
- Detailed endpoints structure: baseline, exposure, outcomes, confounders

Features:
- 5-stage dialogue flow for research protocol design
- Conversation-driven interaction with sync-to-protocol button
- Real-time context state management
- One-click protocol generation button (UI ready, backend pending)

Database:
- agent_schema: 6 tables for the reusable Agent framework
- protocol_schema: 2 tables for the Protocol Agent
- Seed data: 1 agent + 5 stages + 9 prompts + 4 reflexion rules

Code Stats:
- Backend: 13 files, 4338 lines
- Frontend: 14 files, 2071 lines
- Total: 27 files, 6409 lines

Status: MVP core functionality completed, pending frontend-backend integration testing
Next: Sprint 4 - One-click protocol generation + Word export
"""
Advanced filtering operations.

Provides multi-condition filtering with AND/OR logical combination.
"""

import pandas as pd
from typing import List, Dict, Any, Literal


def apply_filter(
    df: pd.DataFrame,
    conditions: List[Dict[str, Any]],
    logic: Literal['and', 'or'] = 'and'
) -> pd.DataFrame:
    """
    Apply filter conditions.

    Args:
        df: Input DataFrame.
        conditions: List of filter conditions; each condition contains:
            - column: column name
            - operator: operator (=, !=, >, <, >=, <=, contains, not_contains,
              starts_with, ends_with, is_null, not_null)
            - value: comparison value (not required for is_null and not_null)
        logic: How to combine the conditions ('and' or 'or').

    Returns:
        The filtered DataFrame.

    Examples:
        >>> df = pd.DataFrame({'age': [25, 35, 45], 'sex': ['M', 'F', 'M']})
        >>> conditions = [
        ...     {'column': 'age', 'operator': '>', 'value': 30},
        ...     {'column': 'sex', 'operator': '=', 'value': 'M'}
        ... ]
        >>> result = apply_filter(df, conditions, logic='and')
        >>> len(result)
        1
    """
    if not conditions:
        raise ValueError('Filter conditions must not be empty')

    if df.empty:
        return df

    # Build a boolean mask for each condition
    masks = []
    for cond in conditions:
        column = cond['column']
        operator = cond['operator']
        value = cond.get('value')

        # Validate that the column exists
        if column not in df.columns:
            raise KeyError(f"Column '{column}' does not exist")

        # Build the mask for this operator. The string operators convert with
        # astype('string') rather than astype(str): astype(str) would turn
        # missing values into the literal text 'nan', defeating na=False.
        # regex=False makes 'contains' a literal substring match.
        if operator == '=':
            mask = df[column] == value
        elif operator == '!=':
            mask = df[column] != value
        elif operator == '>':
            mask = df[column] > value
        elif operator == '<':
            mask = df[column] < value
        elif operator == '>=':
            mask = df[column] >= value
        elif operator == '<=':
            mask = df[column] <= value
        elif operator == 'contains':
            mask = df[column].astype('string').str.contains(str(value), regex=False, na=False)
        elif operator == 'not_contains':
            mask = ~df[column].astype('string').str.contains(str(value), regex=False, na=False)
        elif operator == 'starts_with':
            mask = df[column].astype('string').str.startswith(str(value), na=False)
        elif operator == 'ends_with':
            mask = df[column].astype('string').str.endswith(str(value), na=False)
        elif operator == 'is_null':
            mask = df[column].isna()
        elif operator == 'not_null':
            mask = df[column].notna()
        else:
            raise ValueError(f"Unsupported operator: {operator}")

        masks.append(mask)

    # Combine all condition masks
    if logic == 'and':
        final_mask = pd.concat(masks, axis=1).all(axis=1)
    elif logic == 'or':
        final_mask = pd.concat(masks, axis=1).any(axis=1)
    else:
        raise ValueError(f"Unsupported logic: {logic}")

    # Apply the filter
    result = df[final_mask].copy()

    # Print summary statistics
    original_rows = len(df)
    filtered_rows = len(result)
    removed_rows = original_rows - filtered_rows

    print(f'Original data: {original_rows} rows')
    print(f'After filtering: {filtered_rows} rows')
    print(f'Removed: {removed_rows} rows ({removed_rows/original_rows*100:.1f}%)')

    return result
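The AND/OR combination step stacks the per-condition masks into a frame and reduces across columns with `all`/`any`. A minimal standalone sketch of just that step, using hypothetical sample data:

```python
import pandas as pd

df = pd.DataFrame({'age': [25, 35, 45], 'sex': ['M', 'F', 'M']})

# One boolean Series per condition, as the loop above produces
masks = [df['age'] > 30, df['sex'] == 'M']

# 'and' logic: a row passes only if every mask is True
and_mask = pd.concat(masks, axis=1).all(axis=1)
# 'or' logic: a row passes if any mask is True
or_mask = pd.concat(masks, axis=1).any(axis=1)

print(len(df[and_mask]))  # 1 (only age 45, sex 'M')
print(len(df[or_mask]))   # 3 (every row satisfies at least one condition)
```

Stacking with `pd.concat(..., axis=1)` aligns the masks by index before reducing, so the combination stays correct even if a mask's index order were perturbed.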
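One detail worth noting in the string operators: converting a column with plain `astype(str)` stringifies missing values (e.g. `None` becomes the literal text `'None'`), which can then spuriously match a substring search. A small sketch of the difference on a hypothetical Series:

```python
import pandas as pd

s = pd.Series(['apple', None, 'banana'])

# astype(str) turns the missing value into the text 'None',
# so it can match substring searches
naive = s.astype(str).str.contains('n', na=False)

# astype('string') keeps it missing, so na=False excludes it
safe = s.astype('string').str.contains('n', na=False)

print(naive.tolist())  # [False, True, True] -- the missing row matched 'n'
print(safe.tolist())   # [False, False, True]
```

This is why the string operators above use the pandas `'string'` extension dtype: it preserves missing values through the conversion, letting `na=False` do its job.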