Files
AIclinicalresearch/python-microservice/operations/filter.py
HaHafeng fa72beea6c feat(platform): Complete Postgres-Only architecture refactoring (Phase 1-7)
Major Changes:
- Implement Platform-Only architecture pattern (unified task management)
- Add PostgresCacheAdapter for unified caching (platform_schema.app_cache)
- Add PgBossQueue for job queue management (platform_schema.job)
- Implement CheckpointService using job.data (generic for all modules)
- Add intelligent threshold-based dual-mode processing (THRESHOLD=50)
- Add task splitting mechanism (auto chunk size recommendation)
- Refactor ASL screening service with smart mode selection
- Refactor DC extraction service with smart mode selection
- Register workers for ASL and DC modules
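The threshold-based dual-mode processing and chunk-size recommendation mentioned above could look roughly like the following sketch. All names here (`TASK_THRESHOLD`, `recommend_chunk_size`, `dispatch`) are illustrative assumptions, not the actual platform API; only the THRESHOLD=50 value comes from the commit message.

```python
# Illustrative sketch only -- names below are hypothetical, not the real platform API.
TASK_THRESHOLD = 50  # matches THRESHOLD=50 from the commit message


def recommend_chunk_size(total: int, target_chunks: int = 10, max_chunk: int = 200) -> int:
    """Auto-recommend a chunk size: aim for ~target_chunks chunks, capped at max_chunk."""
    return max(1, min(max_chunk, -(-total // target_chunks)))  # ceiling division


def dispatch(tasks: list) -> str:
    """Pick sync mode for small batches, queued chunked mode for large ones."""
    if len(tasks) <= TASK_THRESHOLD:
        return 'sync'  # process inline, no queue round-trip
    size = recommend_chunk_size(len(tasks))
    chunks = [tasks[i:i + size] for i in range(0, len(tasks), size)]
    # each chunk would then be enqueued as one job for a registered worker
    return f'queued:{len(chunks)}'
```

The design intent, as described in the commit, is that small requests avoid queue latency entirely, while large ones are split into roughly even chunks for background workers.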

Technical Highlights:
- All task management data stored in platform_schema.job.data (JSONB)
- Business tables remain clean (no task management fields)
- CheckpointService is generic (shared by all modules)
- Zero code duplication (DRY principle)
- Follows 3-layer architecture principle
- Zero additional cost (no Redis needed, save 8400 CNY/year)
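As a purely hypothetical illustration of "all task management data in `platform_schema.job.data` (JSONB), business tables stay clean": a job row's `data` column might carry the business payload and the checkpoint state side by side. The field names below are invented for the sketch; the real layout is defined by CheckpointService.

```python
import json

# Hypothetical shape of a job.data JSONB payload -- illustrative only.
job_data = {
    "module": "asl-screening",         # owning module (CheckpointService is generic)
    "payload": {"study_id": "S-001"},  # business input; business tables hold no task state
    "checkpoint": {                    # progress state kept in job.data, not in business tables
        "processed": 120,
        "total": 500,
        "last_id": "rec-120",
    },
}


def resume_offset(data: dict) -> int:
    """Where a restarted worker would pick up, according to the checkpoint."""
    return data.get("checkpoint", {}).get("processed", 0)


# Round-trip through JSON, as a JSONB column would store it.
print(resume_offset(json.loads(json.dumps(job_data))))
```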

Code Statistics:
- New code: ~1750 lines
- Modified code: ~500 lines
- Test code: ~1800 lines
- Documentation: ~3000 lines

Testing:
- Unit tests: 8/8 passed
- Integration tests: 2/2 passed
- Architecture validation: passed
- Linter errors: 0

Files:
- Platform layer: PostgresCacheAdapter, PgBossQueue, CheckpointService, utils
- ASL module: screeningService, screeningWorker
- DC module: ExtractionController, extractionWorker
- Tests: 11 test files
- Docs: Updated 4 key documents

Status: Phase 1-7 completed, Phase 8-9 pending
2025-12-13 16:10:04 +08:00

"""
高级筛选操作
提供多条件筛选功能支持AND/OR逻辑组合。
"""
import pandas as pd
from typing import List, Dict, Any, Literal
def apply_filter(
    df: pd.DataFrame,
    conditions: List[Dict[str, Any]],
    logic: Literal['and', 'or'] = 'and'
) -> pd.DataFrame:
    """
    Apply filter conditions to a DataFrame.

    Args:
        df: Input DataFrame.
        conditions: List of filter conditions; each condition contains:
            - column: column name
            - operator: operator (=, !=, >, <, >=, <=, contains, not_contains,
              starts_with, ends_with, is_null, not_null)
            - value: comparison value (not required for is_null and not_null)
        logic: How to combine the conditions ('and' or 'or').

    Returns:
        The filtered DataFrame.

    Examples:
        >>> df = pd.DataFrame({'age': [25, 35, 45], 'gender': ['M', 'F', 'M']})
        >>> conditions = [
        ...     {'column': 'age', 'operator': '>', 'value': 30},
        ...     {'column': 'gender', 'operator': '=', 'value': 'F'}
        ... ]
        >>> result = apply_filter(df, conditions, logic='and')
        >>> len(result)
        1
    """
    if not conditions:
        raise ValueError('Filter conditions must not be empty')
    if df.empty:
        return df

    # Build a boolean mask for each condition
    masks = []
    for cond in conditions:
        column = cond['column']
        operator = cond['operator']
        value = cond.get('value')

        # Validate that the column exists
        if column not in df.columns:
            raise KeyError(f"Column '{column}' does not exist")

        # Build the mask for this operator
        if operator == '=':
            mask = df[column] == value
        elif operator == '!=':
            mask = df[column] != value
        elif operator == '>':
            mask = df[column] > value
        elif operator == '<':
            mask = df[column] < value
        elif operator == '>=':
            mask = df[column] >= value
        elif operator == '<=':
            mask = df[column] <= value
        elif operator == 'contains':
            # regex=False: treat value as a literal substring, not a regex pattern
            mask = df[column].astype(str).str.contains(str(value), na=False, regex=False)
        elif operator == 'not_contains':
            mask = ~df[column].astype(str).str.contains(str(value), na=False, regex=False)
        elif operator == 'starts_with':
            mask = df[column].astype(str).str.startswith(str(value), na=False)
        elif operator == 'ends_with':
            mask = df[column].astype(str).str.endswith(str(value), na=False)
        elif operator == 'is_null':
            mask = df[column].isna()
        elif operator == 'not_null':
            mask = df[column].notna()
        else:
            raise ValueError(f"Unsupported operator: {operator}")
        masks.append(mask)

    # Combine all condition masks
    if logic == 'and':
        final_mask = pd.concat(masks, axis=1).all(axis=1)
    elif logic == 'or':
        final_mask = pd.concat(masks, axis=1).any(axis=1)
    else:
        raise ValueError(f"Unsupported logic operator: {logic}")

    # Apply the filter
    result = df[final_mask].copy()

    # Print summary statistics (df is non-empty here, so no division by zero)
    original_rows = len(df)
    filtered_rows = len(result)
    removed_rows = original_rows - filtered_rows
    print(f'Original rows: {original_rows}')
    print(f'After filtering: {filtered_rows}')
    print(f'Removed: {removed_rows} rows ({removed_rows / original_rows * 100:.1f}%)')
    return result
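A quick usage demo of the `apply_filter` API, here combining conditions with OR logic. In the real service you would import the function (e.g. `from operations.filter import apply_filter`); to keep this demo self-contained, a condensed stand-in covering only the two operators used below is defined inline. The sample data is illustrative.

```python
import pandas as pd


# Condensed stand-in for the full apply_filter above, supporting only the
# '<' and 'is_null' operators this demo needs -- for self-containment only.
def apply_filter(df, conditions, logic='and'):
    masks = []
    for cond in conditions:
        col, op, val = cond['column'], cond['operator'], cond.get('value')
        if op == '<':
            masks.append(df[col] < val)
        elif op == 'is_null':
            masks.append(df[col].isna())
    combined = pd.concat(masks, axis=1)
    final = combined.all(axis=1) if logic == 'and' else combined.any(axis=1)
    return df[final].copy()


df = pd.DataFrame({
    'age': [25, 35, 45, 52],
    'city': ['Beijing', 'Shanghai', None, 'Beijing'],
})

# Keep rows that are either under 30 OR have no city recorded.
conditions = [
    {'column': 'age', 'operator': '<', 'value': 30},
    {'column': 'city', 'operator': 'is_null'},
]
result = apply_filter(df, conditions, logic='or')
print(result)
```

With `logic='and'` the same conditions would instead require both to hold on every kept row, which matches none of the sample rows here.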