Paper Claim Audit performs zero-context verification that every numeric claim, comparison, and scope statement in a manuscript matches raw experiment outputs. The Skill ingests only paper source files (e.g., .tex sections and tables) and raw result files (JSON/JSONL/CSV/TSV, wandb-summary.json, metrics/eval_results.json, config/args YAML or JSON), and explicitly excludes logs, summaries, and executor narratives. It programmatically extracts the claimed numbers, recomputes metrics from the raw files, checks rounding, seed-versus-average reporting, config mismatches, and reported deltas, and flags any inconsistencies. Use cases include pre-submission checks, independent audit requests (e.g., "check paper claims", "verify the paper's numbers"), and post-experiment verification to guard against confirmation bias. Its core advantages are strict zero-context independence, reproducible numeric cross-checks, clear discrepancy reports with pointers to the source evidence, and actionable correction suggestions that keep the paper faithful to its evidence.
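The claim-versus-raw-output check can be sketched in a few lines. This is a minimal illustration, not the Skill's actual implementation: the function names (`extract_claims`, `audit`), the regex for picking numbers out of LaTeX, and the rounding-at-reported-precision rule are all assumptions chosen to show the shape of the workflow — extract claimed values, recompute candidates from raw metrics, and flag anything that does not match at the precision the paper reports.

```python
import json
import re


def extract_claims(tex: str) -> list[float]:
    """Pull decimal numeric claims (e.g. 84.2) out of LaTeX source."""
    return [float(m) for m in re.findall(r"(?<![\w.])(\d+\.\d+)(?![\w.])", tex)]


def matches_at_reported_precision(claimed: float, raw: float) -> bool:
    """Check that the raw value rounds to the claim at the paper's precision."""
    decimals = len(str(claimed).split(".")[1])
    return round(raw, decimals) == claimed


def audit(tex: str, metrics: dict[str, float]) -> list[tuple[float, str]]:
    """Return (claimed_value, verdict) for every numeric claim in the source."""
    raws = list(metrics.values())
    return [
        (claim, "ok" if any(matches_at_reported_precision(claim, r) for r in raws)
         else "MISMATCH")
        for claim in extract_claims(tex)
    ]


# Hypothetical inputs: a sentence from a .tex file and a raw results JSON.
tex_snippet = r"Our model reaches \textbf{84.2} accuracy versus 81.7 for the baseline."
raw = json.loads('{"ours/acc": 0.84193, "baseline/acc": 0.81120}')
metrics = {k: v * 100 for k, v in raw.items()}  # raw fractions -> percent scale

print(audit(tex_snippet, metrics))
# → [(84.2, 'ok'), (81.7, 'MISMATCH')]  # 81.120 does not round to 81.7
```

A real audit would additionally attach source pointers (file, key, seed) to each verdict and distinguish rounding errors from scale or config mismatches; the MISMATCH verdict here is the hook where such a discrepancy report would be generated.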