Three-layer module: find_neuropose_processes() scans the process
table via psutil for running watch/serve instances; terminate_processes()
SIGINTs with a configurable grace period before optional SIGKILL
escalation; wipe_state() clears $data_dir/in/, out/, failed/,
the .neuropose.lock file, and leftover .ingest_<uuid>/ staging dirs
while preserving the container directories themselves. reset_pipeline()
composes the three and refuses to wipe while any process survives
termination.
CLI wraps it with --yes/-y, --keep-failed, --force-kill,
--grace-seconds, and --dry-run/-n. Always prints a preview before
prompting; returns EXIT_USAGE=2 when survivors block the wipe.
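
The flag surface could be wired up roughly as below; the program name and help strings are invented for illustration, only the flags and EXIT_USAGE=2 come from the note:

```python
import argparse

EXIT_USAGE = 2  # returned when surviving processes block the wipe


def build_parser():
    parser = argparse.ArgumentParser(
        prog="neuropose-reset",  # assumed name
        description="Terminate watch/serve processes and wipe pipeline state.",
    )
    parser.add_argument("-y", "--yes", action="store_true",
                        help="skip the confirmation prompt")
    parser.add_argument("-n", "--dry-run", action="store_true",
                        help="print the preview only, change nothing")
    parser.add_argument("--keep-failed", action="store_true",
                        help="leave failed/ contents in place")
    parser.add_argument("--force-kill", action="store_true",
                        help="escalate to SIGKILL after the grace period")
    parser.add_argument("--grace-seconds", type=float, default=5.0,
                        help="SIGINT grace period before escalation")
    return parser
```
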
Unblocks the Mac benchmark iteration loop where partially-complete
runs need to be cleared between experiments.

One shared CURRENT_VERSION across the three top-level serialised
payloads (VideoPredictions, JobResults, BenchmarkResult), with
per-schema registries populated via register_*_migration(from_version)
decorators. FutureSchemaError and MigrationNotFoundError surface bad
chains clearly. CURRENT_VERSION=2 with v1→v2 migrations registered
that add an optional provenance field to the payload dicts.
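
A sketch of what the registry and upgrade driver might look like. CURRENT_VERSION, the error types, and the register_*_migration decorator family come from the note; the registry layout, the migrate() driver, and the concrete decorator name shown are assumptions, with only one of the three per-schema registries illustrated:

```python
CURRENT_VERSION = 2


class FutureSchemaError(Exception):
    """Payload claims a version newer than this code understands."""


class MigrationNotFoundError(Exception):
    """No registered step covers a version on the upgrade path."""


# One registry per top-level schema (assumed dict-of-dicts layout).
_MIGRATIONS = {"VideoPredictions": {}, "JobResults": {}, "BenchmarkResult": {}}


def _make_register(schema):
    def register(from_version):
        def wrap(fn):
            _MIGRATIONS[schema][from_version] = fn
            return fn
        return wrap
    return register


# One decorator per schema, e.g. (assumed concrete name):
register_video_predictions_migration = _make_register("VideoPredictions")


def migrate(schema, payload):
    """Walk registered steps from the payload's version up to CURRENT_VERSION."""
    version = payload.get("version", 1)
    if version > CURRENT_VERSION:
        raise FutureSchemaError(f"{schema} v{version} > v{CURRENT_VERSION}")
    while version < CURRENT_VERSION:
        step = _MIGRATIONS[schema].get(version)
        if step is None:
            raise MigrationNotFoundError(f"{schema} v{version} -> v{version + 1}")
        payload = step(payload)
        version = payload["version"]
    return payload


@register_video_predictions_migration(1)
def _v1_to_v2(payload):
    # v1 -> v2: add the optional provenance field described in the note.
    return {**payload, "version": 2, "provenance": None}
```
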
Tested standalone; io.py is wired through the migrator in a follow-up
commit that introduces the Provenance schema those migrations target.