Levi Neuwirth 979beb1078 add AnalysisConfig and AnalysisReport schemas
neuropose.analyzer.pipeline ships two top-level pydantic schemas:

AnalysisConfig — what a user writes in YAML. Inputs (primary plus an
optional reference), preprocessing (person_index, with room to grow),
an optional segmentation stage as a discriminated union of gait_cycles,
gait_cycles_bilateral, and extractor, and a required analysis stage
as a discriminated union of dtw, stats, and none.
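
A config exercising these fields might look like the following. This is a
hypothetical sketch: only the field names named above come from the commit
message, and the exact nesting and value shapes are assumptions.

```yaml
# Hypothetical AnalysisConfig sketch; nesting and values are assumed.
inputs:
  primary: patient_walk.pkl
  reference: control_walk.pkl   # optional; required when analysis is dtw
preprocessing:
  person_index: 0
segmentation:                   # optional stage
  kind: gait_cycles
analysis:                       # required stage, discriminated union
  kind: dtw
```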

AnalysisReport — runtime output carrying the config, a Provenance
envelope, per-input summaries, the produced segmentations, and a
results payload whose shape mirrors the analysis stage (DtwResults,
StatsResults, NoResults). schema_version defaults to CURRENT_VERSION.

Cross-field invariants enforced at parse time via model_validator:
method='dtw_relation' requires joint_i/joint_j and refuses
representation='angles'; representation='angles' requires non-empty
angle_triplets; analysis.kind='dtw' requires inputs.reference;
analysis.kind='stats' refuses a reference. Typos fail in
milliseconds instead of after a multi-minute predictions load.
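
The parse-time checks follow the standard pydantic v2 model_validator
pattern. A minimal, self-contained sketch of the two reference-related
invariants; class names and field shapes are simplified stand-ins, not
the real models:

```python
from typing import Literal, Optional

from pydantic import BaseModel, ValidationError, model_validator


class Inputs(BaseModel):
    primary: str
    reference: Optional[str] = None


class Analysis(BaseModel):
    kind: Literal["dtw", "stats", "none"]


class AnalysisConfigSketch(BaseModel):
    inputs: Inputs
    analysis: Analysis

    @model_validator(mode="after")
    def _check_reference(self) -> "AnalysisConfigSketch":
        # dtw compares two recordings, so a reference input is mandatory;
        # stats summarizes one recording, so a reference is rejected.
        if self.analysis.kind == "dtw" and self.inputs.reference is None:
            raise ValueError("analysis.kind='dtw' requires inputs.reference")
        if self.analysis.kind == "stats" and self.inputs.reference is not None:
            raise ValueError("analysis.kind='stats' refuses a reference")
        return self
```

A bad combination raises ValidationError at model construction, which is
what makes typos fail before any predictions are loaded.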

neuropose.migrations gains a third registry for AnalysisReport
(_ANALYSIS_REPORT_MIGRATIONS + register_analysis_report_migration +
migrate_analysis_report), ready for future schema changes. No v1→v2
migration is registered because AnalysisReport first shipped at v2.
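
The commit message names the registry but not its internals. A
version-keyed dict plus a registration decorator is the usual shape for
this kind of registry; the following is a hypothetical sketch using the
public names from the description, and the real module may differ:

```python
from typing import Any, Callable

# A migration upgrades a report dict from one schema version to the next.
Migration = Callable[[dict[str, Any]], dict[str, Any]]

# Hypothetical internals: map each source version to its upgrade function.
_ANALYSIS_REPORT_MIGRATIONS: dict[int, Migration] = {}


def register_analysis_report_migration(
    from_version: int,
) -> Callable[[Migration], Migration]:
    """Decorator registering an upgrade from `from_version` to the next version."""
    def decorator(fn: Migration) -> Migration:
        _ANALYSIS_REPORT_MIGRATIONS[from_version] = fn
        return fn
    return decorator


def migrate_analysis_report(
    report: dict[str, Any], target_version: int
) -> dict[str, Any]:
    """Apply registered migrations one version at a time up to target_version."""
    while report["schema_version"] < target_version:
        step = _ANALYSIS_REPORT_MIGRATIONS[report["schema_version"]]
        report = step(report)
    return report
```

With no migrations registered, a v2 report passes through untouched, which
matches AnalysisReport having first shipped at v2.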

Execution, CLI wiring, and example configs land in follow-up commits.
2026-04-22 11:13:36 -04:00
| Path | Last commit | Date |
| --- | --- | --- |
| .claude | pin tf to ensure compatability with tensorflow-metal | 2026-04-16 15:26:55 -04:00 |
| .github/workflows | deployment | 2026-04-13 18:08:05 -04:00 |
| benchmarks | pin tf to ensure compatability with tensorflow-metal | 2026-04-16 15:26:55 -04:00 |
| docs | add neuropose reset subcommand for pipeline-wide state wipe | 2026-04-18 17:15:24 -04:00 |
| scripts | pin tf to ensure compatability with tensorflow-metal | 2026-04-16 15:26:55 -04:00 |
| src/neuropose | add AnalysisConfig and AnalysisReport schemas | 2026-04-22 11:13:36 -04:00 |
| tests | add AnalysisConfig and AnalysisReport schemas | 2026-04-22 11:13:36 -04:00 |
| .dockerignore | pin tensorflow, comprehensiveness | 2026-04-14 09:39:12 -04:00 |
| .gitignore | tooling | 2026-04-15 11:41:27 -04:00 |
| .pre-commit-config.yaml | dev tooling | 2026-04-13 11:59:18 -04:00 |
| .python-version | init neuropose | 2026-04-13 11:55:14 -04:00 |
| AUTHORS.md | init neuropose | 2026-04-13 11:55:14 -04:00 |
| CHANGELOG.md | add AnalysisConfig and AnalysisReport schemas | 2026-04-22 11:13:36 -04:00 |
| CITATION.cff | init neuropose | 2026-04-13 11:55:14 -04:00 |
| Dockerfile | pin tensorflow, comprehensiveness | 2026-04-14 09:39:12 -04:00 |
| LICENSE | init neuropose | 2026-04-13 11:55:14 -04:00 |
| README.md | pin tensorflow, comprehensiveness | 2026-04-14 09:39:12 -04:00 |
| RESEARCH.md | pin tf to ensure compatability with tensorflow-metal | 2026-04-16 15:26:55 -04:00 |
| TECHNICAL.md | add TECHNICAL.md engineering roadmap | 2026-04-18 17:00:36 -04:00 |
| mkdocs.yml | add neuropose reset subcommand for pipeline-wide state wipe | 2026-04-18 17:15:24 -04:00 |
| pyproject.toml | pin tf to ensure compatability with tensorflow-metal | 2026-04-16 15:26:55 -04:00 |
| uv.lock | pin tf to ensure compatability with tensorflow-metal | 2026-04-16 15:26:55 -04:00 |

README.md

NeuroPose (rewrite)

Ground-up rewrite of the prior NeuroPose internal prototype. The repository is private while the IRB data-handling policy is being authored; this README is aimed at contributors working on the rewrite, not external users.

Layout

neuropose/
├── .github/workflows/           # CI: ruff + pyright + pytest (ci.yml), mkdocs (docs.yml)
├── src/neuropose/
│   ├── __init__.py              # version only
│   ├── config.py                # pydantic-settings Settings class
│   ├── io.py                    # prediction schema, load/save helpers
│   ├── estimator.py             # per-video MeTRAbs worker
│   ├── interfacer.py            # filesystem-polling daemon
│   ├── visualize.py             # per-frame 2D + 3D overlay rendering
│   ├── cli.py                   # typer app (watch | process | analyze)
│   ├── _model.py                # MeTRAbs download + SHA-256 verify + load
│   └── analyzer/                # post-processing subpackage
│       ├── dtw.py               # FastDTW helpers
│       └── features.py          # normalization, padding, joint angles, stats
├── tests/
│   ├── conftest.py              # env isolation, synthetic video, fake model
│   ├── unit/                    # fast, no model download
│   └── integration/             # marked @slow, downloads the real MeTRAbs model
├── docs/                        # mkdocs-material site (mkdocs.yml at repo root)
├── scripts/download_model.py    # pre-warm the model cache
├── pyproject.toml               # hatchling build, dev group (PEP 735)
├── Dockerfile                   # CPU image, non-root, /data volume
├── CHANGELOG.md                 # Keep a Changelog format
├── RESEARCH.md                  # DTW methodology + MeTRAbs self-hosting R&D log
├── AUTHORS.md
├── CITATION.cff
└── LICENSE                      # MIT

Architecture

Three stages, one module each:

  • estimator — per-video worker. Streams frames from an input video via OpenCV, runs MeTRAbs on each frame, and returns a validated VideoPredictions (per-frame boxes, poses3d, poses2d plus a VideoMetadata envelope with frame count, fps, and resolution). Pure library — no filesystem semantics.
  • interfacer — filesystem-polling daemon. Watches the configured input directory for new job subdirectories, dispatches each to an Estimator, and persists job state (status.json) across crashes and restarts. Single instance enforced via fcntl.flock. Owns the input → output → failed directory lifecycle.
  • analyzer — post-processing subpackage. FastDTW-based motion comparison (dtw_all, dtw_per_joint, dtw_relation) and joint-angle / feature-statistics helpers. Pure functions operating on VideoPredictions. Heavy dependencies (fastdtw, scipy) are lazy-imported so import neuropose.analyzer works without the analysis extra.

Configuration is centralized in src/neuropose/config.py (a pydantic-settings Settings class). The runtime data directory defaults to $XDG_DATA_HOME/neuropose/jobs/ and never lives inside the repository.
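
The XDG default can be resolved with the standard library alone. A sketch
of how such a default might be computed; the helper name is hypothetical
and the actual Settings class may express this differently:

```python
import os
from pathlib import Path


def default_data_dir() -> Path:
    # Hypothetical helper mirroring the documented default:
    # $XDG_DATA_HOME/neuropose/jobs/, falling back to ~/.local/share
    # when XDG_DATA_HOME is unset, per the XDG Base Directory spec.
    base = os.environ.get("XDG_DATA_HOME") or str(Path.home() / ".local" / "share")
    return Path(base) / "neuropose" / "jobs"
```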

Development setup

Requires Python 3.11 and uv.

git clone https://git.levineuwirth.org/neuwirth/neuropose.git
cd neuropose
uv sync --group dev

uv sync --group dev creates .venv/ automatically and installs the runtime stack (pydantic, typer, OpenCV, TensorFlow, matplotlib) plus the full dev toolchain (pytest, ruff, pyright, pre-commit, mkdocs-material, fastdtw, scipy). First sync downloads ~600 MB of TensorFlow; subsequent runs hit the uv cache.

Install the pre-commit hooks:

uv run pre-commit install

Running tests

uv run pytest                  # unit tests only (default)
uv run pytest --runslow        # unit + integration; downloads ~2 GB MeTRAbs model
uv run pytest -m "not slow"    # explicitly exclude slow tests

Integration tests live under tests/integration/ and are gated behind @pytest.mark.slow plus a custom --runslow flag implemented in tests/conftest.py. Without the flag, slow tests are skipped at collection time. The first --runslow run downloads the pinned MeTRAbs tarball (~2 GB) into a session-scoped temp cache; subsequent tests in the same run reuse it.
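
The gating described above follows the well-known --runslow pattern from
the pytest documentation. A sketch of what tests/conftest.py likely
contains; the exact option help text and skip reason are assumptions:

```python
import pytest


def pytest_addoption(parser):
    # Custom flag: integration tests only run when explicitly requested.
    parser.addoption(
        "--runslow",
        action="store_true",
        default=False,
        help="run slow integration tests (downloads the MeTRAbs model)",
    )


def pytest_collection_modifyitems(config, items):
    if config.getoption("--runslow"):
        return  # flag given: leave slow tests runnable
    skip_slow = pytest.mark.skip(reason="need --runslow option to run")
    for item in items:
        if "slow" in item.keywords:
            item.add_marker(skip_slow)
```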

Lint and type-check

uv run ruff check .
uv run ruff format .
uv run pyright

CI runs lint, typecheck, and test as three parallel jobs on every push and PR to main — see .github/workflows/ci.yml.

Docs

uv run mkdocs serve            # live-reload preview at http://127.0.0.1:8000
uv run mkdocs build --strict   # same check CI runs

The API reference pages under docs/api/ are auto-generated from source docstrings via mkdocstrings, so they cannot drift out of sync.