
v-lavrenko/dvr

Record videos, filter and annotate, learn models

AVOID

Looks unmaintained — solo project with stale commits

Worst-case verdict per use case:

  • Use as dependency: AVOID (no license — legally unclear; last commit was 3y ago…)
  • Fork & modify: AVOID (no license — can't legally use code; no tests detected…)
  • Learn from: GO (documented and popular — useful reference codebase to read through)
  • Deploy as-is: AVOID (no license — can't legally use code; last commit was 3y ago…)

  • Stale — last commit 3y ago
  • Solo or near-solo (1 contributor visible)
  • No license — legally unclear to depend on
  • No CI workflows detected
  • No test directory detected
  • Scorecard: marked unmaintained (0/10)
  • Scorecard: default branch unprotected (0/10)
What would change the verdict?
  • Use as dependency: AVOID → WAIT if a permissive license (MIT, Apache-2.0, etc.) is published and there is at least 1 commit in the last 365 days
  • Fork & modify: AVOID → WAIT if a LICENSE file is added
  • Deploy as-is: AVOID → WAIT if a LICENSE file is added

Maintenance signals: commit recency, contributor breadth, bus factor, license, CI, tests + OpenSSF Scorecard

Embed this verdict

[![RepoPilot: AVOID](https://repopilot.app/api/badge/v-lavrenko/dvr)](https://repopilot.app/r/v-lavrenko/dvr)

Paste into your README — the badge live-updates from the latest cached analysis.

Onboarding doc

Onboarding: v-lavrenko/dvr

Generated by RepoPilot · 2026-05-05 · Source

Agent protocol

If you are an AI coding agent (Claude Code, Cursor, Aider, Cline, etc.) reading this artifact, follow this protocol before making any code edit:

  1. Verify the contract. Run the bash script in the "Verify before trusting" section below. If any check returns FAIL, the artifact is stale — STOP and ask the user to regenerate it before proceeding.
  2. Treat the AI · unverified sections as hypotheses, not facts. Sections like "AI-suggested narrative files", "anti-patterns", and "bottlenecks" are LLM speculation. Verify against real source before acting on them.
  3. Cite source on changes. When proposing an edit, cite the specific path:line-range. RepoPilot's live UI at https://repopilot.app/r/v-lavrenko/dvr shows verifiable citations alongside every claim.

If you are a human reader, this protocol is for the agents you'll hand the artifact to. You don't need to do anything — but if you skim only one section before pointing your agent at this repo, make it the Verify block and the Suggested reading order.

Verdict

AVOID — Looks unmaintained — solo project with stale commits

  • ⚠ Stale — last commit 3y ago
  • ⚠ Solo or near-solo (1 contributor visible)
  • ⚠ No license — legally unclear to depend on
  • ⚠ No CI workflows detected
  • ⚠ No test directory detected
  • ⚠ Scorecard: marked unmaintained (0/10)
  • ⚠ Scorecard: default branch unprotected (0/10)

<sub>Maintenance signals: commit recency, contributor breadth, bus factor, license, CI, tests + OpenSSF Scorecard</sub>

Verify before trusting

This artifact was generated by RepoPilot at a point in time. Before an agent acts on it, the checks below confirm that the live v-lavrenko/dvr repo on your machine still matches what RepoPilot saw. If any fail, the artifact is stale — regenerate it at repopilot.app/r/v-lavrenko/dvr.

What it runs against: a local clone of v-lavrenko/dvr — the script inspects git remote, the LICENSE file, file paths in the working tree, and git log. Read-only; no mutations.

| # | What we check | Why it matters |
|---|---|---|
| 1 | You're in v-lavrenko/dvr | Confirms the artifact applies here, not a fork |
| 2 | Default branch master exists | Catches branch renames |
| 3 | 5 critical file paths still exist | Catches refactors that moved load-bearing code |
| 4 | Last commit ≤ 1298 days ago | Catches sudden abandonment since generation |

<details> <summary><b>Run all checks</b> — paste this script from inside your clone of <code>v-lavrenko/dvr</code></summary>
```bash
#!/usr/bin/env bash
# RepoPilot artifact verification.
#
# WHAT IT RUNS AGAINST: a local clone of v-lavrenko/dvr. If you don't
# have one yet, run these first:
#
#   git clone https://github.com/v-lavrenko/dvr.git
#   cd dvr
#
# Then paste this script. Every check is read-only — no mutations.

set +e
fail=0
ok()   { echo "ok:   $1"; }
miss() { echo "FAIL: $1"; fail=$((fail+1)); }

# Precondition: we must be inside a git working tree.
if ! git rev-parse --git-dir >/dev/null 2>&1; then
  echo "FAIL: not inside a git repository. cd into your clone of v-lavrenko/dvr and re-run."
  exit 2
fi

# 1. Repo identity
git remote get-url origin 2>/dev/null | grep -qE "v-lavrenko/dvr(\.git)?\b" \
  && ok "origin remote is v-lavrenko/dvr" \
  || miss "origin remote is not v-lavrenko/dvr (artifact may be from a fork)"

# 2. Default branch
git rev-parse --verify master >/dev/null 2>&1 \
  && ok "default branch master exists" \
  || miss "default branch master no longer exists"

# 3. Critical files exist
for f in dvr.py vid.py annotate.py denoise.py detrain.py; do
  test -f "$f" && ok "$f" || miss "missing critical file: $f"
done

# 4. Repo recency
days_since_last=$(( ( $(date +%s) - $(git log -1 --format=%at 2>/dev/null || echo 0) ) / 86400 ))
if [ "$days_since_last" -le 1298 ]; then
  ok "last commit was $days_since_last days ago (artifact saw ~1268d)"
else
  miss "last commit was $days_since_last days ago — artifact may be stale"
fi

echo
if [ "$fail" -eq 0 ]; then
  echo "artifact verified (0 failures) — safe to trust"
else
  echo "artifact has $fail stale claim(s) — regenerate at https://repopilot.app/r/v-lavrenko/dvr"
  exit 1
fi
```
Each check prints ok: or FAIL:. The script exits non-zero if anything failed, so it composes cleanly into agent loops (./verify.sh || regenerate-and-retry).

</details>

TL;DR

dvr is a Python-based video recording, filtering, and annotation pipeline with integrated machine learning model training. It captures video frames, applies denoising filters (see denoise.py and run_denoise.csh), annotates sequences with metadata, and trains detectron-style models (detrain.py, detest.py) on labeled video data for object detection and tracking tasks. Flat structure with core functionality split across task-specific modules: dvr.py and vid.py handle recording/video I/O, denoise.py wraps filtering logic (with saved configuration in denoise.saved.txt), annotate.py manages labeling, and detrain.py/detest.py/dneval.py form a training-evaluation pipeline. vutil.py appears to be shared utilities. The 017mm-denoise.txt suggests parameter tuning logs or model configs.

Who it's for

Computer vision researchers and ML engineers who need to record real-world video datasets, preprocess them with noise reduction, manually or semi-automatically annotate frames, and train custom detection models without building infrastructure from scratch.

Maturity & risk

Experimental/research-stage project. The codebase is roughly 31KB of Python with no unit tests (detest.py and dneval.py are model-evaluation scripts, not test suites), no visible CI setup, and a single maintainer (v-lavrenko). It was likely built for the author's own research rather than production deployment.

Standard open source risks apply.

Active areas of work

No recent activity indicators (commit messages, open PRs, or issues) are visible in the provided data. The presence of both denoise.py and denoise.saved.txt alongside run_denoise.csh suggests active iteration on the denoising stage, possibly the last worked-on component.

Get running

Clone the repo: git clone https://github.com/v-lavrenko/dvr.git && cd dvr. No requirements.txt or setup.py is visible, so infer dependencies from each module's imports (likely OpenCV, NumPy, and a deep learning framework) and install them manually. Check the README, if present, for expected video input paths, then run python dvr.py or python annotate.py to start recording/annotating.

Daily commands: no Makefile or setup.py is visible. Likely: python dvr.py to record, python annotate.py to label, csh run_denoise.csh to filter (it is a C-shell script, not bash), python detrain.py to train models, python detest.py to evaluate. Exact commands require inspecting the argument parsers in each module.

Map of the codebase

  • dvr.py — Main entry point and orchestrator for the DVR video recording, filtering, and annotation pipeline—all workflows route through this module.
  • vid.py — Core video abstraction and manipulation layer; handles video I/O, frame access, and fundamental video operations used across all other modules.
  • annotate.py — Annotation system for labeling and marking video frames; essential for the labeling workflow that feeds model training.
  • denoise.py — Video denoising module that filters noise from video data; critical preprocessing step in the video processing pipeline.
  • detrain.py — Model training orchestrator; responsible for fitting detection/classification models on annotated video data.
  • dneval.py — Model evaluation and validation module; assesses trained model performance on test video data.
  • vutil.py — Utility functions and helpers shared across video processing, annotation, and model modules; foundational support library.

Components & responsibilities

  • dvr.py (Orchestrator) (Python argparse, subprocess calls) — Routes user commands to appropriate sub-workflows (record → denoise → annotate → train → evaluate); manages global pipeline state.
    • Failure mode: Invalid command routing; incorrect argument passing to downstream modules; uncaught exceptions in one stage halt entire pipeline.
  • vid.py (Video I/O) (OpenCV (cv2), FFmpeg codecs) — Opens video files, exposes frame iterators, handles codec negotiation, manages video metadata (fps, resolution, frame count).
    • Failure mode: Unsupported codec; corrupted video file; insufficient disk I/O bandwidth; frame decode errors halt iteration.
  • denoise.py (Preprocessing Filter) (NumPy, SciPy, OpenCV image filters) — Consumes frame stream from vid.py, applies noise reduction (e.g., bilateral filter, morphological ops), yields cleaned frames.
    • Failure mode: Memory exhaustion on large frames; filter misconfiguration; performance degradation on high-resolution video.
  • annotate.py (Labeling UI) (OpenCV GUI (or Tkinter), Python I/O) — Interactive frame-by-frame viewer; accepts user labels (bounding boxes, class tags, attributes); persists annotations to file.
    • Failure mode: UI blocking on slow video decode; label data corruption; unsaved annotations lost on crash.
  • detrain.py (Model Training) (scikit-learn, TensorFlow/PyTorch (inferred), pickle/joblib serialization) — Loads annotated dataset from disk, splits train/val, trains detection/classification model, saves weights to disk.
    • Failure mode: Out-of-memory on large datasets; non-convergence; corrupted annotation data causes training to fail; no checkpointing.
  • dneval.py (Evaluation) (scikit-learn metrics, model loading/inference) — Loads trained model weights, applies to test video frames, computes metrics (accuracy, precision, recall, F1).
    • Failure mode: Model weights not found; test data missing; metric computation errors on edge cases (no positive labels, etc.).
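The stage boundaries above suggest a generator-based pipeline: each stage consumes a frame iterator and yields transformed frames. A minimal sketch of that shape (the function names and frame representation are assumptions for illustration; the actual APIs live in dvr.py and vid.py, presumably over NumPy arrays):

```python
# Hypothetical sketch of the record → denoise → annotate flow as generator
# composition. Frames are stand-ins here (plain 2D lists of pixel values).

def read_frames(source):
    """Stand-in for vid.py's frame iterator."""
    for frame in source:
        yield frame

def denoise(frames, threshold=0):
    """Stand-in for denoise.py: zero out pixel values at or below threshold."""
    for frame in frames:
        yield [[v if v > threshold else 0 for v in row] for row in frame]

def annotate(frames):
    """Stand-in for annotate.py: attach metadata to each frame."""
    for i, frame in enumerate(frames):
        yield {"index": i, "frame": frame, "labels": []}

# Compose the stages the way dvr.py's orchestrator plausibly does.
raw = [[[0, 5], [1, 9]], [[2, 0], [7, 3]]]
pipeline = annotate(denoise(read_frames(raw), threshold=1))
results = list(pipeline)
print(results[0]["frame"])  # → [[0, 5], [0, 9]]
```

Because each stage is lazy, a failure in one stage halts the whole chain, which matches the orchestrator failure mode noted above.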

How to make changes

Add a new video filter or preprocessing step

  1. Create a new Python module (e.g., filter_new.py) with a class that inherits from or follows the pattern of denoise.py (denoise.py)
  2. Implement apply() or process() method that accepts a video frame generator from vid.py and yields filtered frames (vid.py)
  3. Register the new filter in dvr.py's pipeline orchestration logic so it can be invoked from the CLI (dvr.py)
  4. Add utility helpers in vutil.py if the filter requires shared frame operations (vutil.py)

Add a new detection or classification model

  1. Create a new model class in detrain.py that loads/initializes your model architecture and implements a train() method (detrain.py)
  2. Implement inference logic in detest.py that loads the trained model and applies it to video frames (detest.py)
  3. Add evaluation metrics computation in dneval.py to assess model performance on held-out video test sets (dneval.py)
  4. Wire model selection into dvr.py so users can choose and train the new model via CLI arguments (dvr.py)
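A sketch of what such a model class could look like (everything here, from the class name to the train()/predict() split and the data format, is an assumption to illustrate the steps, not detrain.py's actual API; a trivial brightness threshold stands in for a real architecture):

```python
# Hypothetical model class following the train()/predict() shape the
# steps above describe.

class ThresholdDetector:
    def __init__(self):
        self.threshold = None

    def train(self, frames, labels):
        """Fit a single brightness threshold separating positive (object)
        pixels from negative ones. frames: list of pixel-value lists;
        labels: parallel list of 0/1 lists."""
        pos, neg = [], []
        for frame, lab in zip(frames, labels):
            for v, y in zip(frame, lab):
                (pos if y else neg).append(v)
        # Midpoint between class means, a stand-in for real training.
        self.threshold = (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2

    def predict(self, frame):
        return [1 if v >= self.threshold else 0 for v in frame]

model = ThresholdDetector()
model.train([[10, 200, 20, 220]], [[0, 1, 0, 1]])
print(model.predict([15, 230]))  # → [0, 1]
```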

Extend the annotation tool with new label types or workflows

  1. Define new label schemas and metadata fields in annotate.py's data structures (annotate.py)
  2. Add UI controls or keyboard shortcuts in annotate.py for new label input methods (annotate.py)
  3. Update dvr.py to pass annotation configuration to the annotate module based on CLI flags (dvr.py)
  4. If frame preprocessing is needed before annotation, leverage vutil.py helpers or add new ones (vutil.py)
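For step 1, a label schema could be sketched as plain dataclasses with JSON-friendly serialization (all field and class names here are hypothetical; align them with whatever annotate.py already persists):

```python
# Hypothetical annotation schema extension showing the shape of adding
# a new label type with a metadata field.
from dataclasses import dataclass, field, asdict
import json

@dataclass
class BoundingBox:
    x: int
    y: int
    w: int
    h: int

@dataclass
class FrameLabel:
    frame_index: int
    class_name: str
    box: BoundingBox
    # New metadata field, per step 1 of the checklist above.
    attributes: dict = field(default_factory=dict)

label = FrameLabel(3, "car", BoundingBox(10, 20, 64, 48),
                   attributes={"occluded": False})
print(json.dumps(asdict(label)))
```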

Why these technologies

  • Python — Dominant language for video processing, ML workflows, and rapid prototyping; integrates easily with OpenCV, TensorFlow/scikit-learn.
  • OpenCV (inferred from vid.py) — Industry-standard library for video codec handling, frame iteration, and basic image transformations needed for video I/O.
  • NumPy/SciPy (inferred) — Efficient array operations and signal processing for frame data, denoising filters, and numerical computations.
  • scikit-learn or TensorFlow (inferred from detrain.py/dneval.py) — Standard ML frameworks for training detection/classification models and computing performance metrics.

Trade-offs already made

  • Monolithic pipeline orchestrated in dvr.py rather than microservices

    • Why: Simpler deployment and configuration for a research/prototyping tool; video processing is I/O-bound, not network-bound.
    • Consequence: Limited horizontal scaling; all stages run on a single machine; easier to debug end-to-end workflows.
  • Interactive annotation (annotate.py) tightly coupled to pipeline rather than separate service

    • Why: Immediate feedback loop: users annotate, see results, retrain—reduces iteration time for model development.
    • Consequence: Annotation and training cannot be parallelized; UI blocking during model training.
  • Saved model state stored in plain text files (denoise.saved.txt, 017mm-denoise.txt)

    • Why: Simplicity and human readability for research and debugging; avoids binary serialization complexity.
    • Consequence: May not scale well for large models; version control friendly but no built-in versioning; no integrity guarantees.
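If the saved files follow a simple key=value convention (an assumption; their actual format is unknown), round-tripping that plain-text state is only a few lines, which is the simplicity this trade-off buys:

```python
# Hypothetical reader/writer for a key=value parameter file in the style
# the denoise.saved.txt trade-off describes. The real file format may differ.
import os
import tempfile

def save_params(path, params):
    with open(path, "w") as f:
        for key, value in params.items():
            f.write(f"{key}={value}\n")

def load_params(path):
    params = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#"):
                continue  # skip blanks and comments
            key, _, value = line.partition("=")
            params[key] = float(value) if "." in value else int(value)
    return params

path = os.path.join(tempfile.gettempdir(), "demo_denoise_params.txt")
save_params(path, {"sigma": 1.5, "kernel": 5})
print(load_params(path))  # → {'sigma': 1.5, 'kernel': 5}
```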

Non-goals (don't propose these)

  • Real-time video streaming or live inference
  • Distributed or cloud-based training
  • Multi-camera synchronization or 360° video support
  • Audio processing or multimodal fusion
  • Web UI or REST API endpoints
  • GPU acceleration (though underlying libraries may support it)

Traps & gotchas

  • run_denoise.csh is a C-shell script (not bash) — invoke it with csh/tcsh, and expect it to assume a specific directory layout or environment variables (e.g., input video path, output directory).
  • denoise.saved.txt likely contains serialized hyperparameters or model weights — deleting it will reset denoiser state.
  • No setup.py or requirements.txt is visible: dependencies must be inferred from imports (likely OpenCV, PyTorch, NumPy).
  • Video codec support depends on the system ffmpeg/libav installation.

Architecture

Concepts to learn

  • Bilateral Filtering — Core denoising technique in dvr; reduces noise while preserving edges—critical for maintaining annotation quality without blurring object boundaries
  • Frame Codec (H.264/VP9) — vid.py must select appropriate codec for video compression; affects file size, inference latency, and compatibility across playback systems
  • Anchor Boxes (Object Detection) — detrain.py likely uses anchor-based detection (Faster R-CNN style); understanding box parameterization is key to interpreting model predictions and annotation formats
  • Mean Average Precision (mAP) — dneval.py computes mAP to measure detection quality; required to compare model iterations and validate training improvements
  • Image Pyramid / Multi-scale Processing — denoise.py may use multi-resolution denoising; vid.py frame resizing suggests model inference at fixed input scales—understanding scale tradeoffs is essential for deployment
  • Batch Normalization — Core layer in detrain.py model backbone; affects convergence, gradient flow, and inference-time batch dependency—critical for checkpoint portability
  • Non-Maximum Suppression (NMS) — Post-processing in detest.py to filter overlapping detections; tuning NMS threshold impacts precision-recall tradeoff and final model quality
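Of these concepts, NMS is compact enough to show concretely. Below is the standard greedy algorithm over [x1, y1, x2, y2] boxes with scores (a generic reference implementation, not code from this repo):

```python
# Greedy non-maximum suppression: keep the highest-scoring box, drop
# remaining boxes that overlap it above the IoU threshold, repeat.

def iou(a, b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def nms(detections, iou_threshold=0.5):
    """detections: list of (box, score) pairs. Returns the kept subset."""
    remaining = sorted(detections, key=lambda d: d[1], reverse=True)
    kept = []
    while remaining:
        best = remaining.pop(0)
        kept.append(best)
        remaining = [d for d in remaining
                     if iou(best[0], d[0]) < iou_threshold]
    return kept

dets = [([0, 0, 10, 10], 0.9),    # strongest detection
        ([1, 1, 11, 11], 0.8),    # heavy overlap with the first: suppressed
        ([20, 20, 30, 30], 0.7)]  # disjoint: kept
print([score for _, score in nms(dets)])  # → [0.9, 0.7]
```

Raising iou_threshold keeps more overlapping boxes (higher recall, lower precision), which is the precision-recall trade-off the bullet above refers to.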

Related repos

  • facebookresearch/detectron2 — Industry-standard object detection framework that detrain.py/detest.py likely build upon for model architecture and training loops
  • opencv/opencv — Core dependency for video codec handling, frame operations, and potentially annotation UI (cv2.imshow, mouse callbacks)
  • pytorch/pytorch — Deep learning framework underlying model training and inference in detrain.py and detest.py
  • scikit-image/scikit-image — Likely used for image filtering operations and denoising algorithms in denoise.py (bilateral filtering, morphological ops)
  • facebookresearch/DINO — Self-supervised detection backbone that could augment training pipeline if models use transformer encoders instead of ResNet

PR ideas

To work on one of these in Claude Code or Cursor, paste: "Implement the '<title>' PR idea from CLAUDE.md, working through the checklist as the task list."

Create unit tests for dvr.py core module

The repo contains detest.py, but it appears to be a detection-evaluation script rather than a unit-test suite, and there are no visible test files for the main dvr.py module. Given that dvr.py appears to be the core orchestrator for video recording, filtering, and annotation, it needs dedicated unit tests. This would improve reliability for contributors and catch regressions.

  • [ ] Examine detest.py to understand the testing framework and conventions used
  • [ ] Create test_dvr.py with unit tests for main functions in dvr.py
  • [ ] Add tests for video I/O operations (recording initialization, file handling)
  • [ ] Add tests for filter/annotation pipeline integration
  • [ ] Run tests locally and document how to run via test command in README

Consolidate denoise functionality and remove redundancy

The repo has three closely-named denoise files: denoise.py, denoise.saved.txt, and 017mm-denoise.txt. This suggests incomplete refactoring or versioning. Cleaning this up would improve code clarity and maintainability. The saved.txt and mm-denoise.txt files are likely backups or configuration artifacts that should be either integrated or documented.

  • [ ] Investigate what 017mm-denoise.txt and denoise.saved.txt contain (are they configs, old versions, or test data?)
  • [ ] If they are configs: merge into a proper denoise_config.txt or YAML file and document in README
  • [ ] If they are backups: remove them and add .gitignore rule for *.saved.txt
  • [ ] Add docstrings to denoise.py explaining its role in the pipeline
  • [ ] Update any scripts that call denoise.py to reference the cleaned-up module

Add integration tests for the video annotation pipeline (annotate.py + vid.py + dvr.py)

The presence of both annotate.py and vid.py suggests a video annotation workflow, but there are no visible integration tests validating the full pipeline. Add a test that exercises annotation on a small sample video to catch breakage early and document the expected workflow.

  • [ ] Create test_annotation_pipeline.py in tests/ directory
  • [ ] Generate or download a tiny sample video file (e.g., 1-2 second clip) for test fixtures
  • [ ] Write an integration test that: loads sample video → applies annotation → validates output format/metadata
  • [ ] Reference detrain.py and dneval.py patterns to understand how the annotation models are validated
  • [ ] Document in README how to run integration tests and where test fixtures are stored

Good first issues

  • Add unit tests for vid.py (test frame reading, codec handling, resolution validation) since video I/O is critical but untested.
  • Document the denoising pipeline: clarify what 017mm-denoise.txt and denoise.saved.txt contain, and add --help examples to run_denoise.csh.
  • Create a requirements.txt with pinned versions by inspecting all imports in dvr.py, vid.py, denoise.py, and annotate.py, then validate with pip check.
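For the requirements.txt task above, top-level imports can be collected mechanically with the stdlib ast module. This is only a starting point: mapping import names to PyPI package names (e.g. cv2 → opencv-python) still needs a human.

```python
# Scan Python source for top-level imported module names, a first step
# toward reconstructing requirements.txt for dvr.py, vid.py, etc.
import ast

def imported_modules(source):
    tree = ast.parse(source)
    mods = set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            mods.update(alias.name.split(".")[0] for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            mods.add(node.module.split(".")[0])  # skips relative imports
    return sorted(mods)

sample = "import cv2\nimport numpy as np\nfrom os import path\n"
print(imported_modules(sample))  # → ['cv2', 'numpy', 'os']
```

Run it over each .py file in the repo, union the results, and drop stdlib and local modules (vid, vutil, etc.) to get the external dependency list.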

Top contributors

  • v-lavrenko — sole visible contributor (all recent commits below)

Recent commits

  • 4979c8b — history of simple denoising runs (v-lavrenko)
  • 4de1191 — denoising parameters that worked well (v-lavrenko)
  • 0f3f9b5 — parameter sweep for simple denoising (v-lavrenko)
  • bffeefe — evaluate simple denoising (v-lavrenko)
  • 441741e — simple denoising / background subtraction algorithms (v-lavrenko)
  • 96b44b7 — apply a pre-trained neural net to video (v-lavrenko)
  • e74cccd — train a neural net from video annotations (v-lavrenko)
  • 954aebe — tool for annotating objects in videos (v-lavrenko)
  • 355111c — stub for abstracting away common functions (v-lavrenko)
  • da35f95 — apply simple filters to pre-recorded videos (v-lavrenko)

Security observations

Security analysis is limited due to incomplete code visibility. The codebase appears to be a Python-based video processing and machine learning toolkit (DVR - Digital Video Recording with ML). Without access to actual source code, dependencies, and configuration files, a comprehensive security assessment cannot be performed. Primary concerns include: (1) Unknown third-party dependencies and their security status, (2) Potential input validation issues in video processing pipelines, (3) File I/O operations that may not properly validate paths (path traversal risk), (4) Shell script execution without visible security controls. Recommend providing complete source code, dependency manifests, and Docker configuration for thorough security audit.

  • Medium · Insufficient Code Visibility for Security Analysis — Repository root - all Python files. The provided file structure shows only partial codebase content without actual source code. Critical files like dvr.py, denoise.py, annotate.py, and vid.py are listed but their contents are not available for static analysis. This prevents identification of potential injection vulnerabilities, unsafe file operations, or insecure API usage patterns. Fix: Provide complete source code of main modules for comprehensive security analysis, particularly: dvr.py, denoise.py, annotate.py, vid.py, vutil.py, and detrain.py
  • Medium · Missing Dependency Information — Repository root. No package dependencies file (requirements.txt, setup.py, pyproject.toml, or Pipfile) was provided. Cannot assess for known vulnerable dependencies, outdated libraries, or insecure package versions. Fix: Provide dependencies file and run security audits using: pip-audit, safety, or Dependabot to identify vulnerable packages
  • Medium · Potential Unvetted Shell Script — run_denoise.csh. The file 'run_denoise.csh' is a C-shell script without visible content. Shell scripts can be vectors for command injection if they process user input or external data unsafely. Fix: Review shell script for: proper input validation, use of quoted variables, avoidance of eval/exec, and secure permission settings (chmod 755 maximum)
  • Low · Configuration File Present Without Review — denoise.saved.txt, 017mm-denoise.txt. Files named 'denoise.saved.txt' and '017mm-denoise.txt' may contain configuration or model data. If these contain sensitive parameters or paths, they could leak information. Fix: Review these files for sensitive data; if they contain credentials or paths, move to .gitignore and use environment variables or secure config management
  • Low · No Docker Security Context Provided — Dockerfile (if exists). Docker security configuration was not provided for review. Cannot assess for issues like running as root, exposed ports, or insecure base images. Fix: If containerized, provide Dockerfile and docker-compose.yml for analysis of: base image security, user permissions, exposed ports, and build-time secrets

LLM-derived; treat as a starting point, not a security audit.

Where to read next


Generated by RepoPilot. Verdict based on maintenance signals — see the live page for receipts. Re-run on a new commit to refresh.