# SOMPT22 Challenge

## Overview
The SOMPT22 Benchmark Challenge is an open competition for multi-pedestrian tracking on surveillance footage. Participants run their trackers on the test set and submit results for automated evaluation.
- Primary metric: HOTA (Higher Order Tracking Accuracy)
- Evaluation tool: TrackEval
- Results format: MOT Challenge format (`.txt` per sequence)
- Submission: via GitHub Issue
## Datasets

### Training Set (public)
Videos and ground-truth annotations are publicly available for download. Use the training set to develop and tune your tracker.
Download Training Set

### Test Set (images only, no GT)
The test set contains only video frames. Ground-truth annotations are held privately for evaluation. Download the test set, run your tracker, and submit results.
Download Test Set

## Evaluation Metrics

### Result Format
Results must be in MOT Challenge format: one `.txt` file per test sequence, named after the sequence.
```
# frame, id, bb_left, bb_top, bb_width, bb_height, conf, x, y, z
# conf = detection confidence (use 1 if not applicable)
# x, y, z = -1 for 2D tracking
1, 1, 584, 408, 48, 152, 1, -1, -1, -1
1, 2, 731, 392, 55, 143, 1, -1, -1, -1
2, 1, 586, 412, 48, 152, 1, -1, -1, -1
...
```
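As a minimal sketch, result lines in this format can be generated from per-frame tracker output like so. The `tracks` structure and helper name are illustrative, not part of the benchmark tooling:

```python
# Hypothetical helper: write (frame, id, x, y, w, h, conf) tuples
# to a MOT Challenge result file. World coordinates x, y, z are
# fixed to -1 for 2D tracking.
from pathlib import Path

def write_mot_results(path, tracks):
    """tracks: iterable of (frame, track_id, x, y, w, h, conf)."""
    lines = []
    for frame, tid, x, y, w, h, conf in tracks:
        lines.append(f"{frame}, {tid}, {x}, {y}, {w}, {h}, {conf}, -1, -1, -1")
    Path(path).write_text("\n".join(lines) + "\n")

# toy tracker output matching the example above
tracks = [
    (1, 1, 584, 408, 48, 152, 1),
    (1, 2, 731, 392, 55, 143, 1),
    (2, 1, 586, 412, 48, 152, 1),
]
write_mot_results("SOMPT22-01.txt", tracks)
```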
File structure expected:

```
results/
├── SOMPT22-01.txt
├── SOMPT22-02.txt
├── SOMPT22-03.txt
└── ...
```
Compress the results/ folder as a .zip file before submitting.
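One way to produce the archive, sketched with Python's standard library (the folder contents here are dummies for illustration):

```python
# Create a dummy results/ folder, then compress it to results.zip.
# shutil.make_archive with root_dir places the files at the
# archive root, so the zip contains SOMPT22-*.txt directly.
import shutil
from pathlib import Path

results = Path("results")
results.mkdir(exist_ok=True)
(results / "SOMPT22-01.txt").write_text(
    "1, 1, 584, 408, 48, 152, 1, -1, -1, -1\n"
)

shutil.make_archive("results", "zip", root_dir="results")
```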
## How to Submit
- Run your tracker on all SOMPT22 test sequences and prepare result files in MOT format.
- Upload your results as a `.zip` file to a publicly accessible location (Google Drive, Dropbox, GitHub Releases, etc.).
- Open a GitHub Issue on this repository using the Benchmark Submission template. Fill in all required fields, including the public download link to your results zip.
- Automated evaluation will run within a few minutes. Results will be posted as a comment on your issue.
- Leaderboard update: accepted submissions are added to the leaderboard automatically.
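Before submitting, a quick local sanity check on each result file can catch format errors that would waste a weekly submission slot. This is a hypothetical validator, not the official evaluation:

```python
# Sketch: verify that every line of a result file has exactly
# 10 comma-separated fields and positive frame/track IDs, per the
# MOT Challenge format described above.
def validate_mot_file(path):
    errors = []
    with open(path) as f:
        for n, line in enumerate(f, 1):
            fields = line.strip().split(",")
            if len(fields) != 10:
                errors.append(f"line {n}: expected 10 fields, got {len(fields)}")
                continue
            frame, tid = int(fields[0]), int(fields[1])
            if frame < 1 or tid < 1:
                errors.append(f"line {n}: frame and id must be >= 1")
    return errors

# illustrative usage on a one-line file
with open("SOMPT22-01.txt", "w") as f:
    f.write("1, 1, 584, 408, 48, 152, 1, -1, -1, -1\n")
print(validate_mot_file("SOMPT22-01.txt"))  # empty list means the file passed
```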
## Rules
- Test GT is private — do not attempt to obtain or use test ground-truth labels.
- One active submission per team per week — wait for evaluation results before resubmitting.
- Paper or technical report required for top-3 entries (link or arXiv preprint accepted).
- Trackers may use any publicly available pretrained model. External private training data must be declared.
- Reported FPS must be measured on a standard GPU (e.g., RTX 3090 or equivalent). Declare your hardware.
- The organizers reserve the right to re-evaluate submissions or request code for reproducibility.
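For the FPS rule, a simple wall-clock measurement over the whole sequence is usually what is expected. The sketch below assumes a placeholder `track_frame` callable standing in for your tracker's per-frame update; remember to declare the GPU used:

```python
# Sketch: measure end-to-end tracker throughput in frames per second
# using a monotonic high-resolution clock.
import time

def measure_fps(frames, track_frame):
    start = time.perf_counter()
    for frame in frames:
        track_frame(frame)  # your tracker's per-frame update goes here
    elapsed = time.perf_counter() - start
    return len(frames) / elapsed

# dummy workload standing in for a real tracker
fps = measure_fps(range(100), lambda f: sum(range(1000)))
print(f"{fps:.1f} FPS")
```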
## Questions?
Open a GitHub Discussion or contact via LinkedIn.
Evaluation powered by TrackEval — HOTA metric by Luiten et al.