SOMPT22 Benchmark Challenge

Submit your multi-pedestrian tracking results and compete on the leaderboard

Overview

The SOMPT22 Benchmark Challenge is an open competition for multi-pedestrian tracking on surveillance footage. Participants run their trackers on the test set and submit results for automated evaluation.


Datasets

Training Set (public)

Videos and ground-truth annotations are publicly available for download. Use the training set to develop and tune your tracker.

Download Training Set

Test Set (images only — no GT)

The test set contains only video frames. Ground-truth annotations are held privately for evaluation. Download the test set, run your tracker, and submit results.

Download Test Set

Evaluation Metrics

HOTA: Higher Order Tracking Accuracy (primary ranking metric)
DetA: Detection Accuracy (sub-metric of HOTA)
AssA: Association Accuracy (sub-metric of HOTA)
MOTA: Multiple Object Tracking Accuracy (CLEAR metric)
IDF1: ID F1 Score (identity preservation)
FP / FN / IDs: False Positives, False Negatives, ID Switches
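Two of the simpler metrics, MOTA and IDF1, reduce to closed-form expressions over matched counts. A minimal sketch using their standard definitions (the counts below are illustrative; the official leaderboard is computed by TrackEval, not by this snippet):

```python
def mota(fp: int, fn: int, idsw: int, num_gt: int) -> float:
    """CLEAR MOT accuracy: 1 - (FP + FN + IDSW) / total GT boxes."""
    return 1.0 - (fp + fn + idsw) / num_gt

def idf1(idtp: int, idfp: int, idfn: int) -> float:
    """Identity F1: harmonic mean of ID precision and ID recall."""
    return 2 * idtp / (2 * idtp + idfp + idfn)

# Illustrative counts, not real SOMPT22 numbers:
print(round(mota(fp=120, fn=300, idsw=15, num_gt=5000), 3))   # 0.913
print(round(idf1(idtp=4200, idfp=400, idfn=500), 3))          # 0.903
```

HOTA has no such closed form over global counts (it averages detection and association scores over localization thresholds), which is why submissions are scored with TrackEval rather than hand-rolled scripts.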

Result Format

Results must be in MOT Challenge format: one .txt file per test sequence, named after the sequence.

# Format: frame, id, bb_left, bb_top, bb_width, bb_height, conf, x, y, z
# conf = detection confidence (use 1 if not applicable)
# x, y, z = -1 for 2D tracking
1, 1, 584, 408, 48, 152, 1, -1, -1, -1
1, 2, 731, 392, 55, 143, 1, -1, -1, -1
2, 1, 586, 412, 48, 152, 1, -1, -1, -1
...
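A result file in this format can be written with the standard library alone. A minimal sketch, where the `tracks` list and output filename are illustrative placeholders for your tracker's per-frame output:

```python
import csv

# Hypothetical tracker output: (frame, track_id, bb_left, bb_top, bb_width, bb_height)
tracks = [
    (1, 1, 584, 408, 48, 152),
    (1, 2, 731, 392, 55, 143),
    (2, 1, 586, 412, 48, 152),
]

with open("SOMPT22-01.txt", "w", newline="") as f:
    writer = csv.writer(f)
    for frame, tid, left, top, w, h in tracks:
        # conf = 1 (not applicable), x/y/z = -1 for 2D tracking
        writer.writerow([frame, tid, left, top, w, h, 1, -1, -1, -1])
```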

Expected file structure:

results/
├── SOMPT22-01.txt
├── SOMPT22-02.txt
├── SOMPT22-03.txt
└── ...

Compress the results/ folder as a .zip file before submitting.
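Packaging can be done in one call with the standard library. A sketch that builds a dummy results/ folder and zips it (the sequence names here are placeholders; use the real list from the test set README):

```python
import pathlib
import shutil
import zipfile

# Placeholder result files; your tracker writes the real ones.
out = pathlib.Path("results")
out.mkdir(exist_ok=True)
for seq in ["SOMPT22-01", "SOMPT22-02"]:
    (out / f"{seq}.txt").touch()

# Archive the *contents* of results/ so the .txt files sit at the zip root.
shutil.make_archive("results", "zip", root_dir="results")

print(sorted(zipfile.ZipFile("results.zip").namelist()))
# → ['SOMPT22-01.txt', 'SOMPT22-02.txt']
```

Note `root_dir="results"`: this keeps the per-sequence files at the top level of the archive instead of nesting them under a results/ folder.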

⚠️ Important: Sequence names must exactly match the SOMPT22 test sequence names. Use the sequence list provided in the test set README.
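A name mismatch is the easiest way to fail evaluation, so it is worth checking locally before uploading. A minimal validator sketch; the `EXPECTED` set below is a placeholder and must be replaced with the sequence list from the test set README:

```python
import pathlib

# Placeholder names only; copy the real list from the test set README.
EXPECTED = {"SOMPT22-01", "SOMPT22-02", "SOMPT22-03"}

def validate(results_dir: str) -> list[str]:
    """Return a list of problems; an empty list means the folder looks submittable."""
    found = {p.stem for p in pathlib.Path(results_dir).glob("*.txt")}
    problems = [f"missing result file: {s}.txt" for s in sorted(EXPECTED - found)]
    problems += [f"unexpected file: {s}.txt" for s in sorted(found - EXPECTED)]
    return problems

print(validate("results"))  # [] once every expected sequence has a result file
```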

How to Submit

  1. Run your tracker on all SOMPT22 test sequences and prepare result files in MOT format.
  2. Upload your results as a .zip file to a publicly accessible location (Google Drive, Dropbox, GitHub Releases, etc.).
  3. Open a GitHub Issue on this repository using the Benchmark Submission template. Fill in all required fields including the public download link to your results zip.
  4. Automated evaluation will run within a few minutes. Results will be posted as a comment on your issue.
  5. Leaderboard update: accepted submissions are added to the leaderboard automatically.

Open Submission Issue


Rules

  1. Test GT is private — do not attempt to obtain or use test ground-truth labels.
  2. One active submission per team per week — wait for evaluation results before resubmitting.
  3. Paper or technical report required for top-3 entries (link or arXiv preprint accepted).
  4. Trackers may use any publicly available pretrained model. External private training data must be declared.
  5. Reported FPS must be measured on a standard GPU (e.g., RTX 3090 or equivalent). Declare your hardware.
  6. The organizers reserve the right to re-evaluate submissions or request code for reproducibility.

Questions?

Open a GitHub Discussion or contact via LinkedIn.


Evaluation powered by TrackEval — HOTA metric by Luiten et al.