trackers
trackers is a unified library offering clean room re-implementations of leading multi-object tracking algorithms. Its modular design allows you to easily swap trackers and integrate them with object detectors from various libraries like inference, ultralytics, or transformers.
Installation
Pip install the trackers package in a Python>=3.9 environment.
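The published package shares the repository name, so the basic install is:

```
pip install trackers
```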
install from source
By installing trackers from source, you can explore the most recent features and enhancements that have not yet been officially released. Please note that these updates are still in development and may not be as stable as the latest published release.
```
pip install git+https://github.com/roboflow/trackers.git
```
Quickstart
With a modular design, trackers lets you combine object detectors from different libraries with the tracker of your choice. Here's how you can use SORTTracker with various detectors:
```python
import supervision as sv
from trackers import SORTTracker
from inference import get_model

tracker = SORTTracker()
model = get_model(model_id="yolov11m-640")
annotator = sv.LabelAnnotator(text_position=sv.Position.CENTER)

def callback(frame, _):
    result = model.infer(frame)[0]
    detections = sv.Detections.from_inference(result)
    detections = tracker.update(detections)
    return annotator.annotate(frame, detections, labels=detections.tracker_id)

sv.process_video(
    source_path="",  # path to the input video
    target_path="",  # path to the annotated output video
    callback=callback,
)
```
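If you prefer to manage the video loop yourself rather than going through sv.process_video, the same tracker works in a plain OpenCV loop. This is a minimal sketch under the same setup as the example above; the file name is a placeholder.

```python
import cv2
import supervision as sv
from trackers import SORTTracker
from inference import get_model

tracker = SORTTracker()
model = get_model(model_id="yolov11m-640")

cap = cv2.VideoCapture("input.mp4")  # placeholder path
while True:
    ok, frame = cap.read()
    if not ok:
        break
    result = model.infer(frame)[0]
    detections = sv.Detections.from_inference(result)
    detections = tracker.update(detections)
    # detections.tracker_id now holds a persistent ID per tracked object
cap.release()
```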
run with ultralytics
```python
import supervision as sv
from trackers import SORTTracker
from ultralytics import YOLO

tracker = SORTTracker()
model = YOLO("yolo11m.pt")
annotator = sv.LabelAnnotator(text_position=sv.Position.CENTER)

def callback(frame, _):
    result = model(frame)[0]
    detections = sv.Detections.from_ultralytics(result)
    detections = tracker.update(detections)
    return annotator.annotate(frame, detections, labels=detections.tracker_id)

sv.process_video(
    source_path="",  # path to the input video
    target_path="",  # path to the annotated output video
    callback=callback,
)
```
run with transformers
```python
import torch
import supervision as sv
from trackers import SORTTracker
from transformers import RTDetrV2ForObjectDetection, RTDetrImageProcessor

tracker = SORTTracker()
image_processor = RTDetrImageProcessor.from_pretrained("PekingU/rtdetr_v2_r18vd")
model = RTDetrV2ForObjectDetection.from_pretrained("PekingU/rtdetr_v2_r18vd")
annotator = sv.LabelAnnotator(text_position=sv.Position.CENTER)

def callback(frame, _):
    inputs = image_processor(images=frame, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)

    h, w, _ = frame.shape
    results = image_processor.post_process_object_detection(
        outputs,
        target_sizes=torch.tensor([(h, w)]),
        threshold=0.5
    )[0]

    detections = sv.Detections.from_transformers(
        transformers_results=results,
        id2label=model.config.id2label
    )
    detections = tracker.update(detections)
    return annotator.annotate(frame, detections, labels=detections.tracker_id)

sv.process_video(
    source_path="",  # path to the input video
    target_path="",  # path to the annotated output video
    callback=callback,
)
```
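Whichever detector you use, you can filter detections before handing them to the tracker; supervision's Detections supports boolean-mask indexing. A small sketch (the 0.5 confidence threshold is an arbitrary choice):

```python
# keep only confident detections before updating the tracker
detections = detections[detections.confidence > 0.5]
detections = tracker.update(detections)
```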
License
The code is released under the Apache 2.0 license.
Contribution
We welcome all contributions, whether you're reporting issues, suggesting features, or submitting pull requests. Please read our contributor guidelines to learn about our processes and best practices.