I’m facing a TypeError when trying to initialize the BYTETracker in my live tracking script using the ByteTrack library. The error occurs specifically in the __init__ method of the BYTETracker class.
Here’s the relevant part of my code:
trackers = [BYTETracker(ByteTrackArgument), BYTETracker(ByteTrackArgument), BYTETracker(ByteTrackArgument)]
And the error message I’m encountering is:
TypeError: unsupported operand type(s) for +: 'type' and 'float'
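As far as I can tell, this exact message is what Python raises when a class object (a type) is added to a float, which makes me suspect the class itself, rather than one of its float attributes, ends up on one side of a + somewhere inside __init__. A minimal illustration of the same error (the + 0.1 is only my guess at what the constructor might be doing internally; I have not checked the library source):

class ByteTrackArgument:
    track_thresh = 0.5

ByteTrackArgument + 0.1  # TypeError: unsupported operand type(s) for +: 'type' and 'float'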
I’ve attempted to resolve it by creating an instance of ByteTrackArgument, like this:
trackers = [BYTETracker(ByteTrackArgument()), BYTETracker(ByteTrackArgument()), BYTETracker(ByteTrackArgument())]
However, the issue persists. It’s worth noting that I’m using FaceDetectorYN from OpenCV for face detection in my script. Any insights on what might be causing this error, especially in conjunction with the use of FaceDetectorYN, and how to address it?
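Stripped of the face-detection code, this is the minimal snippet I would expect to reproduce the problem (assuming only the pip-installed bytetracker package, no other setup):

from bytetracker import BYTETracker

class ByteTrackArgument:
    track_thresh = 0.5
    track_buffer = 50
    match_thresh = 0.8
    mot20 = False

tracker = BYTETracker(ByteTrackArgument())  # fails with the same TypeError for me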
Additional Context:
- I’m using the ByteTrack library for live tracking. link
- The error occurs in the __init__ method of the BYTETracker class.
- I’m passing an instance of ByteTrackArgument to the BYTETracker constructor.
- Face detection is performed using FaceDetectorYN from OpenCV (see the snippet after this list for how I read its output). link
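For reference, this is how I read the FaceDetectorYN output in my script. My understanding from the OpenCV docs is that each detection row contains the box coordinates first and a confidence score last, with facial-landmark values in between (the image filename below is just a placeholder for illustration):

import cv2

detector = cv2.FaceDetectorYN.create("face_detection_yunet_2023mar.onnx", "", (320, 320), score_threshold=0.5)
frame = cv2.imread("some_frame.jpg")          # placeholder input, just for illustration
detector.setInputSize((frame.shape[1], frame.shape[0]))
faces = detector.detect(frame)[1]             # detect() returns a (retval, faces) tuple
if faces is not None:
    for det in faces:
        x, y, w, h = map(int, det[:4])        # bounding box
        score = float(det[-1])                # confidence score (last value in each row)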
Here is my full code:
import cv2
from bytetracker import BYTETracker
import numpy as np

print("OpenCV version", cv2.__version__)

class ByteTrackArgument:
    track_thresh = 0.5
    track_buffer = 50
    match_thresh = 0.8
    aspect_ratio_thresh = 10.0
    min_box_area = 1.0
    mot20 = False

MIN_THRESHOLD = 0.5  # Adjust this threshold as needed

# Initialize ByteTrackArgument
byte_track_argument = ByteTrackArgument()

# Initialize BYTETracker
trackers = [BYTETracker(ByteTrackArgument()), BYTETracker(ByteTrackArgument()), BYTETracker(ByteTrackArgument())]

def start_webcam_tracking():
    cap = cv2.VideoCapture(0)  # Use 0 for default webcam, or provide the webcam URL
    if not cap.isOpened():
        print("Error: Could not open the camera.")
        return

    while True:
        ret, frame = cap.read()
        if not ret:
            print("Error: Could not read frame from the camera.")
            break

        # Face detection code
        detector = cv2.FaceDetectorYN.create(r"C:\Users\gratu\live tracker\face_detection_yunet_2023mar.onnx", "", (2200, 1200), score_threshold=MIN_THRESHOLD)
        img_W = int(frame.shape[1])
        img_H = int(frame.shape[0])
        detector.setInputSize((img_W, img_H))
        detections = detector.detect(frame)[1]

        if detections is not None:
            for detection in detections:
                x, y, width, height = map(int, detection[:4])
                cv2.rectangle(frame, (x, y), (x + width, y + height), (0, 255, 0), 2)

                # Update the tracker with face bounding boxes
                tracker.update(np.array([[x, y, x + width, y + height]]), [frame.shape[0], frame.shape[1]])

                # Get tracked results from BYTETracker
                online_targets = tracker.get_online_targets()
                if online_targets is not None:
                    for target in online_targets:
                        x, y, x2, y2 = target  # Modify this part based on BYTETracker's output format
                        cv2.rectangle(frame, (x, y), (x2, y2), (255, 0, 0), 2)

        cv2.imshow('Webcam with Face Detection and Tracking', frame)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break

    cap.release()
    cv2.destroyAllWindows()

if __name__ == "__main__":
    start_webcam_tracking()
This is the tutorial I tried to replicate: they used YOLOX for person detection, and I tried to do the same for face detection. A rough sketch of how I understand the tutorial’s tracker setup is below.
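For comparison, this is roughly how I understand the original ByteTrack repo wires up its tracker in the setup the tutorial follows. This is reconstructed from memory of the repo’s demo script, so the import path, names, and signatures may not match the pip-installed bytetracker package, which is why the tracker calls are left commented out:

from argparse import Namespace
# from yolox.tracker.byte_tracker import BYTETracker  # import path in the original repo, as I recall

args = Namespace(track_thresh=0.5, track_buffer=50, match_thresh=0.8, mot20=False)
# tracker = BYTETracker(args, frame_rate=30)
# dets would be an N x 5 array of [x1, y1, x2, y2, score] rows from the detector
# online_targets = tracker.update(dets, [img_h, img_w], (img_h, img_w))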