Issue: YOLOv8 .pt models fail during inference in UI
Summary
While testing YOLO-based models in the PerceptionMetrics UI, I ran into runtime errors when using native YOLOv8 .pt models.
The model loads successfully, but inference fails when an image is uploaded.
What happens
I observed two types of errors:
1. 'Tensor' object has no attribute 'boxes'
2. too many values to unpack (expected 2)
Steps to reproduce
- Run the Streamlit UI
- Select Model Format → YOLO
- Load a YOLOv8 .pt model (e.g. yolov8n.pt)
- Upload any image
- Run inference
The model loads fine, but inference crashes with one of the errors above.
Why this happens
It looks like postprocess_detection() assumes the model output is always a torch.Tensor.
However, YOLOv8 models (Ultralytics) can return:
- a Results object (with .boxes)
- a list or tuple
- a raw tensor
Right now, only the tensor case is handled, so other formats cause crashes.
Expected behavior
- Inference should work for YOLOv8 .pt models
- Different output formats should be handled safely
- Bounding boxes should be correctly processed without errors
Suggested fix
Update postprocess_detection() to:
- handle Results objects (output.boxes)
- unwrap list/tuple outputs
- keep existing tensor logic unchanged
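A minimal sketch of what this normalization could look like. This is only an illustration, not the actual PerceptionMetrics code: the function name matches postprocess_detection() from the issue, but the return convention (a single (N, 6) detections tensor) and the .boxes.data access are assumptions based on the Ultralytics Results API.

```python
import torch


def postprocess_detection(output):
    """Normalize a YOLOv8 model output to a single detections tensor.

    Hypothetical sketch handling the three output shapes described above:
    Ultralytics Results objects, list/tuple wrappers, and raw tensors.
    """
    # Unwrap list/tuple outputs by taking the first element
    if isinstance(output, (list, tuple)):
        output = output[0]

    # Ultralytics Results objects expose detections via .boxes;
    # .data is assumed to be an (N, 6) tensor: x1, y1, x2, y2, conf, cls
    if hasattr(output, "boxes"):
        return output.boxes.data

    # Raw tensor: keep the existing logic unchanged
    if isinstance(output, torch.Tensor):
        return output

    raise TypeError(f"Unsupported model output type: {type(output)!r}")
```

Checking hasattr(output, "boxes") before the tensor branch keeps the fix duck-typed, so it works even without importing ultralytics in torch_detection.py.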
Files involved
perceptionmetrics/models/utils/yolo.py
perceptionmetrics/models/torch_detection.py
Notes
- This issue blocks inference for native YOLOv8 models
- TorchScript models seem to work fine
- Fixing this would improve compatibility with Ultralytics models
Happy to help with a PR if this approach looks good
@dpascualhe