Description
During memory-bank rebuilding in training mode, the PatchCore model throws:
AttributeError: 'Tensor' object has no attribute 'pred_score'
The model should return embeddings (a tensor) in training mode, but somewhere in the code path something is trying to access .pred_score, which only exists on InferenceBatch objects.
Environment:
- Anomalib version: 2.1.0
- PyTorch version: 2.7.1+cu118
- Python version: 3.13.7
- OS: Windows 11
Steps to Reproduce
- Load a saved PatchCore model from a checkpoint
- Set the model to training mode with model.train()
- Process training images to rebuild the memory bank:

  with torch.no_grad():
      result = model(image_tensor)
Expected Behavior
In training mode, model.forward() should return embeddings (tensor) and populate the memory bank without errors.
Actual Behavior
AttributeError: 'Tensor' object has no attribute 'pred_score'
Code Snippet
import torch
from anomalib.models import Patchcore

# Load model from checkpoint
model = Patchcore(...)
checkpoint = torch.load(checkpoint_path)  # checkpoint_path: path to the saved checkpoint file
model.load_state_dict(checkpoint['model_state_dict'])
model.train()  # set to training mode

# Process an image
image_tensor = torch.randn(1, 3, 224, 224)
with torch.no_grad():
    result = model(image_tensor)  # AttributeError raised here
Analysis
Looking at the PatchCore forward method, the expected flow is:

def forward(self, input_tensor):
    # ... processing ...
    if self.training:
        return embedding  # should return a tensor
    else:
        return InferenceBatch(pred_score=score, anomaly_map=map)
The error suggests that:
- The model is in training mode
- It correctly returns a tensor (embeddings)
- But downstream code is trying to treat this tensor as an InferenceBatch
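The dual return type can be reproduced with a toy stand-in (illustrative only, not anomalib code): in training mode the output is a plain tensor, so any unconditional .pred_score access on it raises exactly this AttributeError.

```python
import torch
from dataclasses import dataclass

@dataclass
class InferenceBatch:
    # Minimal stand-in for anomalib's InferenceBatch (illustrative only)
    pred_score: torch.Tensor
    anomaly_map: torch.Tensor

class ToyPatchcore(torch.nn.Module):
    # Toy model mirroring PatchCore's training/inference dual return type
    def forward(self, x):
        embedding = x.mean(dim=(2, 3))  # fake patch embedding, shape (N, C)
        if self.training:
            return embedding            # training mode: plain tensor
        score = embedding.norm(dim=1)
        return InferenceBatch(pred_score=score, anomaly_map=x.mean(dim=1))

model = ToyPatchcore()
model.train()
train_out = model(torch.randn(1, 3, 8, 8))  # plain tensor: .pred_score would fail
model.eval()
eval_out = model(torch.randn(1, 3, 8, 8))   # InferenceBatch: .pred_score exists
```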
Potential Root Cause
Inference-specific logic is being applied even when the model is in training mode, leading to an attempt to access .pred_score on a plain tensor.
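Until the root cause is fixed, the downstream code could guard on the return type rather than assuming an InferenceBatch. A hedged sketch (the helper name is hypothetical, not an anomalib API):

```python
import torch
from types import SimpleNamespace

def extract_score_or_embedding(result):
    # Hypothetical defensive guard: handle both forward() return types
    # instead of unconditionally reading .pred_score.
    if isinstance(result, torch.Tensor):
        return result             # training mode: embedding tensor
    return result.pred_score      # eval mode: InferenceBatch-like object

# Training-mode output: a plain tensor passes through unchanged
emb = extract_score_or_embedding(torch.ones(2, 4))
# Eval-mode output: pred_score is pulled from the batch object
score = extract_score_or_embedding(SimpleNamespace(pred_score=torch.tensor([0.9])))
```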
Workaround
Using TiledInference works correctly, but calling the model directly as above fails.
Additional Context
This occurs specifically when rebuilding the memory bank from training data after loading a saved model. The memory bank needs to be populated with features from normal training images for PatchCore to work during inference.
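The rebuild described above can be sketched as follows, using a tiny stand-in encoder (the real model is PatchCore; this only illustrates the expected training-mode behavior of collecting embeddings into a memory bank):

```python
import torch

class TinyEncoder(torch.nn.Module):
    # Stand-in feature extractor for illustration only
    def forward(self, x):
        return x.mean(dim=(2, 3))  # (N, C) "embeddings"

model = TinyEncoder()
model.train()
# Fake loader of normal training images; in practice this is the real dataset
train_loader = [torch.randn(4, 3, 16, 16) for _ in range(3)]

embeddings = []
with torch.no_grad():
    for batch in train_loader:
        embeddings.append(model(batch))     # embeddings, not InferenceBatch
memory_bank = torch.cat(embeddings, dim=0)  # stacked features from all batches
```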
Request
Could you help identify where in the PatchCore forward pass (or related code) .pred_score is being accessed during training mode? The model should only return tensors in training mode and not attempt to access inference-specific attributes.