Thanks a lot for open-sourcing the code. I followed the SSv2 instructions to prepare Kinetics. The category IDs of both the validation set and the test set start from 0, as mentioned in the issue. On Kinetics I trained with the same parameters used for SSv2:
```shell
python -u main.py kinetics RGB --arch resnet50 --num_segments 8 --lr 0.0001 --lr_steps 20 --epochs 25 --batch-size 32 --workers 2 --dropout 0.5 --root_log checkpoints/kinetics_weights/ --root_model checkpoints/kinetics_weights/ --wd 0.0005 --gpus 0 --episodes 600
```
I trained on a single A100 GPU with PyTorch 1.10.0. Here is the training log.csv.
I used the checkpoint that performed best on the validation set, ckpt24.best.pth.tar, for testing. The 1-shot mean accuracy is 72.47, which is 1.79 lower than the reported 74.26. On SSv2, however, I obtain 44.30, higher than the 43.82 reported in the paper. I'm not sure of the reason.
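For reference, here is a minimal sketch of how I understand the mean accuracy over the 600 test episodes to be aggregated (the per-episode accuracies below are placeholder values, not results from this run):

```python
# Hypothetical sketch: aggregate per-episode accuracies into a mean
# and a 95% confidence interval, as is common in few-shot evaluation.
import math
import statistics

def summarize(episode_accs):
    """Return (mean accuracy, 95% CI half-width) over test episodes."""
    mean = statistics.mean(episode_accs)
    # Standard error of the mean, scaled by z ~= 1.96 for a 95% interval
    ci95 = 1.96 * statistics.stdev(episode_accs) / math.sqrt(len(episode_accs))
    return mean, ci95

# Placeholder per-episode accuracies (in practice, one value per episode)
accs = [0.70, 0.75, 0.72, 0.74]
mean, ci = summarize(accs)
print(f"mean={mean:.4f} +/- {ci:.4f}")
```

If the reported numbers use a different episode count or interval convention, that alone could shift the comparison slightly, though probably not by 1.79 points.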
Could you please share the parameters you used for training on Kinetics?