Hyper-parameters setting #162

Description

@YokiDia

Hi! Thank you for the great work.
I'm fine-tuning this model, and using the checkpoint trained with focal seems to lower performance on COCO and RefCOCOg. Could the batch size be influencing this as well?
I am training on 4 GPUs with the settings `TRAIN.BATCH_SIZE_TOTAL 20 \ TRAIN.BATCH_SIZE_PER_GPU 5 \ DATALOADER_NUM_WORKERS 4`, and my results are ~62-63% cIoU on RefCOCOg and ~38.1% mAP / ~60.7% mIoU on COCO.
Would increasing the number of epochs, reducing the `LR_MULTIPLIER` for the backbone (to 0.05), or lowering `WARMUP_ITERS` (to 5) be helpful?
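For what it's worth, since my total batch size differs from the one the released checkpoint was presumably trained with, I considered the common linear LR scaling heuristic. A minimal sketch of what I mean (the base values below are placeholders, not this repo's actual defaults):

```python
def scale_lr(base_lr: float, base_batch_size: int, new_batch_size: int) -> float:
    """Linear LR scaling heuristic: scale the learning rate in
    proportion to the change in total batch size."""
    return base_lr * new_batch_size / base_batch_size

# Example: a hypothetical reference config tuned for total batch size 32,
# adapted to my 4-GPU setup with TRAIN.BATCH_SIZE_TOTAL 20.
print(scale_lr(1e-4, 32, 20))  # 6.25e-05
```

Is this kind of adjustment appropriate here, or does the training schedule already account for batch size?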
