
ROC AUC scores don't match those of sklearn #211

@abudis

Description

Hey,

I've tried using the package on my dataset, which is a binary classification problem, but the ROC AUC scores produced by HPH don't match the scores produced by a simple CV loop in sklearn.

The difference is about 0.1, which is way too large to be explained by random splits.
I tried multiple packages (XGBoost, LGBM, CatBoost) and they all produce similar results: the HPH CV AUC is about 0.1 lower than the CV AUC calculated by sklearn's cross_validate.

I use the same hyperparameters to cross-validate in sklearn and in HPH, so the difference cannot be explained by different hyperparams either.

On the HPH side, I only used CVExperiment for now.
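For reference, the sklearn side of my comparison looks roughly like the sketch below. Since I can't share the real dataset or hyperparameters here, this uses a synthetic binary dataset and sklearn's GradientBoostingClassifier as a stand-in for XGBoost/LGBM/CatBoost; the structure is the same.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import StratifiedKFold, cross_validate

# Synthetic stand-in for the real binary-classification dataset.
X, y = make_classification(n_samples=1000, n_features=20,
                           n_classes=2, random_state=42)

# The same fixed hyperparameters are passed to both sklearn and HPH;
# GradientBoostingClassifier is just a placeholder model here.
model = GradientBoostingClassifier(n_estimators=100, random_state=42)

# Plain stratified 5-fold CV scored with ROC AUC.
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
scores = cross_validate(model, X, y, cv=cv, scoring="roc_auc")
print(scores["test_score"].mean())
```

The mean of `scores["test_score"]` is the sklearn CV AUC I'm comparing against HPH's reported value.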

Could you please point me in the direction, where the difference may come from?

Cheers,
Artem
