docs: add REST API URL and model-centric eval results to leaderboard guide #2325

Merged

davanstrien merged 2 commits into main from add-benchmark-api-url on Mar 23, 2026
Conversation

@davanstrien (Member) commented Mar 23, 2026

Follow-up to #2306, addressing @julien-c's suggestions.

  • Add raw REST API URL for discovering official benchmarks (/api/datasets?filter=benchmark:official) — useful for agents and scripting
  • Add model-centric eval results section showing model_info(expand=["evalResults"]) — complements the dataset-centric leaderboard API
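The first bullet's discovery URL can be hit with any HTTP client. A minimal sketch (the endpoint path and filter value are taken from the bullet above; the fetch itself is shown only as a comment, since it requires network access):

```python
from urllib.parse import urlencode

# Build the Hub REST API URL that lists datasets tagged as official benchmarks.
base = "https://huggingface.co/api/datasets"
url = f"{base}?{urlencode({'filter': 'benchmark:official'})}"
print(url)

# Then fetch with any HTTP client, e.g.:
# import requests
# benchmarks = requests.get(url).json()
```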

davanstrien and others added 2 commits March 23, 2026 12:01

Suggested by Julien in #2306.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

Shows how to get all benchmark scores for a single model via
model_info(expand=["evalResults"]), complementing the dataset-centric
leaderboard API.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
@davanstrien davanstrien changed the title docs: add REST API URL for discovering benchmarks docs: add REST API URL and model-centric eval results to leaderboard guide Mar 23, 2026
@HuggingFaceDocBuilderDev

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

@davanstrien davanstrien merged commit cb9c62f into main Mar 23, 2026
3 checks passed
@davanstrien davanstrien deleted the add-benchmark-api-url branch March 23, 2026 12:29
```python
from huggingface_hub import model_info

# "some-org/some-model" is a placeholder; use any model that publishes eval results.
info = model_info("some-org/some-model", expand=["evalResults"])
for result in info.eval_results:
    print(f"{result.dataset_id}: {result.value}")
```

This returns [`EvalResultEntry`](https://huggingface.co/docs/huggingface_hub/package_reference/hf_api#huggingface_hub.EvalResultEntry) objects parsed from the model's `.eval_results/` files.
Member

There's no EvalResultEntry documentation yet, probably because it's still experimental. Should we add it here, @hanouticelina @Wauplin, or not yet?

Member Author

Will defer to @hanouticelina @Wauplin!



4 participants