
Merge model-cards repository#650

Closed
doringeman wants to merge 114 commits into docker:main from doringeman:merge-model-cards

Conversation


@doringeman doringeman commented Feb 10, 2026

Consolidates the standalone docker/model-cards repository into model-cards/ using git subtree add, preserving full commit history (110 commits).
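The subtree import described above can be reproduced in miniature. The following is a hedged sketch using two throwaway repositories (the file and commit names are stand-ins, not the real docker/model-cards content; it assumes `git subtree` is available, as it is in most git distributions):

```shell
set -e
dir=$(mktemp -d); cd "$dir"

# Stand-in for the standalone docker/model-cards repo: one file, one commit.
git init -q cards
git -C cards config user.email dev@example.com
git -C cards config user.name dev
echo '# model card' > cards/gemma3.md
git -C cards add gemma3.md
git -C cards commit -qm 'add card'

# Stand-in for the model-runner monorepo.
git init -q monorepo
cd monorepo
git config user.email dev@example.com
git config user.name dev
git commit -q --allow-empty -m 'root commit'

# The technique the PR uses: import the other repo's full history
# under the model-cards/ prefix, without squashing.
git subtree add --prefix=model-cards ../cards HEAD

test -f model-cards/gemma3.md && echo merged
```

Because `--squash` is not passed, the imported commits remain individually visible in `git log`, which is how the PR preserves all 110 commits.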

Relocates publish-model-card.yml workflow to top-level .github/workflows/ (GitHub Actions requirement) and updates tool paths.
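GitHub Actions only discovers workflow files under the repository root's .github/workflows/, so a workflow imported inside the subtree prefix is invisible until moved. A minimal sketch of the relocation, using a stand-in workflow file (paths hypothetical):

```shell
# Create a stand-in for the workflow file imported inside the subtree prefix;
# GitHub Actions ignores workflows that are not under top-level .github/workflows/.
mkdir -p model-cards/.github/workflows .github/workflows
echo 'name: publish-model-card' > model-cards/.github/workflows/publish-model-card.yml

# Relocate it to where Actions will actually pick it up.
mv model-cards/.github/workflows/publish-model-card.yml .github/workflows/

ls .github/workflows/publish-model-card.yml
```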

Updates generate-model-card workflow and agent config to write to model-cards/ai/ instead of model-cards/.

ilopezluna and others added 21 commits October 8, 2025 16:09
…ocker#53)

* add Granite Docling model card with description and characteristics

* Apply suggestion from @gemini-code-assist[bot]

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>

* Apply suggestion from @gemini-code-assist[bot]

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>

* Apply suggestion from @gemini-code-assist[bot]

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>

* Apply suggestion from @gemini-code-assist[bot]

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>

* Apply suggestion from @gemini-code-assist[bot]

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>

---------

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
* add gpt-oss-safeguard

* Update ai/gpt-oss-safeguard.md

Co-authored-by: sourcery-ai[bot] <58596630+sourcery-ai[bot]@users.noreply.github.com>

---------

Co-authored-by: sourcery-ai[bot] <58596630+sourcery-ai[bot]@users.noreply.github.com>
* add snowflakw-arctic-embed-l-v2-vllm

* Update ai/snowflakw-arctic-embed-l-v2-vllm.md

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>

---------

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
* add qwen3-embedding

* Update ai/qwen3-embedding.md

Co-authored-by: sourcery-ai[bot] <58596630+sourcery-ai[bot]@users.noreply.github.com>

* Update ai/qwen3-embedding-vllm.md

Co-authored-by: sourcery-ai[bot] <58596630+sourcery-ai[bot]@users.noreply.github.com>

* Update ai/qwen3-embedding-vllm.md

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>

* Update ai/qwen3-embedding-vllm.md

Co-authored-by: sourcery-ai[bot] <58596630+sourcery-ai[bot]@users.noreply.github.com>

* remove unnecessary blank lines in qwen3-embedding and qwen3-embedding-vllm documentation

---------

Co-authored-by: sourcery-ai[bot] <58596630+sourcery-ai[bot]@users.noreply.github.com>
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
* ai: add qwen3-reranker for vLLM

Signed-off-by: Dorin Geman <dorin.geman@docker.com>

* ai: add qwen3-reranker

Signed-off-by: Dorin Geman <dorin.geman@docker.com>

---------

Signed-off-by: Dorin Geman <dorin.geman@docker.com>
* fix endpoint URL for snowflakw-arctic-embed-l-v2-vllm model

* add Ministral 3 model card with detailed specifications and use cases
Signed-off-by: Dorin Geman <dorin.geman@docker.com>
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
git-subtree-dir: model-cards
git-subtree-mainline: 57355f9
git-subtree-split: 2197498
GitHub Actions only reads workflows from the repository root's
.github/workflows/ directory. Updated tool paths to reflect the new
model-cards/ subtree prefix.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Model cards from the imported repository live under model-cards/ai/,
not model-cards/. Update the generator workflow and agent config to
write to the correct location.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

@sourcery-ai sourcery-ai bot left a comment


Sorry @doringeman, your pull request is larger than the review limit of 150000 diff characters

@gemini-code-assist

Summary of Changes

Hello @doringeman, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request integrates the external docker/model-cards repository directly into the current project, creating a dedicated model-cards/ directory. This consolidation streamlines the management of AI model documentation and related tooling by centralizing the codebase. It also includes necessary updates to CI/CD workflows and configurations to reflect the new repository structure and ensure continued functionality for model card generation and publishing.

Highlights

  • Repository Consolidation: Consolidated the standalone docker/model-cards repository into the main repository under model-cards/, preserving the full commit history using git subtree add.
  • Workflow Relocation: Relocated the publish-model-card.yml GitHub Actions workflow to the top-level .github/workflows/ directory due to GitHub Actions requirements.
  • Model Card Generation Path Update: Updated the generate-model-card workflow and agent configuration to save generated model cards to the new model-cards/ai/ directory.
  • New CLI Tool: Introduced a new CLI tool, model-cards-cli, for working with model cards, including updating variant tables and inspecting model repositories.
  • Model Compatibility Tester: Added a new model-compatibility-tester.sh script to test Docker AI model compatibility on various hardware configurations.


Changelog
  • .github/agents/model-card-generator.yaml
    • Updated the output path for generated model cards to model-cards/ai/.
  • model-cards/.gitignore
    • Added a new .gitignore file for the consolidated repository.
  • model-cards/.gitkeep
    • Removed the .gitkeep file.
  • model-cards/LICENSE
    • Added the Apache License 2.0 file.
  • model-cards/README.md
    • Added the main README for the Docker AI Models Repository.
  • model-cards/ai/deepcoder-preview.md
    • Added the model card for DeepCoder-14B.
  • model-cards/ai/deepseek-r1-distill-llama.md
    • Added the model card for Deepseek-R1-Distill-Llama.
  • model-cards/ai/deepseek3.2.md
    • Added the model card for DeepSeek-V3.2.
  • model-cards/ai/devstral-small.md
    • Added the model card for Devstral Small 1.1.
  • model-cards/ai/functiongemma-vllm.md
    • Added the model card for FunctionGemma (vLLM optimized deployment).
  • model-cards/ai/functiongemma.md
    • Added the GGUF version model card for FunctionGemma.
  • model-cards/ai/gemma3-qat.md
    • Added the model card for Gemma 3 QAT.
  • model-cards/ai/gemma3.md
    • Added the model card for Gemma 3.
  • model-cards/ai/gemma3n.md
    • Added the model card for Gemma 3n.
  • model-cards/ai/gpt-oss-safeguard.md
    • Added the model card for GPT‑OSS-safeguard.
  • model-cards/ai/gpt-oss.md
    • Added the model card for GPT‑OSS.
  • model-cards/ai/granite-4.0-h-micro.md
    • Added the model card for Granite 4.0 H Micro.
  • model-cards/ai/granite-4.0-h-nano.md
    • Added the model card for Granite 4.0 H Nano.
  • model-cards/ai/granite-4.0-h-small.md
    • Added the model card for Granite 4.0 H Small.
  • model-cards/ai/granite-4.0-h-tiny.md
    • Added the model card for Granite 4.0 H Tiny.
  • model-cards/ai/granite-4.0-micro.md
    • Added the model card for Granite 4.0 Micro.
  • model-cards/ai/granite-4.0-nano.md
    • Added the model card for Granite 4.0 Nano.
  • model-cards/ai/granite-docling.md
    • Added the model card for Granite Docling.
  • model-cards/ai/granite-embedding-multilingual.md
    • Added the model card for Granite Embedding Multilingual.
  • model-cards/ai/kimi-k2-vllm.md
    • Added the model card for Kimi K2 (vLLM optimized deployment).
  • model-cards/ai/kimi-k2.md
    • Added the GGUF version model card for Kimi K2.
  • model-cards/ai/llama3.1.md
    • Added the model card for Llama 3.1.
  • model-cards/ai/llama3.2.md
    • Added the model card for Llama 3.2 Instruct.
  • model-cards/ai/llama3.3.md
    • Added the model card for Llama 3.3.
  • model-cards/ai/magistral-small-3.2.md
    • Added the model card for Mistral-Small-3.2-24B-Instruct-2506.
  • model-cards/ai/ministral3.md
    • Added the model card for Ministral 3 Instruct 2512.
  • model-cards/ai/mistral-nemo.md
    • Added the model card for Mistral Nemo Instruct 2407.
  • model-cards/ai/mistral.md
    • Added the model card for Mistral 7B Instruct v0.2.
  • model-cards/ai/moondream2.md
    • Added the model card for Moondream 2.
  • model-cards/ai/mxbai-embed-large.md
    • Added the model card for mxbai-embed-large-v1.
  • model-cards/ai/nomic-embed-text-v1.5.md
    • Added the model card for Nomic Embed Text.
  • model-cards/ai/phi4.md
    • Added the model card for Phi-4.
  • model-cards/ai/qwen2.5.md
    • Added the model card for Qwen2.5-7B Instruct.
  • model-cards/ai/qwen3-coder-next-vllm.md
    • Added the model card for Qwen3-Coder-Next (vLLM optimized deployment).
  • model-cards/ai/qwen3-coder-next.md
    • Added the GGUF version model card for Qwen3-Coder-Next.
  • model-cards/ai/qwen3-coder.md
    • Added the GGUF version model card for Qwen3‑Coder‑30B‑A3B‑Instruct.
  • model-cards/ai/qwen3-embedding-vllm.md
    • Added the model card for Qwen3-Embedding (vLLM optimized deployment).
  • model-cards/ai/qwen3-embedding.md
    • Added the model card for Qwen3-Embedding.
  • model-cards/ai/qwen3-reranker-vllm.md
    • Added the model card for Qwen3-Reranker (vLLM optimized deployment).
  • model-cards/ai/qwen3-reranker.md
    • Added the model card for Qwen3-Reranker.
  • model-cards/ai/qwen3-vl.md
    • Added the GGUF version model card for Qwen3 VL.
  • model-cards/ai/qwen3.md
    • Added the model card for Qwen3.
  • model-cards/ai/qwq.md
    • Added the model card for QwQ.
  • model-cards/ai/seed-oss.md
    • Added the GGUF version model card for Seed-OSS.
  • model-cards/ai/smollm2.md
    • Added the model card for SmolLM2.
  • model-cards/ai/smollm3.md
    • Added the model card for SmolLM3.
  • model-cards/ai/smolvlm.md
    • Added the model card for SmolVLM.
  • model-cards/ai/snowflakw-arctic-embed-l-v2-vllm.md
    • Added the model card for Snowflake's Arctic-embed-l-v2.0.
  • model-cards/logos/.gitignore
    • Added a new .gitignore file for the logos directory.
  • model-cards/logos/byte-seed-120x.svg
    • Added a new SVG logo for ByteDance Seed.
  • model-cards/logos/byte-seed-280x184.svg
    • Added a new SVG logo for ByteDance Seed.
  • model-cards/logos/byte-seed-32x.svg
    • Added a new SVG logo for ByteDance Seed.
  • model-cards/logos/deepseek-120x-hub@2x.svg
    • Added a new SVG logo for DeepSeek.
  • model-cards/logos/deepseek-280x184-overview@2x.svg
    • Added a new SVG logo for DeepSeek.
  • model-cards/logos/deepseek-32x-hub@2x.svg
    • Added a new SVG logo for DeepSeek.
  • model-cards/logos/gemma-120x-hub@2x.svg
    • Added a new SVG logo for Gemma.
  • model-cards/logos/gemma-280x184-overview@2x.svg
    • Added a new SVG logo for Gemma.
  • model-cards/logos/gemma-32x-hub@2x.svg
    • Added a new SVG logo for Gemma.
  • model-cards/logos/hugginfface-120x-hub@2x.svg
    • Added a new SVG logo for Huggingface.
  • model-cards/logos/hugginfface-280x184-overview@2x.svg
    • Added a new SVG logo for Huggingface.
  • model-cards/logos/hugginfface-32x-hub@2x.svg
    • Added a new SVG logo for Huggingface.
  • model-cards/logos/ibm-120x-hub.svg
    • Added a new SVG logo for IBM.
  • model-cards/logos/ibm-280x184-overview.svg
    • Added a new SVG logo for IBM.
  • model-cards/logos/ibm-32x.hub.svg
    • Added a new SVG logo for IBM.
  • model-cards/logos/meta-120x-hub@2x.svg
    • Added a new SVG logo for Meta.
  • model-cards/logos/meta-280x184-overview@2x.svg
    • Added a new SVG logo for Meta.
  • model-cards/logos/meta-32x-hub@2x.svg
    • Added a new SVG logo for Meta.
  • model-cards/logos/mistral-120x-hub@2x.svg
    • Added a new SVG logo for Mistral.
  • model-cards/logos/mistral-120x-hub_alt@2x.svg
    • Added a new SVG logo for Mistral.
  • model-cards/logos/mistral-280x184-overview@2x.svg
    • Added a new SVG logo for Mistral.
  • model-cards/logos/mistral-32x-hub@2x.svg
    • Added a new SVG logo for Mistral.
  • model-cards/logos/mistral-32x-hub_alt@2x.svg
    • Added a new SVG logo for Mistral.
  • model-cards/logos/mixedbread-120x-hub@2x.svg
    • Added a new SVG logo for Mixedbread AI.
  • model-cards/logos/mixedbread-280x184-overview@2x.svg
    • Added a new SVG logo for Mixedbread AI.
  • model-cards/logos/mixedbread-32x-hub@2x.svg
    • Added a new SVG logo for Mixedbread AI.
  • model-cards/logos/moondream.svg
    • Added a new SVG logo for Moondream.
  • model-cards/logos/nomic-120x-hub.svg
    • Added a new SVG logo for Nomic AI.
  • model-cards/logos/nomic-280x184-overview.svg
    • Added a new SVG logo for Nomic AI.
  • model-cards/logos/nomic-32x-hub.svg
    • Added a new SVG logo for Nomic AI.
  • model-cards/logos/openai-120x-hub.svg
    • Added a new SVG logo for OpenAI.
  • model-cards/logos/openai-280x184-overview.svg
    • Added a new SVG logo for OpenAI.
  • model-cards/logos/openai-32x-hub.svg
    • Added a new SVG logo for OpenAI.
  • model-cards/logos/phi-120x-hub@2x.svg
    • Added a new SVG logo for Phi.
  • model-cards/logos/phi-280x184-overview@2x.svg
    • Added a new SVG logo for Phi.
  • model-cards/logos/phi-32x-hub@2x.svg
    • Added a new SVG logo for Phi.
  • model-cards/logos/qwen-120x-hub@2x.svg
    • Added a new SVG logo for Qwen.
  • model-cards/logos/qwen-280x184-overview@2x.svg
    • Added a new SVG logo for Qwen.
  • model-cards/logos/qwen-32x-hub@2x.svg
    • Added a new SVG logo for Qwen.
  • model-cards/template.md
    • Added a new markdown template for model cards.
  • model-cards/tools/model-cards-cli/Makefile
    • Added a new Makefile for the model cards CLI.
  • model-cards/tools/model-cards-cli/README.md
    • Added a README for the model cards CLI.
  • model-cards/tools/model-cards-cli/go.mod
    • Added a new Go module file for the CLI.
  • model-cards/tools/model-cards-cli/go.sum
    • Added a new Go sum file for the CLI.
  • model-cards/tools/model-cards-cli/internal/domain/model.go
    • Added a new Go file defining model domain types.
  • model-cards/tools/model-cards-cli/internal/gguf/file.go
    • Added a new Go file for GGUF file parsing.
  • model-cards/tools/model-cards-cli/internal/gguf/parser.go
    • Added a new Go file for GGUF parser implementation.
  • model-cards/tools/model-cards-cli/internal/logger/logger.go
    • Added a new Go file for logging utilities.
  • model-cards/tools/model-cards-cli/internal/markdown/files.go
    • Added a new Go file for markdown file utilities.
  • model-cards/tools/model-cards-cli/internal/markdown/updater.go
    • Added a new Go file for markdown table updating logic.
  • model-cards/tools/model-cards-cli/internal/registry/client.go
    • Added a new Go file for registry client implementation.
  • model-cards/tools/model-cards-cli/internal/utils/format.go
    • Added a new Go file for formatting utilities.
  • model-cards/tools/model-cards-cli/internal/utils/utils.go
    • Added a new Go file for general utilities.
  • model-cards/tools/model-cards-cli/main.go
    • Added the main Go file for the model cards CLI application.
  • model-cards/tools/model-cards-cli/types/types.go
    • Added a new Go file defining model descriptor types.
  • model-cards/tools/model-compatibility-tester/README.md
    • Added a README for the model compatibility tester.
  • model-cards/tools/model-compatibility-tester/test-model-compatibility.sh
    • Added a new shell script for model compatibility testing.
Ignored Files
  • Ignored by pattern: .github/workflows/** (2)
    • .github/workflows/generate-model-card.yml
    • .github/workflows/publish-model-card.yml
Activity
  • doringeman created this pull request to merge the docker/model-cards repository.
  • The pull request description provides a clear overview of the changes, including the use of git subtree add and updates to workflows and configurations.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

doringeman added a commit to docker/model-cards that referenced this pull request Feb 10, 2026
All content now lives under model-cards/ in the model-runner monorepo. See docker/model-runner#650.

Signed-off-by: Dorin Geman <dorin.geman@docker.com>

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request is a significant contribution, merging the standalone model-cards repository into this one. It adds a wealth of model documentation and a new CLI tool. My review focuses on the newly added markdown files, where I've identified several areas for improvement to ensure consistency, correctness, and maintainability. The key issues include broken image links due to absolute URLs pointing to the old repository, inconsistencies in command examples, and some duplicated content in model variant tables. Addressing these points will greatly improve the quality and usability of the documentation.

## 🚀 Models Overview

### DeepCoder Preview
![DeepCoder Logo](https://github.com/docker/model-cards/raw/refs/heads/main/logos/agentica-120x-hub@2x.png)

Severity: high

The logo URLs in this README and other model card files are absolute links to the old docker/model-cards GitHub repository. After this merge, these links will be fragile and likely break. They should be updated to relative paths to point to the new model-cards/logos/ directory within this repository.

For example, ![DeepSeek Logo](https://github.com/docker/model-cards/raw/refs/heads/main/logos/deepseek-120x-hub@2x.svg) should become ![DeepSeek Logo](logos/deepseek-120x-hub@2x.svg).

Furthermore, some of the logos referenced (like for DeepCoder Preview and GPT-OSS Safeguard) are not present in the model-cards/logos directory in this pull request, which will result in broken images.
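A hypothetical batch fix along these lines, shown on a demo file (GNU sed's `-i` is assumed; the URL and file name come from the example in this comment):

```shell
# Demo file containing one absolute logo URL of the kind flagged above.
printf '![DeepSeek Logo](https://github.com/docker/model-cards/raw/refs/heads/main/logos/deepseek-120x-hub@2x.svg)\n' > README.md

# Rewrite the absolute repository URL prefix to a relative path.
# Using '#' as the s-command delimiter avoids escaping the slashes in the URL.
sed -i 's#https://github.com/docker/model-cards/raw/refs/heads/main/logos/#logos/#g' README.md

cat README.md   # → ![DeepSeek Logo](logos/deepseek-120x-hub@2x.svg)
```

Run across all model cards, the same substitution could be applied with `sed -i ... model-cards/README.md model-cards/ai/*.md`, adjusting the relative prefix for files that live one directory deeper.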

📌 **Description:**
24B multimodal instruction model by Mistral AI, tuned for accuracy, tool use & fewer repeats.

📂 **Model File:** [`ai/magistral-small-3.2.md`](ai/magistral-small-2506.md)

Severity: high

The link for the model file is broken. The link text ai/magistral-small-3.2.md is correct, but the target points to ai/magistral-small-2506.md, which does not exist.

Suggested change
📂 **Model File:** [`ai/magistral-small-3.2.md`](ai/magistral-small-2506.md)
📂 **Model File:** [`ai/magistral-small-3.2.md`](ai/magistral-small-3.2.md)

24B multimodal instruction model by Mistral AI, tuned for accuracy, tool use & fewer repeats.

📂 **Model File:** [`ai/magistral-small-3.2.md`](ai/magistral-small-2506.md)
🐳 **Docker Hub:** [`docker.io/ai/magistral-small-3.2`](https://hub.docker.com/r/ai/magistral-small-2506)

Severity: high

The Docker Hub link for this model appears to be incorrect. The link target .../magistral-small-2506 does not match the model name magistral-small-3.2.

Suggested change
🐳 **Docker Hub:** [`docker.io/ai/magistral-small-3.2`](https://hub.docker.com/r/ai/magistral-small-2506)
🐳 **Docker Hub:** [`docker.io/ai/magistral-small-3.2`](https://hub.docker.com/r/ai/magistral-small-3.2)

First, pull the model:

```bash
docker model pull {model_name}
```

Severity: high

The command uses a placeholder {model_name}. It should be replaced with the actual model name for the command to be runnable.

Suggested change
docker model pull {model_name}
docker model pull ai/granite-embedding-multilingual

Then run the model:

```bash
docker model run {model_name}
```

Severity: high

The command uses a placeholder {model_name}. It should be replaced with the actual model name for the command to be runnable.

Suggested change
docker model run {model_name}
docker model run ai/granite-embedding-multilingual

---

### SmolLM 2
![Huggingface Logo](https://github.com/docker/model-cards/raw/refs/heads/main/logos/hugginfface-120x-hub@2x.svg)

Severity: medium

There's a typo in the logo URL: hugginfface should be huggingface. This typo is also present in the filename itself (hugginfface-120x-hub@2x.svg). Please correct the link and also consider renaming the file model-cards/logos/hugginfface-120x-hub@2x.svg to huggingface-120x-hub@2x.svg for consistency. This issue is also present on line 340.

Suggested change
![Huggingface Logo](https://github.com/docker/model-cards/raw/refs/heads/main/logos/hugginfface-120x-hub@2x.svg)
![Huggingface Logo](logos/huggingface-120x-hub@2x.svg)

| Model variant | Parameters | Quantization | Context window | VRAM¹ | Size |
|---------------|------------|--------------|----------------|------|-------|
| `ai/deepcoder-preview:latest`<br><br>`ai/deepcoder-preview:14B-Q4_K_M` | 14B | IQ2_XXS/Q4_K_M | 131K tokens | 9.36 GiB | 8.37 GB |
| `ai/deepcoder-preview:14B-Q4_K_M` | 14B | IQ2_XXS/Q4_K_M | 131K tokens | 9.36 GiB | 8.37 GB |

Severity: medium

This line is a duplicate of the 14B-Q4_K_M variant already listed in the line above. Please remove this redundant entry to avoid confusion. This issue is present in several other model card files as well.

## Use this AI model with Docker Model Runner

```bash
docker model run deepseek-v3.2-vllm
```

Severity: medium

The model name deepseek-v3.2-vllm in the command is inconsistent with the model card's filename (deepseek3.2.md) and the ai/ namespacing convention used in most other model cards. For consistency and clarity, please consider using ai/deepseek3.2. This inconsistency is also present in several other new model cards.

Suggested change
docker model run deepseek-v3.2-vllm
docker model run ai/deepseek3.2

|-----------------------|----------------------------------------------------------------------------------------------------------------------|
| **Provider** | IBM (Granite Embedding Team) |
| **Architecture** | Encoder‑only transformer, XLM‑RoBERTa‑like bi‑encoder |
| **Cutoff date** | Released December 18, 2024:contentReference |

Severity: medium

The cutoff date contains what appears to be a placeholder or error (:contentReference). Please remove it or replace it with the correct information.

Suggested change
| **Cutoff date** | Released December 18, 2024:contentReference |
| **Cutoff date** | Released December 18, 2024 |

Comment on lines +11 to +15
- **AI assistance on edge devices**, Running chatbots and virtual assistants with minimal latency on low-power * hardware.
- **Code assistance** , Writing, debugging, and optimizing code on mobile or embedded systems.
- **Content generation** ,Drafting emails, summaries, and creative content on lightweight devices.
- **Low-power AI for smart gadgets**, Enhancing voice assistants on wearables and IoT devices.
- **Edge-based data processing**, Summarizing and analyzing data locally for security and efficiency.

Severity: medium

This list has some minor formatting issues: a stray asterisk on the first line, and inconsistent spacing around commas on subsequent lines. The suggested change improves readability and consistency.

Suggested change
- **AI assistance on edge devices**, Running chatbots and virtual assistants with minimal latency on low-power * hardware.
- **Code assistance** , Writing, debugging, and optimizing code on mobile or embedded systems.
- **Content generation** ,Drafting emails, summaries, and creative content on lightweight devices.
- **Low-power AI for smart gadgets**, Enhancing voice assistants on wearables and IoT devices.
- **Edge-based data processing**, Summarizing and analyzing data locally for security and efficiency.
- **AI assistance on edge devices**: Running chatbots and virtual assistants with minimal latency on low-power hardware.
- **Code assistance**: Writing, debugging, and optimizing code on mobile or embedded systems.
- **Content generation**: Drafting emails, summaries, and creative content on lightweight devices.
- **Low-power AI for smart gadgets**: Enhancing voice assistants on wearables and IoT devices.
- **Edge-based data processing**: Summarizing and analyzing data locally for security and efficiency.

ilopezluna pushed a commit to docker/model-cards that referenced this pull request Feb 10, 2026
All content now lives under model-cards/ in the model-runner monorepo. See docker/model-runner#650.

Signed-off-by: Dorin Geman <dorin.geman@docker.com>
@doringeman doringeman closed this Feb 10, 2026
