
Support for Turing (pre-8.0) NVIDIA GPU models #55

@phenobarbital

Description


I have an RTX 5000 (16GB VRAM) card. It is a Turing chip, but Flash-Attention only works with Ampere (compute capability >= 8.0) and newer GPUs.

Could we add SDPA as a fallback, or FlashInfer (which already supports Turing-based cards)?
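For illustration, a minimal sketch of what an SDPA fallback could look like: pick flash-attn only when the device reports compute capability >= 8.0, otherwise use PyTorch's built-in `torch.nn.functional.scaled_dot_product_attention`, which works on Turing. The `attention` wrapper and its tensor layouts are hypothetical and just show the idea, not this project's actual code:

```python
import torch
import torch.nn.functional as F

def attention(q, k, v, causal: bool = False):
    """Hypothetical wrapper: flash-attn on Ampere+ (SM >= 8.0), SDPA otherwise.

    q, k, v are assumed to be shaped (batch, seqlen, nheads, headdim),
    the layout flash-attn expects.
    """
    major, _ = torch.cuda.get_device_capability(q.device)
    if major >= 8:
        try:
            from flash_attn import flash_attn_func
            return flash_attn_func(q, k, v, causal=causal)
        except ImportError:
            pass  # flash-attn not installed; fall through to SDPA
    # SDPA runs on Turing (SM 7.5) but expects (batch, nheads, seqlen, headdim)
    q, k, v = (t.transpose(1, 2) for t in (q, k, v))
    out = F.scaled_dot_product_attention(q, k, v, is_causal=causal)
    return out.transpose(1, 2)
```

SDPA would keep the memory savings of a fused kernel on supported shapes while staying compatible with pre-Ampere cards; FlashInfer would be an alternative backend behind the same kind of capability check.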
