A curated list of tools, frameworks, resources, and guides for AI compliance, governance, and regulatory risk management.
Covers: EU AI Act · NIST AI RMF · ISO/IEC 42001 · GPAI · SOC 2 AI · IEEE standards
Maintained by @mmilovanovic87 — Associate Professor, University of Niš · 50+ papers on AI systems.
## Contents

- Frameworks and Regulations
- Self-Assessment Tools
- CI/CD and Compliance-as-Code
- Enterprise Platforms
- Libraries and SDKs
- Datasets and Benchmarks
- Learning Resources
- Newsletters and Communities
- Conferences and Events
## Frameworks and Regulations

- EU AI Act Full Text - Official regulation text.
- EU AI Act Explorer - Interactive article browser.
- EU AI Act Compliance Checker - Official risk classification wizard.
- High-Risk AI Systems (Annex III) - Full list of high-risk use cases.
- GPAI Code of Practice - General-purpose AI model obligations.
- NIST AI RMF 1.0 - Official Risk Management Framework.
- NIST AI RMF Playbook - Implementation guidance.
- NIST AI RMF Quick Start Guide - Condensed onboarding.
- AI RMF Crosswalk - Mapping to other frameworks.
- ISO 42001 Overview - AI Management System standard.
- ISO 42001 vs EU AI Act Mapping - Crosswalk between standards.
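The EU AI Act's risk-based structure (prohibited practices, high-risk systems per Annex III, limited-risk systems with transparency obligations, and minimal risk) is the logic the Compliance Checker wizard above walks through. A toy sketch of that tiering is below; the tier names are real, but the keyword buckets and mapping function are simplified placeholders for illustration, not legal guidance:

```python
# Toy sketch of the EU AI Act's four risk tiers. The tier names are real;
# the keyword buckets below are illustrative placeholders, not the Act's
# actual legal categories, and this is not legal advice.

PROHIBITED = {"social_scoring", "subliminal_manipulation"}
HIGH_RISK = {"biometric_identification", "credit_scoring", "hiring"}  # Annex III-style uses
LIMITED = {"chatbot", "deepfake"}  # transparency obligations apply

def classify_use_case(use_case: str) -> str:
    """Map a use-case keyword to a simplified EU AI Act risk tier."""
    if use_case in PROHIBITED:
        return "prohibited"
    if use_case in HIGH_RISK:
        return "high"
    if use_case in LIMITED:
        return "limited"
    return "minimal"

print(classify_use_case("hiring"))   # high
print(classify_use_case("chatbot"))  # limited
```

A real classification depends on context, deployer role, and exemptions, which is why the official wizard asks a series of questions rather than matching keywords.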
## Self-Assessment Tools

- GapSight - Open-source ML compliance self-assessment. Maps ML metrics to EU AI Act, NIST AI RMF, and ISO 42001 gaps. Includes a GitHub Action for CI/CD compliance checks. Free, no login required.
- VerifyWise - Open-source AI governance platform covering EU AI Act, ISO 42001, NIST AI RMF.
## CI/CD and Compliance-as-Code

- FRAI - AI compliance checks with Git pre-commit hooks.
- GapSight GitHub Action - Run EU AI Act / NIST AI RMF / ISO 42001 compliance checks in CI/CD pipelines. Generates compliance artifacts alongside test results.
- Systima Comply - Open-source EU AI Act scanner with AST-based detection of 37+ ML frameworks.
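As a rough illustration of the compliance-as-code pattern these tools enable, a CI workflow can run a compliance check on every push and publish the report as a build artifact. The sketch below uses a hypothetical action name, inputs, and report path as placeholders — consult each tool's own documentation for the actual interface:

```yaml
# Hypothetical workflow sketch. The compliance action name, its inputs,
# and the report path are placeholders, not any real tool's interface.
name: ai-compliance
on: [push, pull_request]

jobs:
  compliance:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Placeholder step: substitute the real compliance action here.
      - uses: example/compliance-check@v1        # hypothetical
        with:
          frameworks: "eu-ai-act,nist-ai-rmf,iso-42001"  # hypothetical input
      - uses: actions/upload-artifact@v4
        with:
          name: compliance-report
          path: compliance-report.json           # hypothetical output path
```

Running the check in CI keeps compliance evidence versioned alongside test results, which is the core idea behind the tools listed above.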
## Enterprise Platforms

- Vanta - Automated compliance platform, includes EU AI Act coverage. From ~$10K/year.
- Credo AI - AI governance platform with policy packs and risk scoring.
- Regulativ.ai - Supports 40+ compliance frameworks.
- Aikido Security - Developer-first security and compliance, includes AI system checks.
- Enzai - Pre-built EU AI Act policy packs and audit workflows.
## Libraries and SDKs

- Fairlearn - Python library for assessing and improving fairness in ML models.
- AI Fairness 360 (IBM) - Toolkit to detect and mitigate bias in ML models.
- Alibi Detect - Outlier, adversarial, and drift detection.
- Evidently AI - ML model monitoring and evaluation.
- Deepchecks - Testing and monitoring for ML models.
- SHAP - Explainability for ML model outputs.
- LIME - Local interpretable model-agnostic explanations.
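To give a feel for the kind of group-fairness metric libraries like Fairlearn and AI Fairness 360 report, here is a from-scratch sketch of demographic parity difference — the gap in positive-prediction rates between groups. This is a simplified illustration, not the API of any library above:

```python
# From-scratch sketch of demographic parity difference, the kind of group
# fairness metric Fairlearn reports. Illustrative only; not Fairlearn's API.

def demographic_parity_difference(y_pred, groups):
    """Absolute gap in positive-prediction (selection) rate between groups.

    0.0 means every group is selected at the same rate; larger values
    indicate a bigger disparity.
    """
    counts = {}  # group -> (total, positives)
    for pred, group in zip(y_pred, groups):
        n, pos = counts.get(group, (0, 0))
        counts[group] = (n + 1, pos + (1 if pred == 1 else 0))
    selection_rates = [pos / n for n, pos in counts.values()]
    return max(selection_rates) - min(selection_rates)

# Toy example: group "a" is selected 3/4 of the time, group "b" 1/4.
preds  = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_difference(preds, groups))  # 0.5
```

Fairlearn's own implementation additionally takes ground-truth labels and supports weighting; this sketch only conveys the underlying selection-rate comparison.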
## Datasets and Benchmarks

- AI Incident Database - Documented real-world AI failures.
- NIST ARIA - Assessing Risks and Impacts of AI benchmark.
- BIG-bench - Diverse task suite for LLM evaluation.
## Learning Resources

- EU AI Office - Official EU AI Act implementation body.
- NIST AI Resource Center - Full NIST AI RMF documentation and tools.
- ENISA AI Cybersecurity - EU cybersecurity agency AI guidance.
- EU AI Act Compliance Roadmap for August 2026 Deadline - LegalNodes step-by-step guide.
- The EU AI Act's Hidden Market - €17B market analysis.
- How open-source devtools get monetized - PostHog handbook on open-source business models.
- A Survey of Fairness in Machine Learning - Moritz Hardt et al.
- Interpretable Machine Learning - Christoph Molnar, free book.
## Newsletters and Communities

- MLOps Community Slack - 27,900+ ML engineers. Best community for production ML.
- DataTalks.Club Slack - 13,300+ data practitioners.
- IAPP AI Governance Community - Privacy and AI governance professionals.
- EU Artificial Intelligence Act Newsletter - Weekly updates on EU AI Act developments.
- TLDR AI - Daily AI newsletter, 500K+ subscribers.
- The Batch (DeepLearning.AI) - Weekly AI news with practical focus.
## Conferences and Events

- IAPP AI Governance Global Europe 2026 - Dublin, June 1–4, 2026. Premier EU AI Act compliance event.
- MLOps World - Annual conference for ML engineering and operations.
- FAccT (ACM) - Fairness, Accountability, and Transparency in ML.
- NeurIPS - Top ML research conference with growing governance track.
## Contributing

Contributions welcome. Please read CONTRIBUTING.md before submitting a PR.
To add a resource: fork the repo, add your entry in the correct section, and open a pull request. Criteria: publicly accessible, actively maintained, genuinely useful for AI compliance practitioners.