
Awesome AI Compliance

A curated list of tools, frameworks, resources, and guides for AI compliance, governance, and regulatory risk management.

Covers: EU AI Act · NIST AI RMF · ISO/IEC 42001 · GPAI · SOC 2 AI · IEEE standards

Maintained by @mmilovanovic87 — Associate Professor, University of Niš · 50+ papers on AI systems.


Contents

  • Frameworks and Regulations
  • Self-Assessment Tools
  • CI/CD and Compliance-as-Code
  • Enterprise Platforms
  • Libraries and SDKs
  • Datasets and Benchmarks
  • Learning Resources
  • Newsletters and Communities
  • Conferences and Events
  • Contributing

Frameworks and Regulations

EU AI Act

NIST AI RMF

ISO/IEC 42001


Self-Assessment Tools

  • GapSight - Open-source ML compliance self-assessment. Maps ML metrics to gaps against the EU AI Act, NIST AI RMF, and ISO/IEC 42001. Includes a GitHub Action for CI/CD compliance checks. Free, no login required.
  • VerifyWise - Open-source AI governance platform covering EU AI Act, ISO 42001, NIST AI RMF.
  • FRAI - AI compliance checks with Git pre-commit hooks.
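Pre-commit compliance checks like the ones above work because Git aborts a commit when the `pre-commit` hook exits with a nonzero status. A minimal sketch of such a hook (the artifact file names are illustrative assumptions, not any tool's actual requirements):

```python
# Hypothetical Git pre-commit hook body: block a commit when required
# compliance artifacts are absent from the repository root.
# The file names below are illustrative only; tools like FRAI ship
# richer, configurable checks.
from pathlib import Path

REQUIRED = ("MODEL_CARD.md", "RISK_ASSESSMENT.md")  # assumed names

def missing_artifacts(root="."):
    """Return the REQUIRED files not present under root."""
    return [name for name in REQUIRED if not (Path(root) / name).is_file()]

def hook():
    """Exit status for the hook: nonzero aborts the commit."""
    missing = missing_artifacts()
    if missing:
        print("Commit blocked; add:", ", ".join(missing))
        return 1
    return 0

# Installed as .git/hooks/pre-commit (and marked executable), the
# script would end with:  import sys; sys.exit(hook())
```

The dedicated tools add configurable rule sets and framework mappings on top of this same exit-status mechanism.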

CI/CD and Compliance-as-Code

  • GapSight GitHub Action - Run EU AI Act / NIST AI RMF / ISO 42001 compliance checks in CI/CD pipelines. Generates compliance artifacts alongside test results.
  • Systima Comply - Open-source EU AI Act scanner with AST-based detection of 37+ ML frameworks.
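Compliance-as-code reduces to the same pattern regardless of tool: evaluate a set of gate checks in CI, emit a machine-readable artifact, and fail the pipeline on any gap. A tool-agnostic sketch (check names, the threshold, and the artifact path are all illustrative assumptions):

```python
# Tool-agnostic compliance-as-code sketch: run a few gate checks,
# write a JSON artifact for the pipeline, and report pass/fail.
# Check names, thresholds, and the artifact path are illustrative.
import json

def run_checks(metrics):
    """Map reported model metrics to boolean compliance gates."""
    return {
        "accuracy_documented": "accuracy" in metrics,
        "fairness_gap_ok": metrics.get("parity_gap", 1.0) <= 0.1,
        "drift_monitored": bool(metrics.get("drift_check", False)),
    }

def gate(metrics, artifact_path="compliance_report.json"):
    """Write the artifact; return 0 if every gate passes, else 1."""
    results = run_checks(metrics)
    with open(artifact_path, "w") as f:
        json.dump(results, f, indent=2)
    return 0 if all(results.values()) else 1

# In a CI job the script would end with:
#     raise SystemExit(gate(loaded_metrics))
```

The JSON artifact is what lets CI systems archive compliance evidence next to test results, as the tools above do.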

Enterprise Platforms

  • Vanta - Automated compliance platform, includes EU AI Act coverage. From ~$10K/year.
  • Credo AI - AI governance platform with policy packs and risk scoring.
  • Regulativ.ai - Supports 40+ compliance frameworks.
  • Aikido Security - Developer-first security and compliance, includes AI system checks.
  • Enzai - Pre-built EU AI Act policy packs and audit workflows.

Libraries and SDKs

  • Fairlearn - Python library for assessing and improving fairness in ML models.
  • AI Fairness 360 (IBM) - Toolkit to detect and mitigate bias in ML models.
  • Alibi Detect - Outlier, adversarial, and drift detection.
  • Evidently AI - ML model monitoring and evaluation.
  • Deepchecks - Testing and monitoring for ML models.
  • SHAP - Explainability for ML model outputs.
  • LIME - Local interpretable model-agnostic explanations.
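To make concrete what the fairness libraries above measure, here is a hand-rolled version of one common metric, the demographic parity difference: the gap between groups' positive-prediction rates. Libraries such as Fairlearn and AI Fairness 360 compute this (plus many other metrics, with proper input validation); this sketch only illustrates what the number means.

```python
# Demographic parity difference, computed by hand for illustration.
# A value of 0 means all groups receive positive predictions at the
# same rate; larger values indicate a bigger disparity.

def selection_rate(preds, groups, group):
    """Fraction of positive predictions within one group."""
    rows = [p for p, g in zip(preds, groups) if g == group]
    return sum(rows) / len(rows)

def demographic_parity_difference(preds, groups):
    """Max minus min selection rate across all groups."""
    rates = {g: selection_rate(preds, groups, g) for g in set(groups)}
    return max(rates.values()) - min(rates.values())

preds  = [1, 0, 1, 1, 0, 1, 0, 0]                 # binary model outputs
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]  # protected attribute
print(demographic_parity_difference(preds, groups))  # 0.75 - 0.25 = 0.5
```

A regulator-facing report would typically pair a metric like this with explainability output (e.g. SHAP values) and drift monitoring from the other libraries listed.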

Datasets and Benchmarks


Learning Resources

Official Guidance

Articles and Guides

Papers


Newsletters and Communities


Conferences and Events


Contributing

Contributions welcome. Please read CONTRIBUTING.md before submitting a PR.

To add a resource: fork the repo, add your entry in the correct section, and open a pull request. Criteria: publicly accessible, actively maintained, genuinely useful for AI compliance practitioners.
