I'm Ankit — a GRC and Information Security professional based in Dubai, UAE. I've spent the last decade helping organisations figure out what their real risks are (not just what looks good on a dashboard), and building systems to actually do something about them.
Right now I'm the GRC Lead at PureHealth Group — the UAE's largest healthcare platform. That means overseeing compliance across 100+ hospitals and 4,000+ controls, which sounds impressive until you're the one responsible for making it actually work. The thing I'm most proud of there is bringing KPI breach rates down from 40% to 3%. That took a lot of persuasion, process redesign, and more than a few late nights.
Before PureHealth, I worked at Oman Arab Bank, Equifax, Deloitte, and PwC. Each was a different challenge — banking regulations, fintech audit, Big 4 consulting. What they had in common was that compliance was often treated as a box-ticking exercise. I've always pushed back on that.
Lately I've been going deep on AI Governance. ISO 42001 just dropped and the EU AI Act is becoming real — and most organisations have no idea how to actually implement either. That's the gap I'm trying to close, both in my day job and through some open-source work here on GitHub.
| Period | Role | Where | What I actually did |
|---|---|---|---|
| 2025–Present | GRC Lead | PureHealth Group, Abu Dhabi | Reduced KPI breaches from 40% → 3%; built compliance programme across 100+ SEHA facilities |
| 2024–2025 | IT Assurance & Compliance Manager | Oman Arab Bank, Muscat | Built AI-powered dashboards for audit tracking; resolved 60% of high-risk findings; reported to Board |
| 2024 | Information Security Manager | Equifax, India | Got to ISO 27001/42001/RBI readiness in 4 months; integrated Google Gemini for compliance workflows |
| 2022–2024 | Deputy Manager | Deloitte, India | Led SOC 2, ISO 27001, ISO 22301, NIST audits; managed a team of 4 senior consultants |
| 2021–2022 | Assistant Manager | PwC, India | Covered risk and controls across Oil & Gas, Fintech, and Retail |
A free tool for assessing AI risk without needing to read 500-page frameworks first. Four questions, about five minutes, plain-language results. It covers ISO 42001, the EU AI Act, NIST AI RMF, and a few others — and works whether you're a beginner trying to understand what's at risk, or an auditor who needs structured outputs.
Practical templates and checklists for implementing ISO 42001 — the new international standard for AI management systems. Gap assessments, risk registers, controls mapping, and a Python automation script that flags AI systems in your inventory whose risk assessments are no longer current.
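The currency check is simple in spirit: every system in the AI inventory should have a risk assessment no older than its review cycle. A minimal sketch of that idea (the field names, sample rows, and 12-month cycle below are my illustrative assumptions, not the repo's actual schema or script):

```python
from datetime import date, timedelta

# Hypothetical inventory rows; field names are illustrative assumptions,
# not the repo's actual schema.
inventory = [
    {"system": "triage-chatbot", "last_assessed": date(2025, 1, 10)},
    {"system": "claims-scoring", "last_assessed": date(2023, 6, 2)},
]

REVIEW_CYCLE = timedelta(days=365)  # assumed annual reassessment cycle

def stale_assessments(rows, today=None):
    """Return systems whose last assessment is older than the review cycle."""
    today = today or date.today()
    return [r["system"] for r in rows if today - r["last_assessed"] > REVIEW_CYCLE]

print(stale_assessments(inventory, today=date(2025, 7, 1)))
# → ['claims-scoring']
```

In practice the inventory would come from a spreadsheet or GRC platform export rather than a hardcoded list, but the check itself stays this small.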
Same approach applied to the EU AI Act. Risk classification guide, conformity assessment checklist, FRIA template, technical documentation template, incident reporting procedure — all the stuff you actually need to operationalise the regulation, not just read about it.
I've been thinking about this for a while: most AI governance frameworks are written by lawyers and regulators for lawyers and regulators. The actual teams building and deploying AI systems — the engineers, the risk managers, the business owners — can't use them.
My view is that governance has to be engineered, not just documented. That's why I pair every policy template with something executable: a checklist you can actually run, a script that automates the monitoring, a decision tree that gives you an answer rather than more questions.
That's what GRC Engineering means to me. Policy meets code.
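To show what "a decision tree that gives you an answer" can look like in code, here is a deliberately toy triage function in the spirit of the EU AI Act's risk tiers. The three yes/no questions are my simplification for illustration; real classification turns on the Act's detailed Annex criteria, not three booleans.

```python
def classify_ai_risk(prohibited_use: bool, high_risk_use: bool,
                     interacts_with_humans: bool) -> str:
    """Toy EU AI Act risk triage; an illustration, not legal analysis."""
    if prohibited_use:          # e.g. social scoring by public authorities
        return "unacceptable"
    if high_risk_use:           # e.g. employment, credit, critical infrastructure
        return "high"
    if interacts_with_humans:   # e.g. chatbots owing transparency notices
        return "limited"
    return "minimal"

print(classify_ai_risk(prohibited_use=False, high_risk_use=True,
                       interacts_with_humans=True))
# → high
```

The logic is trivial on purpose; the point is the interface. A risk owner answers a few questions and gets one word back instead of a reading assignment.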
Frameworks: ISO 27001 · ISO 42001 · EU AI Act · NIST CSF 2.0 · NIST AI RMF · SOC 2 · PCI-DSS · GDPR · NCA ECC · DORA
Tools: ServiceNow GRC · Archer · Power BI · Python (for automation) · Excel (yes, still)
Certifications: CISA · CRISC · CISM (in progress) · ISO 27001 Lead Auditor
If you're working on AI governance, building a GRC programme, or just trying to figure out what the EU AI Act actually requires your organisation to do — feel free to reach out. I'm always happy to compare notes.
📧 ankituniyal619@gmail.com | 🌍 Dubai, UAE 🇦🇪 | LinkedIn

