
.INK-HOME

Authority for AI that touches the real world.

AI can propose.
dot.ink decides whether it gets to act.

Capability gets headlines.
Authority decides who compounds advantage and who cleans up the wreckage.

If AI can move money, change records, alter permissions, trigger workflows, touch customer accounts, or operate against live infrastructure, output quality is no longer the final question.

Execution authority is.

Most of the market is still racing to make AI more capable.

The companies that win this decade will be the ones that made AI governable first.

What we build

.INK-HOME is the organization behind dot.ink — the execution authority layer for high-stakes AI.

dot.ink compiles what must be true, enforces what is allowed to happen, and produces proof that the system stayed inside the approved corridor.

Deterministic before action. Provable after action.
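The compile/enforce/prove loop above can be pictured as a minimal execution-boundary check. This is an illustrative sketch only, not the dot.ink API: the `Corridor` and `ActionRequest` names, the policy fields, and the limit values are all hypothetical.

```python
# Hypothetical sketch of an execution boundary: deterministic policy
# evaluation before an action, a hashable proof record after.
import hashlib
import json
from dataclasses import dataclass, asdict

@dataclass(frozen=True)
class ActionRequest:
    actor: str    # which AI agent proposed the action
    action: str   # e.g. "release_payment"
    amount: float

class Corridor:
    """What must be true before an action is allowed to happen."""
    def __init__(self, allowed_actions, max_amount):
        self.allowed_actions = set(allowed_actions)
        self.max_amount = max_amount

    def evaluate(self, request: ActionRequest):
        # Deterministic before action: the same request always
        # produces the same verdict.
        allowed = (request.action in self.allowed_actions
                   and request.amount <= self.max_amount)
        # Provable after action: a canonical record of what was
        # decided, under which policy, with a content hash over it.
        record = {
            "request": asdict(request),
            "allowed": allowed,
            "policy": {"actions": sorted(self.allowed_actions),
                       "max_amount": self.max_amount},
        }
        proof = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()).hexdigest()
        return allowed, record, proof

corridor = Corridor(allowed_actions={"release_payment"}, max_amount=500.0)
ok, record, proof = corridor.evaluate(
    ActionRequest(actor="agent-1", action="release_payment", amount=9000.0))
# The amount exceeds the corridor limit, so the action is blocked
# before it ever reaches a live system.
```

The point of the sketch is the shape, not the policy: the check runs before execution and is deterministic, and the proof record can be re-derived and verified offline by anyone holding the same inputs.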

That is the line between AI that looks impressive and AI a serious company can actually deploy.

Why this matters

The next major AI failures will not look like chatbot mistakes.

They will look like business failures:

  • a payment released that should not have moved
  • a record changed on stale or contradictory state
  • a permission granted that should have been blocked
  • a workflow triggered on false certainty
  • an external action executed that nobody can later justify cleanly

And when that happens, nobody inside the company will care that the model sounded smart.

They will care about the language that survives in finance, legal, compliance, security, and the boardroom:

  • financial loss
  • audit exposure
  • control failure
  • legal heat
  • executive blame
  • trust erosion

Without an execution boundary, AI scale does not just increase upside.

It multiplies exposure.

Our point of view

Monitoring watches.
Policy advises.
Guardrails nudge.
dot.ink governs execution.

We do not believe the real AI bottleneck is raw capability.

We believe the bottleneck is whether an organization can let AI act without surrendering control of money, records, permissions, operations, and proof.

That is why we are not building another dashboard, wrapper, or post-hoc explanation layer.

We are building authority infrastructure.

Who this is for

This is for operators dealing with high-liability AI action, especially where the real blocker is trust at the execution boundary:

  • CFOs
  • compliance and risk leaders
  • technical diligence leads
  • executive owners of high-stakes workflows

If you are letting AI touch payments, refunds, records, permissions, workflows, or external systems without an explicit execution boundary, you do not have AI governance.

You have a system that works right up until the day it matters.

Start here

Public proof surface

  • INK_AI — public doctrine, constitutional schemas, representative proof bundles, and offline verification

What the public surface is for

The current public surface is intentionally narrow.

It exists to expose the proof floor:

  • doctrine
  • control logic
  • representative bundles
  • offline verification
  • category framing

If the claim is real, you should be able to inspect it.
If the proof story is real, you should be able to run it.
If the boundary is real, it should survive skeptical technical scrutiny.

Category claim

When AI crosses into consequential action, authority becomes infrastructure.

That is the layer dot.ink is built to own.
