04
AI Maturity

AIMA™

Ayati AI Maturity Assessment — from AI adoption to decision sovereignty.

Measure AI maturity as decision-system readiness — not tool adoption. Most organisations can build dashboards and models. Far fewer can produce repeatable, governed, audit-ready decisions. AIMA™ evaluates five pillars: intent, determinism, intelligence, governance, and last-mile integration.

Deterministic · Governance-first · Board-ready output · Offline Mode-B tool
Get the Toolkit → See How It Works
50
Assessment questions
5
Dimensions evaluated
90d
Roadmap output
L1–L5
Maturity ladder
The Five Dimensions

What AIMA™ evaluates.

01
SIA — Strategic Intent Alignment
Intent Charter, KPI ownership, scope definition, and board alignment. Evaluates whether AI initiatives are anchored to decision outcomes — not tool deployment.
Weight: 20% · Intent Charter
02
DDI — Data Determinism & Infrastructure
Versioning, lineage tracking, QC pipelines, and reproducibility. The anchor dimension for the Risk Exposure Index (REI™) — where audits and trust fail first.
Weight: 25% · REI™ anchor
03
IMC — Intelligence & Modelling Capability
Validation rigour, drift detection, explainability, and model registry. Triggers the Drift Vulnerability Index (DVI™) when intelligence outpaces governance.
Weight: 20% · DVI™ trigger
04
GRC — Governance & Risk Containment
Definition registry, risk tiers, audit trail, and change control. Anchors REI™ alongside DDI. The accountability backbone of the entire assessment.
Weight: 20% · GRC backbone
05
DLI — Decision Loop Integration
Action workflows, SLAs, feedback loops, and process integration. Measures whether intelligence actually reaches operational decisions. Anchors the DFI™.
Weight: 15% · Last-mile
Why AIMA™ Exists

The gap between "insight" and "decision" is where most AI programs fail.

Common failure patterns
KPI drift

Definitions evolve informally

Dashboards decay in 12–18 months. Metrics that once meant one thing quietly start answering different questions.

Trust collapse

No registries or owners

Models ship without governance. Drift goes undetected until a stakeholder challenges a number in a meeting.

Decision friction

No action loop

"Analytics theatre" replaces operational learning. Insights are produced. Nothing changes downstream.

Audit friction

Insight without receipts

Reviewers can't follow the decision chain. Trust breaks at the review table — not at the model.

AIMA™ reframes maturity

AI maturity is not "more models." It is less ambiguity in definitions, ownership, and decision loops.

01
Decision charters

Define what AI can decide — and cannot. Removes scope ambiguity.

02
Deterministic data pipelines

Same inputs → same outputs, always.

03
Governance receipts

Inputs → rules → outputs → outcomes. Every decision traceable.

04
Closed-loop integration

Output → action → outcome → learning.
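Determinism (item 02) is directly testable: fingerprint each run from its inputs and rule version, and identical runs must reproduce identical fingerprints. A minimal sketch in Python; `run_id` and its field names are illustrative assumptions, not part of the AIMA™ toolkit:

```python
import hashlib
import json

def run_id(inputs: dict, rule_version: str) -> str:
    """Fingerprint one decision run (hypothetical helper): hashing the
    canonicalised inputs plus the rule version makes determinism testable."""
    payload = json.dumps({"inputs": inputs, "rule_version": rule_version},
                         sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()[:12]

# Same inputs -> same outputs, always: re-running reproduces the fingerprint.
a = run_id({"credit_score": 712, "dti": 0.31}, "rules-v3.2")
b = run_id({"credit_score": 712, "dti": 0.31}, "rules-v3.2")
assert a == b
# Any change to inputs or rule version is visible as a new fingerprint.
assert run_id({"credit_score": 712, "dti": 0.31}, "rules-v3.3") != a
```

Storing the fingerprint alongside each output gives auditors a cheap reproducibility check without re-running the pipeline.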

The AIMA™ Model

Five dimensions · deterministic scoring · L1–L5 maturity ladder

SIA
Strategic Intent Alignment
Intent Charter · KPI ownership · Scope definition · Board alignment
Weight: 20%
DDI
Data Determinism & Infrastructure
Versioning · Lineage tracking · QC pipelines · Reproducibility
Weight: 25% — anchors REI™
IMC
Intelligence & Modelling Capability
Validation rigour · Drift detection · Explainability · Model registry
Weight: 20% — triggers DVI™
GRC
Governance & Risk Containment
Definition registry · Risk tiers · Audit trail · Change control
Weight: 20% — anchors REI™
DLI
Decision Loop Integration
Action workflows · SLAs assigned · Feedback loops · Playbooks
Weight: 15% — anchors DFI™

AIMA™ evaluates the capability to produce decision receipts

SIA
Intent · KPIs · Ownership
What can AI decide? · Who owns the output?
DDI
Determinism · Lineage · QC
Reproducible inputs · Traceable pipeline
IMC
Intelligence · Drift · Explain
Model defensibility · Explainability layer
GRC
Registry · Risk tiers · Audit
Receipts per decision · Audit trail
DLI
Workflows · SLAs · Feedback
Insight → action · Outcome → learning

inputs → assumptions → rules / models → outputs → actions → outcomes
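The chain above can be captured as one "receipt" record per decision. A minimal sketch in Python; the `DecisionReceipt` class and its field names are illustrative assumptions, not an official AIMA™ schema:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DecisionReceipt:
    """One traceable record of the chain: inputs -> assumptions ->
    rules/models -> outputs -> actions -> outcomes."""
    inputs: dict
    assumptions: list
    rule_or_model: str        # versioned reference, e.g. "churn-model@v4.1"
    output: dict
    action: str
    outcome: str = "pending"  # filled in when the feedback loop closes
    decided_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

receipt = DecisionReceipt(
    inputs={"customer_id": "C-104", "days_inactive": 42},
    assumptions=["activity feed complete through yesterday"],
    rule_or_model="churn-model@v4.1",
    output={"churn_risk": 0.81},
    action="retention_offer_sent",
)
receipt.outcome = "customer_retained"  # closing the loop
```

Because every link in the chain is a field, a reviewer can walk any single decision end to end without reconstructing context from chat threads or notebooks.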

Risk & Fragility Indices

Quantify systemic exposure and common failure patterns

REI™ — Risk Exposure Index

Estimates fragility when foundations are weak.

Anchored on DDI and GRC because that's where audits and trust fail first. High REI signals that reproducibility and governance are insufficient for safe scaling.

Formula
REI = 100 − (DDI + GRC) / 2
High REI → escalate DDI & GRC remediation first
DVI™ — Drift Vulnerability Index

Flags drift risk when intelligence outpaces governance.

When model capability is high but governance is weak, definition drift and model drift compound silently.

Trigger condition
IMC > 70 AND GRC < 50 → drift risk
DFI™ — Decision Friction Index

Measures last-mile resistance — the insight-to-action gap.

When DLI is low, analytics produces insight that never reaches decision-ready action. The last mile remains broken.

Formula
DFI = 100 − DLI
High DFI → action infrastructure missing despite strong modelling.
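The three indices reduce to one-line functions over the dimension scores. A sketch using the formulas and trigger condition stated above, checked against the bank case study later on this page (DDI 42, GRC 38, IMC 82, DLI 55):

```python
def rei(ddi: float, grc: float) -> float:
    """Risk Exposure Index: REI = 100 - (DDI + GRC) / 2."""
    return 100 - (ddi + grc) / 2

def dvi_triggered(imc: float, grc: float) -> bool:
    """Drift Vulnerability Index trigger: IMC > 70 AND GRC < 50."""
    return imc > 70 and grc < 50

def dfi(dli: float) -> float:
    """Decision Friction Index: DFI = 100 - DLI."""
    return 100 - dli

# Bank case-study scores: DDI 42, GRC 38, IMC 82, DLI 55.
assert rei(42, 38) == 60        # high -> remediate DDI & GRC first
assert dvi_triggered(82, 38)    # intelligence outpacing governance
assert dfi(55) == 45            # moderate last-mile friction
```

Note that REI and DVI depend only on the foundation dimensions (DDI, GRC) relative to capability; a strong modelling team cannot lower either index on its own.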
How It Works

50 questions · 0–4 scale · deterministic scoring · roadmap generator

01
Answer 50 questions
5 dimensions, 0–4 scale, 30–45 minutes. Fully deterministic scoring — no AI model involved at any stage.
02
Receive AIMA™ Score
0–100 composite with L1–L5 level mapping, REI™ Risk Exposure, DVI™ Drift Vulnerability, and DFI™ Decision Friction.
03
Review Dimension Gaps
Specific gap identification — where maturity is lowest, what it costs, what to fix first. Priority = REI × (100 − DimensionScore).
04
Get 90-Day Roadmap
Actions sequenced by REI × gap. Board-ready outputs with specific initiatives, owners, and success criteria.
AIMM-DX™ — Scoring Scale
Score | Level | Meaning
0 | Absent | No evidence of this capability
1 | Ad-hoc | Happens informally, person-dependent
2 | Defined | Documented, not consistently followed
3 | Structured | Systematic, backed by receipts
4 | Optimised | Automated, continuously improved
Scoring mechanics
Each dimension: 10 questions, max raw score 40
Dimension score: (Sum / 40) × 100
Overall: weighted sum (SIA 20, DDI 25, IMC 20, GRC 20, DLI 15)
Score ≥ 3 requires evidence: registry, approvals, audit logs
Priority = REI × (100 − DimensionScore)
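The scoring mechanics above translate directly into code. A sketch in Python; the answer values and dimension scores in the example are illustrative, not from a real assessment:

```python
WEIGHTS = {"SIA": 20, "DDI": 25, "IMC": 20, "GRC": 20, "DLI": 15}

def dimension_score(answers: list) -> float:
    """10 questions per dimension, each 0-4: score = (sum / 40) * 100."""
    if len(answers) != 10 or not all(0 <= a <= 4 for a in answers):
        raise ValueError("expected 10 answers on the 0-4 scale")
    return sum(answers) / 40 * 100

def overall_score(dims: dict) -> float:
    """Weighted composite; the weights total 100, so divide by 100."""
    return sum(dims[d] * w for d, w in WEIGHTS.items()) / 100

def priority(rei: float, dim_score: float) -> float:
    """Remediation priority = REI * (100 - DimensionScore)."""
    return rei * (100 - dim_score)

# Illustrative answers for one dimension (raw sum 28 -> 70.0):
assert dimension_score([3, 3, 2, 4, 3, 2, 3, 3, 2, 3]) == 70.0
dims = {"SIA": 70, "DDI": 40, "IMC": 80, "GRC": 35, "DLI": 55}
assert overall_score(dims) == 55.25
```

Because priority multiplies REI by the gap, the weakest dimensions in a high-risk organisation dominate the roadmap, which is exactly the sequencing the 90-day engine produces.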
Roadmap Engine — 90-day output
<30
Foundation

Establish ownership

Ownership, registry, deterministic pipelines. Fix what must exist before anything else can work.

30–60
Structure

Build control

Monitoring, enforcement, change control. Make the system self-correcting before scaling.

60–80
Integration

Close the loop

Automation, workflow embedding, reduced friction. Connect insight to action reliably.

>80
Sovereignty

Scale with confidence

Scenario intelligence, orchestration, governance-as-code. Decision systems that learn.

Who Should Use AIMA™

Built for the people accountable for AI outcomes.

Executives & Boards

Investment clarity and risk visibility

Understand where maturity is blocking value creation
See audit and drift exposure before reviewers do
Get a 90-day plan to operationalise AI safely
Board-ready output — not technical jargon
Analytics & AI Leadership

Shared language and closed loops

Shared language across data, ML, ops, and compliance
Governance standards and definition registries
Closed-loop practices: feedback → learning → improvement
Sector-adjusted weights for regulated environments
Case Studies

AIMA™ in practice — before & after across five sectors.

Context → Symptoms → AIMA™ snapshot → Indices → Root causes → 90-day plan → Outcomes.

DVI Triggered

Large Private Bank — Model Factory

Tier-1 private bank (lending, cards, wealth). 120+ ML models in production. Strong data science team; fragmented governance.

Pre-AIMA™ symptoms
  • Frequent drift complaints; silent degradation in model score performance
  • Audit queries increasing; evidence packs inconsistent across teams
  • Same KPI (NPA, roll rate) reported differently by different teams
  • Risk committee distrust: "Show me the receipt."
Root cause pattern
  • No unified model registry with owner, version, approval, retirement
  • Feature changes without version control; lineage incomplete
  • Monitoring existed, but enforcement was optional — no gating
  • Audit evidence produced manually; high variance across teams
90-day plan
0–30: Registry baseline + KPI definition charter + ownership mapping
30–60: Drift thresholds + risk tiering + approval workflows
60–90: Audit receipts + release gates + risk committee cadence
Outcomes
  • Audit queries reduced via standardised evidence packs
  • Approval cycle time cut via workflow gating
  • Budget shifted from "more models" to "decision automation"
AIMA™ Snapshot
Dim | Score | Signal
SIA | 68 | Strategy exists; charters uneven
DDI | 42 | Weak version discipline
IMC | 82 | Strong modelling velocity
GRC | 38 | Registry, audit gaps
DLI | 55 | Partial integration
REI™ | 60 | High
DVI™ | ON | Drift risk
DFI™ | 45 | Moderate
Post-implementation
DDI: 42 → 71
GRC: 38 → 73
Overall: 57 → 74
REI™: 60 → 28
High DFI — Workflow Gap

Healthcare Network — 12-Hospital Chain

12-hospital private chain with 35+ dashboards. AI pilot: readmission risk + LOS optimisation. Low clinician trust; ambiguous ownership.

Pre-AIMA™ symptoms
  • Doctors distrust alerts due to medico-legal ambiguity
  • Metrics inconsistent across Quality, Finance, and Clinical Ops
  • Predictions produced but no standard playbook or SLA
  • AI seen as "extra screen" instead of workflow support
Root cause pattern
  • No clinical decision charter: who acts, mandated vs advisory
  • Alert surfaces not integrated into HIS/EMR systems
  • No standardised playbooks or escalation paths
  • Governance gaps for privacy/consent and medico-legal framing
90-day plan
0–30: Unify definitions (readmission/LOS); CMO-led model review board
30–60: Integrate alerts into HIS; playbooks + SLAs; escalation paths
60–90: Audit receipts; outcome tracking; clinician feedback loop
Outcomes
  • Higher intervention compliance after HIS embedding
  • Reduced medico-legal ambiguity through ownership + receipts
  • AI moved from "pilot curiosity" to "clinical operations asset"
AIMA™ Snapshot
Dim | Score | Signal
SIA | 51 | Goals stated; ownership unclear
DDI | 46 | Data exists; lineage partial
IMC | 58 | Pilots and prototypes
GRC | 33 | Governance posture weak
DLI | 28 | Weak embedding into care pathways
REI™ | 60.5 | High
DFI™ | 72 | Severe
DVI™ | OFF | Not triggered
Post-implementation
DLI: 28 → 66
GRC: 33 → 62
Overall: 43 → 64
DFI™: 72 → 34
Low DDI — High REI

Manufacturing Conglomerate — 18 Factories

18 factories with IoT sensors and predictive maintenance models. Multiple vendors; inconsistent standards across plants.

Pre-AIMA™ symptoms
  • Models not reproducible across plants — "works here, fails there"
  • Calibration differences; timestamp drift; inconsistent missing-value rules
  • Maintenance teams ignored alerts due to false positives
Root cause pattern
  • Non-standard sensor schemas and unit mismatches across plants
  • Data quality rules varied by vendor and site
  • Training datasets not versioned; features changed without traceability
  • No confidence bands or evidence surface for maintenance teams
90-day plan
0–30: Sensor data contract; baseline QA; calibrate critical sensors
30–60: Deterministic ETL + lineage; comparability checks across plants
60–90: Versioned datasets; alert confidence bands; feedback loop
Outcomes
  • Unplanned downtime reduced after determinism uplift
  • Trust improved with calibrated thresholds + explainability
  • Scaled rollout without plant-specific rework
AIMA™ Snapshot
Dim | Score | Signal
SIA | 72 | Clear business case
DDI | 29 | Severe reproducibility issues
IMC | 61 | Modelling hindered by data
GRC | 40 | Controls exist; not enforced
DLI | 48 | Partial; no confidence layer
REI™ | 65.5 | High
DFI™ | 52 | Moderate
DVI™ | OFF | Not triggered
Post-implementation
DDI: 29 → 76
Overall: 50 → 69
REI™: 66 → 22
DVI Triggered — Governance Lag

Digital Retail Platform — E-commerce & Pricing

E-commerce platform with personalisation and dynamic pricing. Real-time experimentation; weekly model and feature updates.

Pre-AIMA™ symptoms
  • Pricing inconsistencies across channels; customer trust impact
  • Complaints and regulator attention around fairness
  • Shadow changes deployed without clear approvals
Root cause pattern
  • No governance-as-code for high-risk pricing changes
  • Explainability and fairness constraints not standardised
  • Experimentation velocity created policy drift over time
  • Limited audit trail tying changes to approvals and outcomes
90-day plan
0–30: Pricing governance charter; risk tiers; must-approve changes
30–60: Release gating; fairness constraints; explainability layer
60–90: Receipts + monitoring; align governance with growth cadence
Outcomes
  • Lower complaint volume via standardised fairness controls
  • Improved pricing trust; fewer channel inconsistency incidents
  • Experimentation continued — now with a governance safety rail
AIMA™ Snapshot
Dim | Score | Signal
SIA | 77 | Strong growth alignment
DDI | 52 | Some determinism; uneven
IMC | 88 | Very strong experimentation
GRC | 31 | Governance lagging significantly
DLI | 74 | Deep product integration
REI™ | 58.5 | Elevated
DVI™ | ON | Drift risk
DFI™ | 26 | Low
Post-implementation
GRC: 31 → 64
Overall: 64 → 74
REI™: 59 → 42
DVI™: ON → OFF
Low REI · Low IMC

Public Sector Department — Compliance-First

Strong documentation and process controls; compliance-first posture. Analytics limited to descriptive reporting; minimal modelling capability.

Pre-AIMA™ symptoms
  • Slow decision cycles; manual review dominates every step
  • Teams avoided ML due to perceived risk and capability gaps
  • Strong governance present, but intelligence layer underdeveloped
Root cause pattern
  • No safe sandbox to build IMC under existing strong controls
  • Training gaps: validation, monitoring, interpretation skills
  • Limited delivery capacity to embed models into workflow systems
90-day plan
0–30: Controlled AI pilot lab; choose low-risk use cases first
30–60: Validation playbooks; training programme; registry basics
60–90: Expand to moderate complexity; embed with receipts
Outcomes
  • Improved decision speed without sacrificing audit posture
  • Repeatable safe pathway to introduce AI capability
  • Narrative shift: "AI is governable."
AIMA™ Snapshot
Dim | Score | Signal
SIA | 74 | Clear mandate and objectives
DDI | 69 | Strong data controls
IMC | 24 | Low modelling capability
GRC | 82 | Strong governance posture
DLI | 44 | Partial last-mile capability
REI™ | 24.5 | Low
DFI™ | 56 | Moderate
DVI™ | OFF | Not triggered
Post-implementation
IMC: 24 → 58
Overall: 58 → 66
Cycle time: Slow → −32%
FAQ

Short answers for fast stakeholder alignment.

Do we need ML to score well?
No. AIMA™ rewards decision reliability. Strong governance + determinism + integration can outperform model-heavy programs that are fragile. A well-governed rules-based system can outscore a complex ML deployment with poor traceability.
Why are DDI and GRC so central?
Because that's where most high-cost failures occur: semantic drift, audit gaps, silent changes, and trust collapse. DDI and GRC anchor both REI™ and DVI™ — they are the failure modes that cause the most damage when weak.
Can we adjust weights by sector?
Yes. Regulated domains (healthcare, BFSI, manufacturing) often increase DDI and GRC weights. The offline assessment tool enforces that weights must total 100%. Sector-calibrated variants are available through Ayati engagement.
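The weight constraint is simple to enforce in code. A sketch of the 100% check; the sector-variant values shown are illustrative assumptions, not Ayati's published calibrations:

```python
def validate_weights(weights: dict) -> None:
    """Sector-adjusted weights are allowed, but must still total 100%."""
    total = sum(weights.values())
    if abs(total - 100) > 1e-9:
        raise ValueError(f"weights total {total}, expected 100")

# Default AIMA(tm) weights pass:
validate_weights({"SIA": 20, "DDI": 25, "IMC": 20, "GRC": 20, "DLI": 15})
# An illustrative regulated-sector variant shifting weight to DDI and GRC:
validate_weights({"SIA": 15, "DDI": 30, "IMC": 15, "GRC": 25, "DLI": 15})
```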
Is the assessment "AI-powered"?
No. Scoring and narrative are deterministic by design. This prevents opinion drift and supports auditability. An AI-generated maturity score would undermine the very principle the framework is trying to measure.
How long does the assessment take?
30–45 minutes for the 50-question diagnostic using the offline tool. The full executive workshop engagement (evidence-backed scoring and registry build) typically runs across 2–3 working sessions.
Get the AIMA™ Toolkit

Whitepaper + offline assessment engine.

Download the whitepaper or open the offline tool — no data leaves your environment. Audit-ready by design.

Download Whitepaper → Open Offline Tool
Want an enterprise rollout?
Executive workshop + evidence-backed scoring
Registry build: definitions, model inventory, approvals
Decision-loop playbooks + SLAs + audit receipts
Sector variants: healthcare, BFSI, manufacturing
Contact Ayati →