AI maturity is not a technology problem. It's a culture problem you can measure.


April 29, 2026

Why HR leaders are now on the front line of AI deployment — and how to make visible what technical dashboards miss.


CIOs have rolled out the tools. Boards have approved the budgets. And yet, in most organizations, AI use remains a scattered set of individual habits — opaque, often non-compliant, rarely connected to enterprise value. The question is no longer whether AI is available. It’s whether the organization is culturally ready to absorb it.

That is precisely what the HR function is now best positioned to measure.

AI maturity, in one sentence

AI maturity is an organization’s actual ability to adopt, scale, and secure the use of AI in everyday work — not the volume of its announcements, nor the number of licenses deployed. It rests on three layers: human buy-in, capability and governance, and real usage with measurable impact. An organization can be advanced on one layer and failing on the others. That imbalance is what derails AI initiatives — not a lack of tools.

“Organizations still confuse AI use with AI maturity. Having employees who use ChatGPT is not proof of a ready organization. Sometimes it’s the opposite signal: an unmanaged risk exposure.” — Yohan Ruso, CEO, Praditus

The gap between adoption and value: what the data shows

AI adoption is moving fast. The ability to extract value from it, much less so. The gap is now extensively documented.

| Indicator | Value | Source |
|---|---|---|
| Organizations using AI in at least one business function | 88% | McKinsey, State of AI 2025 |
| Organizations that have begun scaling AI enterprise-wide | ~1 in 3 | McKinsey, State of AI 2025 |
| Organizations reporting any measurable EBIT impact from AI | 39% | McKinsey, State of AI 2025 |
| Organizations that have fundamentally redesigned workflows | 21% | McKinsey, State of AI 2025 |
| "High performers" capturing significant value from AI | 6% | McKinsey, State of AI 2025 |

Source: McKinsey, The State of AI in 2025: Agents, Innovation, and Transformation, November 2025

The verdict is unambiguous: adoption has become near-universal, maturity remains exceptional. Almost every organization uses AI. Only one in three has even begun scaling it. Only one in five has redesigned its processes around it.

The hidden cost: when ungoverned use becomes a risk debt

The second gap is more concerning: the distance between what organizations think they govern and what is actually happening in daily work.

| Indicator | Value | Source |
|---|---|---|
| Employees using AI tools not approved by their employer | 78% | WalkMe (SAP), AI in the Workplace Survey 2025 |
| Employees who have shared sensitive data with AI tools without permission | 38% | IBM Think, What Is Shadow AI |
| Employees who say AI usage guidelines are unclear | ~50% | Programs.com, Shadow AI Statistics |
| Average extra cost of a data breach involving shadow AI | +$670,000 | IBM, Cost of a Data Breach Report (cited in Programs.com) |

Sources: WalkMe / SAP, August 2025; IBM, What Is Shadow AI

Three numbers capture the HR challenge: eight in ten employees use unapproved tools, nearly four in ten have shared sensitive data, and half admit the rules are unclear. This is not a technical problem. It’s a problem of culture, training, and framing.

“An AI policy that no one understands is worth less than a simple policy that everyone has internalized. The organizations that reduce shadow AI exposure are not the ones that ban the most — they’re the ones that train the best.” — Yohan Ruso, CEO, Praditus

Why HR leaders are at the center of this

AI does not just transform an information system: it transforms a system of work. Roles shift toward supervision, validation, judgment. Skill expectations change. Latitude of use becomes a managerial issue. And compliance — personal data, IP, bias — depends on human behavior, not on written rules.

Three patterns appear consistently in organizations struggling to scale: enthusiastic usage without safeguards, written rules without shared understanding, and executive vision without execution support. None of these is a technical problem. All sit squarely within HR's remit.

The three confusions that distort how organizations read their AI maturity

Before any measurement exercise, the most common analytical biases need to be named. Most organizations interpret their AI maturity through a single lens — and reach the wrong conclusions.

Confusing usage with maturity. The fact that your employees use ChatGPT does not mean your organization has mastered AI. Without validation frameworks, critical reflexes on AI outputs, and awareness of approved tools, usage is a weak signal — sometimes a risk factor.

Confusing governance with adoption. An AI charter, an ethics committee, and a privacy policy do not produce maturity unless they are understood and held in daily practice. The distance between the written rule and the actual behavior is the most expensive blind spot.

Confusing strategy with execution. A board-level AI vision is not an adoption trajectory. Without allocated resources, usable data, and managerial accompaniment, the strategy stays declarative.

Reading AI maturity therefore requires a model that combines all three layers. None of them is sufficient on its own.

A 7-pillar framework to measure AI maturity

The AI Culture Readiness Index developed by Praditus rests on seven pillars, designed to cover the three layers without overlap. Each pillar answers an operational question that HR leaders are — or should be — asking.

Pillar 1 — Perception of AI Benefits (buy-in, ownership, empowerment)

Measures the existence of fertile ground for diffusion: presence of informal AI champions, managerial enthusiasm, latitude of use, sense of control. Without this foundation, AI is experienced as a top-down program, with risks of rejection or workaround. What it reveals for HR: organic diffusion capacity, and zones where accompaniment must be reinforced.

Pillar 2 — AI Training (capability, upskilling, safe use)

Assesses whether training simultaneously covers usage effectiveness and security (data, confidentiality, best practices). Training that focuses only on productivity mechanically increases the risk surface. What it reveals: whether the training setup is credible, prioritized, and actually accessible (dedicated time, not just available content).

Pillar 3 — AI Experimentation Culture (learning, innovation, data quality)

Measures the right to test, fail, and share — and crucially, the quality of available data. A frequently underestimated point: the real value of AI is capped by the accessibility and reliability of internal sources. The main bottleneck is not always cultural; sometimes it’s structural.

Pillar 4 — AI Governance (compliance, risk control, decision rules)

Covers data protection, IP, understanding of AI limitations (bias, hallucinations), the ethical framework, and — critically — the reality of shadow AI. This is the pillar that measures the gap between rule and practice. What it reveals: the organization’s actual risk surface and the priority actions to take (charter, privacy/IP training paths, guardrails, controls).

Pillar 5 — AI Leadership & Vision (strategic alignment and execution capacity)

Evaluates resource availability, strategic clarity, and most importantly the visible commitment of senior leadership. Executives who advocate for transformation without engaging in it personally send a counterproductive signal that teams pick up on immediately.

“You cannot ask an organization to transform what its leaders are not willing to embody themselves. This is probably the single most predictive factor of an AI deployment that takes off — or stalls.” — Yohan Ruso, CEO, Praditus

Pillar 6 — AI Change Management (acceptability, role transformation, accompaniment)

Measures the quality of transition support, communication on role impact, recognition and handling of resistance, identification of skill shifts. This is the pillar that best predicts deployment friction.

Pillar 7 — Using AI (usage level, critical thinking, impact, trust)

Verifies maturity in observable practice: usage frequency, sophistication, ability to validate outputs, measured productivity impact, trust in tools. This is the final test. An organization can score well across the first six pillars and reveal blind spots here. It is also the pillar that prevents conclusions about maturity based on intentions or governance alone.

Reading the results: four critical patterns to know

The point of the model is not in any single pillar — it’s in the cross-readings. The table below summarizes the four combinations most frequently encountered in the field, and the actions they call for.

| Observed combination | Diagnosis | Dominant risk | HR priority action |
|---|---|---|---|
| P1 strong + P2 weak | Unsecured enthusiasm | Shadow AI, data leakage | Priority training plan integrating privacy and IP |
| P5 strong + P6 weak | Clear vision, blocked execution | Disengagement, managerial cynicism | Change management, visible field-level leadership |
| P4 strong + P7 weak | Written rules, missing usage | Invisible ROI, disinvestment | Adoption support, value demonstration |
| P3 weak in isolation | Structural data bottleneck | AI value mechanically capped | Data remediation before any human investment |

This grid avoids the classic trap of the aggregate score, which collapses opposing realities and hides actual blockers. An organization can show a reassuring average score while quietly carrying two high-risk patterns.
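To make the aggregate-score trap concrete, the cross-reading logic can be sketched as a small rule set over pillar scores. Everything here is illustrative: the `P1`–`P7` keys, the 0–100 scale, and the strong/weak thresholds are assumptions for the sketch, not the actual scoring of the AI Culture Readiness Index.

```python
# Hypothetical sketch of cross-pillar pattern detection.
# Pillar keys (P1..P7), the 0-100 scale, and the thresholds below are
# illustrative assumptions, not the actual Praditus implementation.

STRONG, WEAK = 70, 40  # illustrative cut-offs for "strong" and "weak" pillars

PATTERNS = [
    # (condition over a score dict, diagnosis from the grid above)
    (lambda s: s["P1"] >= STRONG and s["P2"] <= WEAK,
     "Unsecured enthusiasm: priority training plan (privacy, IP)"),
    (lambda s: s["P5"] >= STRONG and s["P6"] <= WEAK,
     "Clear vision, blocked execution: change management"),
    (lambda s: s["P4"] >= STRONG and s["P7"] <= WEAK,
     "Written rules, missing usage: adoption support"),
    (lambda s: s["P3"] <= WEAK,
     "Structural data bottleneck: data remediation first"),
]

def diagnose(scores: dict[str, int]) -> list[str]:
    """Return every high-risk pattern present in a set of pillar scores."""
    return [diagnosis for condition, diagnosis in PATTERNS if condition(scores)]

# An organization with a reassuring average can still carry two patterns:
scores = {"P1": 85, "P2": 30, "P3": 60, "P4": 75, "P5": 65, "P6": 55, "P7": 35}
average = sum(scores.values()) / len(scores)  # ~58/100: looks "mid-maturity"
flags = diagnose(scores)  # flags both P1/P2 and P4/P7 patterns
```

The point of the sketch is the last three lines: the average collapses a high-buy-in/low-training profile and a high-governance/low-usage profile into one unremarkable number, while the rule-based reading surfaces both.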

What this kind of measurement changes for HR leaders

Measuring AI maturity through a cultural framework, rather than through technical KPIs, changes three things in HR practice:

It turns a vague topic into an action plan. Recommendations stop being generic. They become targeted by pillar, by population, by site.

It gives HR a shared language with IT and the executive team. HR no longer arrives with intuitions, but with comparable data, in a structured framework.

It enables tracking trajectory over time. AI maturity is not a state; it's a movement. An index repeated over 12 or 24 months makes visible what one-off surveys miss.

The next useful move

If a single action were to be taken, it would be this: measure, on a pilot scope, the actual AI maturity of the organization with a structured model. Not a satisfaction survey. Not a technical audit. A cultural, comparable, actionable measurement.

The AI Culture Readiness Index by Praditus is built for this use case — fast deployment on a target perimeter, pillar-level reporting, recommendations differentiated by population. Asking for a demonstration on a specific case is generally the most efficient way to assess whether the framework fits your context.


FAQ — AI maturity in organizations

What is AI maturity in an organization?

AI maturity is an organization’s actual ability to adopt, scale, and secure the use of artificial intelligence in everyday work. It is not measured by the number of tools deployed or the volume of usage, but by the alignment of three layers: human buy-in (employee ownership), capability and framework (skills, governance, leadership), and effective usage with measurable impact. According to McKinsey’s State of AI 2025, 88% of organizations use AI but only 6% capture significant value from it — the gap lies precisely in maturity, not in technology.

How do you actually measure the AI maturity of a company?

AI maturity is measured using a structured index that evaluates several complementary dimensions. The AI Culture Readiness Index by Praditus uses a 7-pillar framework: perception of AI benefits, AI training, experimentation culture, governance, leadership and vision, change management, and real usage. Each pillar is measured through a psychometric questionnaire deployed across employees, then reported by population and site. Reading is always cross-pillar, never an aggregate score, in order to avoid false positives — for example, an organization where usage is high but governance is weak.

What is the difference between AI adoption and AI maturity?

AI adoption measures the percentage of employees who use at least one AI tool. AI maturity measures the organization’s ability to extract value from AI in a safe, compliant, and sustainable way. The distinction is critical: WalkMe’s 2025 survey indicates that 78% of employees use AI tools not approved by their employer, and 38% have shared sensitive data without authorization (IBM source). An organization can therefore display massive adoption while showing low maturity — this is exactly the configuration that produces shadow AI and exposes the company to data leakage incidents.