Written by: Amit Mohod
May 8, 2026
16 min read

Your Organisation Is AI-First on Paper. But Are Your People AI-Fluent in Practice?

Picture a Monday morning leadership meeting. The deck on screen shows the company is AI-first. The licenses are paid for. The integrations are live. Every external signal says the transition is done.

Then someone asks: how many of our people are actually using these tools in their daily work and using them well?

The answer, in most organisations right now, is uncomfortable.

McKinsey's 2025 research found that 76% of employees report using AI at work, up from 30% in 2023. But reporting usage and being genuinely fluent in AI are not the same thing. The employee who opens an AI tool, types a vague question, gets a mediocre output, and closes the tab still counts in the adoption statistics. The employee who constructs a precise instruction, evaluates the output critically, catches the hallucinated fact, and iterates until the result is production-ready is doing something categorically different.

One is using AI. The other is fluent in it.

This gap, between access and fluency, between passive use and skilled application, affects every layer of the organisation. It shows up in hiring, where companies select candidates based on declared enthusiasm rather than demonstrated capability. It shows up in L&D, where training investments land on teams that cannot yet apply what they have learned. And it shows up in performance management, where high AI fluency goes unrewarded because no one has defined what it looks like.

The problem is not tool access. The problem is that most organisations have no reliable way to measure AI fluency, at the point of hire, within existing teams, or across the workforce as a whole.

The Three Ways Organisations Are Getting This Wrong

Whether the goal is screening new candidates or developing existing employees, most organisations default to one of three approaches, and all three are falling short.

Self-assessment surveys are fast and cheap, but self-reported skill levels correlate poorly with actual performance. Whether it is a candidate rating themselves before an interview or an employee completing an annual skills audit, you end up measuring confidence, not capability.

Declarative questions such as "tell me how you use AI" in an interview, or "describe your AI skills" in a development review, capture what people claim to have done, not what they can actually do. Without observable evidence, there is no way to separate real skill from a rehearsed answer or an optimistic self-perception.

Unstructured task assignments feel like progress but produce inconsistent results. Without a rubric, evaluators default to subjective judgment, and that is where unconscious bias enters. Research shows unstructured evaluation methods have half the predictive validity of structured, rubric-scored assessments (Sackett et al., 2022).

What Good Evaluation Actually Looks Like

Effective AI fluency evaluation, whether for a new hire or a team of 500 existing employees, is built on one foundational principle: test demonstrated skill, not declared knowledge.

This is not a new idea. The best organisations already apply it to writing, coding, and analytical thinking through work samples and structured tasks, not conversations. AI fluency deserves the same rigour.

A well-designed evaluation covers three skill clusters:

Instruction Quality - can the person communicate clearly and precisely with an AI model? Clarity, structure, context.

Output Engineering - can they direct and shape what the AI produces? Format, constraints, examples.

Responsible AI Use - do they exercise ethical judgment? Avoiding bias, protecting data, defining fallback behaviour when the model gets it wrong.

The third cluster is the most under-discussed. An employee who writes weak prompts costs you time. An employee who fails to catch a biased or hallucinated output and acts on it anyway costs you far more, whether they are a new hire or a ten-year veteran.

Beyond what the evaluation measures, how it scores matters equally. Every dimension needs a pre-defined rubric, not because rubrics are bureaucratic, but because they are the only way to produce scores that are consistent, comparable, and defensible across candidates, teams, and geographies.
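To make that concrete, here is a minimal sketch, in Python and with hypothetical dimension names and anchor wording, of how a pre-defined rubric might be represented: each dimension belongs to one of the three skill clusters, each score level is tied to an observable behavioural anchor, and the anchor travels with the recorded score so results stay comparable across evaluators.

```python
from dataclasses import dataclass

@dataclass
class RubricLevel:
    score: int
    anchor: str  # the observable behaviour an evaluator must see to award this score

@dataclass
class Dimension:
    name: str
    cluster: str  # one of: Instruction Quality, Output Engineering, Responsible AI Use
    levels: list[RubricLevel]

# Hypothetical dimension with behavioural anchors defined in advance.
PROMPT_CLARITY = Dimension(
    name="Prompt clarity and context",
    cluster="Instruction Quality",
    levels=[
        RubricLevel(1, "Vague one-line prompt; no context, no success criteria"),
        RubricLevel(3, "Clear task and context, but output format left to the model"),
        RubricLevel(5, "Precise task, relevant context, explicit format and constraints"),
    ],
)

def record_score(dimension: Dimension, observed_level: int) -> RubricLevel:
    """Attach the behavioural anchor to the recorded score for this dimension."""
    for level in dimension.levels:
        if level.score == observed_level:
            return level
    raise ValueError(f"No rubric anchor defined for score {observed_level} on {dimension.name}")

# Two evaluators scoring the same submission reference the same anchors,
# which is what keeps scores consistent across candidates, teams, and geographies.
result = record_score(PROMPT_CLARITY, 3)
print(f"{PROMPT_CLARITY.name}: {result.score} - {result.anchor}")
```

The point of the structure is that the behavioural anchor, not the evaluator's impression, is what justifies the number recorded.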

HR Owns This Problem End to End

This is not just a hiring problem or just an L&D problem. It is an organisational capability problem, and HR sits at the centre of it.

The JD that does not define AI fluency requirements is selecting for the past. The onboarding programme that does not assess AI fluency in week one is letting new joiners inherit old norms. The performance review that does not measure AI application is telling high performers that fluency is optional. The L&D budget spent on AI training without a baseline measurement has no way to prove ROI.

Measuring AI fluency, at the point of hire, at onboarding, at the annual review, and at every internal mobility decision, is the mechanism by which HR keeps the organisation's capability ahead of its ambition.

See how iMocha evaluates AI fluency across hiring and workforce development: request a demo

References:
McKinsey & Company (2025)
PwC, Global AI Jobs Barometer (2025)
Deloitte, State of AI in the Enterprise (2026)
LinkedIn, Economic Graph Workforce Report (2025)
Sackett et al. (2022), Journal of Applied Psychology
