AI doesn’t create the gaps in your organisation.
It finds them, and it amplifies whatever it finds.
The gap between what AI promised and what it’s delivered almost never lives in the rollout. It lives in the foundations underneath. Two diagnostic tools to help you see the whole picture clearly, before you invest further in the wrong place.
5 ADOPTION POSITIONS IN EVERY ORGANISATION
5 FOUNDATIONAL LAYERS THE TOOLS DIAGNOSE
2 FREE DIAGNOSTIC TOOLS BELOW
The question most organisations ask when AI doesn’t deliver is the wrong one.
They look at the rollout, the tools, the training, and when none of those explain the gap between what was promised and what’s actually changed, they keep looking in the same place, because that’s where the investment went, and that’s where the pressure to show results is highest.
But the gap almost never lives there. It lives in the foundations. In whether the experience you’re trying to create for your customers has ever been clearly named. In whether your operating model gives people enough clarity to actually deliver it. In whether your strategy translates into what happens in a team meeting on a Tuesday morning, or stays in the deck.
AI doesn’t create these gaps. It finds them. And it amplifies whatever it finds.
The two tools on this page are designed to help you see that picture honestly, the whole picture, not just the adoption layer. Together they give you a clear view of where the real work is, before you invest further in the wrong place.
WHY THIS MATTERS
THE FIVE LAYERS · THREE6 FRAMEWORK
1 · Experience
What are we creating for the people we serve? Where does AI belong?
2 · Strategy translation
Does the experience connect clearly to purpose and strategic direction?
3 · Operating model
Do governance, decision rights, and roles give people the clarity to deliver?
4 · Process + capability
Are processes documented and clear, and are people equipped to follow them alongside AI?
5 · Adoption
Is adoption designed for where people actually are, not where we wish they were?
From the experience you’re designing to the people who’ll deliver it
Start with the foundations. Then map your people.
DIAGNOSTIC TOOLS
Use them in order. Tool 1 works through the organisational foundations: experience, strategy, operating model, process, and adoption design. Tool 2 maps where your people are actually sitting right now and what each of them needs. Together they give you the complete picture before you design anything.
01
Five questions, one per layer of the Three6 framework, that surface whether your foundations are creating the conditions for AI to work. The assessment works through experience design, strategy translation, operating model clarity, process documentation, and adoption design. Your score shows not just whether you’re ready, but where the friction actually lives.
Experience · Strategy · Operating model · Process · Adoption
02
Maps the five positions people hold when AI arrives in an organisation: the Truster, the Resister, the Oblivious, the Performer, and the Challenger. Before you design any adoption program, you need to know where your people are actually starting from, not where you wish they were.
Use this after Tool 1 to design an adoption approach that meets each position where it is.
① Truster ② Resister ③ Oblivious ④ Performer ⑤ Challenger
Your foundations determine which positions emerge.
THE FIVE POSITIONS
When the foundations underneath AI adoption are unclear, when the operating model is fuzzy, strategy doesn’t translate, and processes live in people’s heads, these positions aren’t personality types. They’re structural responses. Fix the foundations and the positions shift. Ignore them and the Performers and Resisters multiply.
①
The Truster
Accepts without questioning
What they need:
Critical thinking frameworks. Practice interrogating AI output. Understanding where it goes wrong.
②
The Resister
Disengages rather than engages
What they need:
Safety and transparency. Space to experiment without consequence. Peer demonstrations, not top-down training.
③
The Oblivious
Not yet in the conversation
What they need:
A personal connection. Not the strategic case, the daily work case. What would be easier for them specifically?
④
The Performer
Looks adopted. Isn’t.
What they need:
Psychological safety. A conversation where it’s safe to say “I haven’t actually been using it” without consequence.
⑤
The Challenger
Where everyone needs to get to
What they need:
Permission to go further. A role in helping others. The latitude to experiment properly.
Go deeper on the thinking.
FURTHER READING
Two blogs that expand on what the tools surface: why AI adoption keeps failing even when organisations are genuinely trying, and the experience design question that most skip entirely but that determines whether everything else works.
BLOG · AI ADOPTION · CHANGE MANAGEMENT
Your AI rollout isn’t failing because of the technology.
The gap almost never lives in the rollout. Five genuinely different positions, a hospital story that captures the practical answer, and why designing for the Challengers while hoping everyone else follows is the mistake almost every organisation makes.
BLOG · SERVICE DESIGN · OPERATING MODEL
The experience question organisations keep skipping.
The question that almost never gets asked early enough — and that determines whether everything downstream works. What experience are you actually trying to create? For your customers, for your people. And where does AI genuinely serve that, and where does it get in the way?
If you can see the gap, let’s talk about where to start
CONTINUE THE CONVERSATION
The tools surface the picture. The conversation is where you work out what to do about it. A genuine conversation about what the diagnosis is showing you and where the real work is.
Is your organisation ready to make AI work?
AI amplifies whatever foundation you already have. This assessment works through the five layers that determine whether AI adoption succeeds, from the experience you're designing to the people who'll deliver it. One question per layer. Your score shows not just whether you're ready, but which positions your organisation is currently creating.
Rate each question from 1 (not at all) to 5 (clearly and consistently). Each question maps to one of the five layers, and shows which of the five adoption positions that layer is creating or blocking right now. Work through the layers in order: each one builds on the one below it.
Do we know what experience we're designing for, and where AI genuinely serves it versus where human connection matters more?
This is the question most organisations skip entirely. Before designing any AI adoption program, you need clarity on what great looks like for your customers and your people, and which moments AI can enhance versus which moments it would undermine.
AI decisions are driven by efficiency targets rather than experience design. Nobody can clearly articulate what great looks like for the people you serve. This leaves the Oblivious with no personal reason to engage, and blocks Challengers from emerging, because there's nothing meaningful to challenge toward.
Leaders can describe the end-to-end customer experience and name specifically where AI enhances it. There's a shared understanding of which moments require human judgment, empathy, or presence, and AI isn't deployed there.
Can your people draw a clear line from the organisation's purpose and strategy to what they do every day, and to how AI fits into it?
Strategy that lives in a document and doesn't translate into daily practice creates a fundamental disconnection. People can't adopt AI meaningfully if they can't see how it connects to what they're actually trying to achieve.
When strategy doesn't translate, people perform adoption (Performer) because they can't see the point, or disengage entirely (Oblivious) because AI feels irrelevant to their actual work. The translation gap is where most AI investments quietly fail.
Frontline staff can describe how their daily work connects to organisational goals. When AI tools are introduced, people can see how they're meant to support what the organisation is trying to do, not just what they're supposed to do faster.
Are roles, decision rights, and governance clear enough that people know what they own, and what they're empowered to decide with or without AI?
When accountability is blurred, AI accelerates activity without improving outcomes. When decision rights are unclear, people default to two extremes: trusting AI blindly to avoid responsibility, or resisting it entirely to protect their autonomy.
Unclear operating models push people to the Truster and Resister positions, not because of personality, but because of structure. These are operating model problems wearing people-problem clothes. Fix the model first.
People know what they can decide without escalating. AI output informs decisions rather than replacing human judgment. There's confidence in acting on what the technology produces, and clarity about when not to.
Are your processes documented and do your people have what they need to follow them, including the skills and confidence to work alongside AI?
AI can support consistent delivery, but only when the process it's supporting is defined. Undocumented processes don't get improved by AI; they get accelerated into inconsistency. Capability gaps don't disappear when you introduce a tool either; they become visible.
When processes are unclear, people resist tools that expose the gaps (Resister), tune out entirely (Oblivious), or go through the motions without real change (Performer). Document the work before you try to improve it with technology.
New team members can follow documented processes. Consistency is the norm. People have had enough exposure to AI tools that they know what they're useful for and where to be sceptical. There's something clear for AI to support and accelerate.
Have you designed your adoption approach for where your people actually are, or for where you wish they were?
Most AI adoption programs are designed for the Challengers and hope everyone else will follow. They don't. The Truster, Resister, Oblivious, Performer, and Challenger each need something meaningfully different, and a single training event can only reach one of them well.
A single adoption program creates Performers (who comply visibly but don't change) and leaves Resisters untouched. Low psychological safety is the primary factory of both. This is a design problem, not a people problem. It's the last layer to solve, not the first.
Your adoption design has different pathways for different starting positions. Engagement is continuous, bite-sized, embedded in existing rhythms, not a one-off training event. Psychological safety exists to say "I haven't been using it" without consequence. Challengers have room to go further.
Get your tailored AI Readiness report
Based on your answers, we'll send a personalised report focused on your weakest layer, plus the companion "Where are your people sitting?" tool. Straight to your inbox.
Your report is on its way
Check your inbox in the next few minutes. If it doesn't arrive, peek in your promotions or spam folder, and add nina@three6.com.au to your contacts so we land in the right place next time.
You'll also receive "Where are your people sitting?" with your emailed report. This tool shows you what your five layers are creating. The positions tool shows you where your people are starting from. Together they give you the complete picture before you design your adoption approach.