Raf Alencar
Running on Default
APR 29, 2026

The Process Owns the Requirement

Every failed AI implementation has the same upstream error: the wrong question got asked before any tool was selected.

The Core Argument

Every failed AI implementation has the same upstream error: the organization asked how to make its current process work with a new tool, instead of asking what the process actually needs to accomplish. The tool is always the last decision. The process owns the requirement. When you reverse that sequence, you don't just get bad AI adoption — you get your existing dysfunction running faster.

There's a question almost every organization asks before a technology deployment. It sounds reasonable. It sounds practical. It's the question that gets asked in the vendor evaluation meeting, in the pilot planning session, in the steering committee kickoff. And it virtually guarantees a disappointing result.

The question is: how do we make our current process work with this new tool?

It sounds like the right question. It frames the technology as serving the business. It respects the existing process. It feels responsible — we're not blowing everything up, we're integrating thoughtfully.

But it starts from the wrong end.

It assumes the current process is correct — that the process requirements are fixed and the tool's job is to slot in around them. That assumption is rarely examined. And when it's wrong, every subsequent decision compounds the error.

What Happens When You Ask the Wrong Question

The wrong question produces a specific and predictable outcome. The tool gets integrated into the existing process at the point of least resistance. A step gets added to the flowchart — "consult AI," "run through the model," "check with the system." The people doing the work adjust slightly. Some time gets saved on specific tasks. The process structure remains intact.

This gets called transformation. It isn't.

What it is, more precisely, is a hotline. You added a resource to an existing process. The process still requires someone to remember the resource exists, know what question to ask, interpret the response, and decide what to do with the answer. The accountability structure is unchanged. The approval layers are unchanged. The incentive environment is unchanged.

You gave the organization a better tool for executing the same flawed process at the same speed with slightly less friction in one specific place.

The dishwasher was not designed to be a better pair of hands. It was designed to make hands irrelevant to the problem. Most AI implementations are designing a better pair of hands.

The Lindy Trap

Your current process has been running for years. Maybe decades. It has been refined, documented, trained on. People have built careers around executing it well. Its longevity feels like proof of its fitness.

But survival in the old environment doesn't mean optimality in the new one.

The process was designed around the constraints that existed when it was built — the tools available, the data accessible, the speed of information, the cost of computation, the skills required to execute. Most of those constraints have changed. Many have collapsed entirely. But the process didn't update because nobody questioned whether it should. It just kept running.

The test. Take any recurring process in your organization and ask: if you were designing this from scratch today — knowing what's now possible with AI, automation, and real-time data — would you design it this way? If the honest answer is no, you're not looking at a process that needs a better tool. You're looking at a process that needs to be rebuilt from the requirement up.

The Right Question

The right question is harder. It requires setting aside the existing process entirely — not permanently, but long enough to ask what the process actually needs to accomplish.

What outcome does this process exist to produce?

Not what does it do. What does it need to produce. The decision it needs to enable, the value it needs to generate, the problem it needs to solve. That's the requirement. The process is just the current solution to that requirement.

Once you have the requirement, the next question is: given everything available today — humans, AI agents, automation, real-time data, any combination — what would be the most effective way to produce that outcome?

The wrong question: "How do we make our current process work with this new tool?" Starts from the process. Assumes the structure is correct. Produces incremental improvement at best — dysfunction at speed at worst.

The right question: "What does this process need to accomplish — and given everything available today, what's the best way to accomplish it?" Starts from the requirement. Questions the structure. Produces genuine redesign.

Wrong question versus right question — same situation, two opposite starting points, two opposite outcomes

What Genuine Redesign Actually Looks Like

Three categories of task — what AI should own, what stays human, what gets eliminated entirely

Some steps get eliminated entirely. Not automated — eliminated. They existed because information was slow, capability was scarce, or approval was required for things that no longer require approval.

Accountability shifts upstream. When AI takes over execution, the human role doesn't disappear — it moves. Instead of executing the steps, humans become responsible for defining the parameters, setting the thresholds, deciding what the system should optimize for, and intervening when something falls outside the expected range.

The right actor gets assigned to the right task. Three categories: what AI should own entirely, what should stay human, and what should be eliminated. Most implementations only think about the first.

The Incentive Problem That Undermines Redesign

The handoff that never happened — the redesigned process reverts within six months because the environment around it didn't change

Genuine process redesign threatens existing accountability structures. When a step gets eliminated, someone who owned that step loses a function. When accountability shifts upstream, someone who was accountable for execution is now accountable for something harder to measure.

None of this is fatal. But all of it is uncomfortable. And in most organizations, the people with the authority to approve a genuine process redesign are the same people whose authority is partly derived from the existing process structure.

This is why the wrong question persists even when leaders understand it's wrong. The right question is organizationally threatening. The wrong question lets everyone keep their function while appearing to embrace the new capability. It's rational self-preservation inside an environment that was never designed to reward genuine redesign.

Which means the process redesign problem is actually an environment design problem.

The Sequence That Works

  1. Start with the outcome. What does this process need to produce?
  2. Map the requirement. What information, judgment, and action does producing it actually require?
  3. Assign actors to requirements. AI for execution at scale. Humans for judgment under ambiguity. Eliminate everything that serves neither.
  4. Design the environment to support the redesigned process. This is the step most implementations skip — and why the redesigned process reverts within six months.
  5. Then select the tools. The tools are the last decision.

The organizations seeing transformational results from AI aren't the ones that found better tools. They're the ones that asked a better question before selecting any tools at all.

Does this pattern show up in your organization? The Environment Design Assessment measures five dimensions of organizational alignment. It takes eight minutes and tells you specifically where the design was left to chance.
Take the Assessment →

Common Questions

What is the wrong question most organizations ask before AI deployment?
"How do we make our current process work with this new tool?" It sounds responsible. It assumes the existing process is correct and asks the tool to slot in around it. It produces incremental improvement at best — and dysfunction at speed at worst.
What is the right question?
"What does this process need to accomplish — and given everything available today, what is the best way to accomplish it?" Starts from the requirement, not the existing flow. Produces genuine redesign rather than a hotline grafted onto an unchanged process.
What is the Lindy Trap in process design?
The longer a process has been running, the more legitimate it feels — even when the constraints that produced it no longer exist. Survival in the old environment is not the same as optimality in the new one.
Why is process redesign actually an environment design problem?
Because genuine redesign threatens existing accountability structures, and the people with authority to approve it often derive part of their authority from the existing structure. The right question is organizationally threatening; the wrong one lets everyone keep their function. That dynamic is an environment design problem.
What is the sequence that actually works?
Start with the outcome. Map the requirement. Assign actors to requirements (AI for execution at scale, humans for judgment under ambiguity, eliminate everything else). Design the environment to support the redesigned process. Then — and only then — select the tools.