The pattern we keep seeing
A VP of Operations at a mid-size company reads three articles about AI productivity gains. He books a call with us. Within five minutes, the question is some version of: "Where should we be using AI?"
That question sounds reasonable. It is the wrong starting point. The companies that see measurable results from AI investments do not start there. They start with a specific, recurring, high-friction workflow and ask whether AI is the right tool for it. Sometimes it is. Often, something simpler fixes the problem faster and costs less to maintain.
The difference in approach sounds minor. The difference in outcomes is not.
Why the backwards approach fails
When you start with the technology and search for use cases, you end up evaluating AI by its novelty rather than its utility. You build things that are technically interesting but operationally marginal. The demo impresses people. Six months later, adoption is low because the system solved a problem the team did not actually feel.
We have seen this across every industry we work in. Healthcare systems that built AI to summarize clinical notes — a feature nobody asked for — while their intake process was still losing patients to 25-minute waits. Manufacturing companies that deployed predictive maintenance dashboards that nobody checked because the operators trusted their own ears more than a model score.
The technology was not wrong. The sequencing was.
The question is not "where can we use AI?" The question is "what takes the most time, creates the most errors, and happens the most often?" Answer that first.
The workflow-first method
Before we touch any technology in an AI engagement, we spend time mapping the actual workflow. Not the documented process — the real one. The one that involves three Slack channels, a shared spreadsheet that gets emailed around, and two people who are unofficial single points of failure because everyone knows they are the fastest at a particular task.
We look for four things:
- High frequency. A task that happens hundreds of times a week has more leverage than one that happens monthly, even if the monthly task is more painful each time.
- Structured input. AI works best when the input is predictable. Unstructured, chaotic inputs produce unreliable outputs. If the workflow involves pulling information from 12 different formats, the pre-processing problem is often larger than the AI problem.
- Measurable error cost. If mistakes in this workflow have a real, quantifiable cost — rework time, customer impact, compliance exposure — you can build a business case. If the cost is vague, the ROI will be vague.
- Human-in-the-loop tolerance. Some workflows benefit from AI making a recommendation that a human validates. Others need full automation. Knowing which you are building for changes the architecture significantly.
What the mapping exercise produces
A good workflow audit takes three to five business days. At the end, you have a ranked list of opportunities — not by what is technically interesting, but by expected impact per dollar of implementation cost. Most of the time, the highest-leverage opportunity is not the one the client came in expecting to build.
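To make that ranking concrete, here is a minimal sketch of the kind of scoring that can come out of an audit. The weighting scheme, the example workflows, and every number below are illustrative assumptions, not a fixed formula; the real inputs come from the workflow map.

```python
from dataclasses import dataclass

@dataclass
class Workflow:
    name: str
    runs_per_week: int            # how often the task happens
    minutes_saved_per_run: float  # time recovered if automated
    error_cost_per_week: float    # quantifiable cost of mistakes, in dollars
    input_structure: float        # 0 to 1: how predictable the inputs are
    build_cost: float             # one-time implementation estimate, in dollars

def annual_impact(w: Workflow, hourly_rate: float = 40.0) -> float:
    """Rough expected annual value: time recovered plus errors avoided,
    discounted by how messy the inputs are (messier inputs mean more
    pre-processing work and less reliable automation)."""
    time_value = w.runs_per_week * (w.minutes_saved_per_run / 60) * hourly_rate * 52
    error_value = w.error_cost_per_week * 52
    return (time_value + error_value) * w.input_structure

def rank(candidates: list[Workflow]) -> list[tuple[str, float]]:
    """Rank candidates by expected annual impact per dollar of build cost."""
    scored = [(w.name, annual_impact(w) / w.build_cost) for w in candidates]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

# Illustrative numbers only; the real figures come out of the workflow map.
candidates = [
    Workflow("Order status queries", 800, 4, 500, 0.9, 15_000),
    Workflow("Delivery delay prediction", 300, 6, 2_000, 0.5, 120_000),
]

for name, score in rank(candidates):
    print(f"{name}: {score:.1f} dollars of annual value per build dollar")
```

The specific formula matters less than where the inputs come from: frequency, error cost, input structure, and build cost are all things the audit measures before any technology decision gets made.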
In one engagement, a logistics company came to us wanting to build an AI system to predict delivery delays. When we mapped their workflows, we found that 60% of their customer service volume was queries about order status — queries that could be fully automated with a simple integration that had nothing to do with predictive models. We built that first. It freed up enough customer service capacity to justify the more complex predictive work in a second phase.
The delay prediction system was technically more interesting. The status query automation paid for itself in 60 days.
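For a sense of what a "simple integration" means in practice, here is a minimal sketch, assuming a hypothetical internal order-tracking API and a helpdesk webhook. Every endpoint, field name, and order-ID format below is invented for illustration; the real integration depends entirely on the client's systems.

```python
import re
import requests
from flask import Flask, request, jsonify

app = Flask(__name__)

# Hypothetical internal order-tracking API; stands in for the real system of record.
ORDER_API = "https://orders.example.internal/api/v1/orders"

ORDER_ID_PATTERN = re.compile(r"\b(ORD-\d{6})\b")

@app.post("/helpdesk/webhook")
def handle_ticket():
    """Answer 'where is my order?' tickets directly from the order system.
    Anything that is not a recognizable status query falls through to a human."""
    ticket = request.get_json()
    match = ORDER_ID_PATTERN.search(ticket.get("body", ""))
    if not match:
        return jsonify({"action": "route_to_agent"})

    resp = requests.get(f"{ORDER_API}/{match.group(1)}", timeout=5)
    if resp.status_code != 200:
        return jsonify({"action": "route_to_agent"})

    order = resp.json()
    reply = (
        f"Your order {match.group(1)} is currently '{order['status']}' "
        f"and is expected to arrive on {order['eta']}."
    )
    return jsonify({"action": "auto_reply", "message": reply})
```

Nothing in it is a model. The value came from the fact that the question was asked hundreds of times a week and the answer already lived in a system of record.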
When AI is genuinely the right tool
We are not AI skeptics. We build production AI systems. But we have earned the right to be specific about when AI adds real value and when it adds complexity that a simpler system would avoid.
AI is the right tool when the input is variable and hard to fully enumerate in rules — when you cannot write the logic because the patterns are too numerous or subtle. It is the right tool when the scale of the task makes human review impractical. And it is the right tool when the cost of a wrong answer is acceptable with a human in the loop, because no AI system is right 100% of the time.
It is the wrong tool when the workflow has clear, enumerable rules that a deterministic system handles better. It is wrong when the input data is too inconsistent to produce reliable outputs. And it is wrong when adoption requires trust that the system has not yet earned.
Starting the right conversation
If you are evaluating AI investments, the most useful thing you can do before talking to any vendor is spend two hours with the people who do the highest-friction work in your operation. Ask them what takes the most time. Ask what they check twice because they know it is error-prone. Ask what they would automate first if they had the ability.
Their answers will be more useful than any vendor demo. And if you come to us with that list instead of a general question about where to use AI, the conversation will be sharper, faster, and more likely to produce something that actually gets used.