Last year, a regional HVAC equipment manufacturer in Nebraska ran a 90-day AI pilot. They licensed a platform, stood up a chatbot, and connected it — loosely — to their Dynamics AX instance. The goal was to help service techs pull part numbers and warranty status from the field without calling the office.

Ninety days later, the pilot was quietly shelved. Not because the technology failed. Because nobody had mapped out which data lived where, the techs didn't trust answers they couldn't verify, and the platform vendor had moved on to their next sale.

This story is not unique. It's close to the rule.

The Pilot Trap

According to a 2024 survey by LNS Research, over 60% of manufacturing AI initiatives never make it past the pilot phase. A separate analysis by McKinsey found that companies running disconnected AI proofs-of-concept captured less than 10% of the expected value compared to those that deployed into live workflows.

The problem isn't the technology. The problem is the approach.

Most pilots are designed to impress — to produce a demo that looks good in a boardroom. They're not designed to survive contact with real operations: a dispatcher juggling 14 open work orders, a plant manager who needs the answer in 30 seconds, or a parts coordinator whose ERP screen still runs on Internet Explorer.

What Goes Wrong (In Plain Terms)

1. The data isn't where they said it was.

Your ERP has been running for 15 years. Data is split across three modules, two spreadsheets on the plant floor server, and a SharePoint folder nobody's touched since 2019. Pilots that don't account for this hit a wall fast.

2. Nobody owns the output.

An AI tool that surfaces a suggested answer is only useful if someone trusts it enough to act on it. That trust doesn't come from a demo — it comes from accuracy over time, and from techs and coordinators being part of the rollout, not subjects of it.

3. The workflow integration is fake.

Copy-pasting an answer from a chatbot into your ERP is not automation. It's a different kind of manual work. Real value comes from systems that write back — that close the loop inside the tools your people already use.
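To make the distinction concrete, here is a minimal sketch of what write-back looks like. Every name in it, from the endpoint to the field names to the confidence threshold, is a hypothetical placeholder rather than any particular ERP's API; the point is that the suggestion lands in the work order through the system's own interface, flagged for review, instead of through a dispatcher's clipboard.

    # Sketch: "write back" means the AI's suggested part number is posted
    # to the work order through the ERP's API, not copy-pasted by hand.
    # The endpoint, field names, and auth below are hypothetical placeholders.
    import requests

    ERP_BASE = "https://erp.example.com/api"  # placeholder, not a real endpoint
    API_TOKEN = "..."                         # in practice, injected from a secrets store

    def write_back_suggestion(work_order_id: str, part_number: str, confidence: float) -> None:
        """Attach an AI-suggested part number to a work order, flagged for human review."""
        if confidence < 0.9:
            return  # below threshold: leave it to a human rather than write back silently
        resp = requests.patch(
            f"{ERP_BASE}/work-orders/{work_order_id}",
            headers={"Authorization": f"Bearer {API_TOKEN}"},
            json={"suggested_part": part_number, "source": "ai-assist", "needs_review": True},
            timeout=10,
        )
        resp.raise_for_status()  # fail loudly so a bad write never goes unnoticed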

What Actually Works

The manufacturers and field service ops teams we've seen succeed share one pattern: they started with a specific, painful workflow — and they finished it.

Not a broad pilot. Not a platform evaluation. A single workflow, owned by a real person, connected to their actual data, with a measurable outcome: hours saved per week, emergency orders avoided per month, tickets resolved without escalation.

One industrial distributor we worked with focused exclusively on parts lookup triage — the 20 minutes a dispatcher spent per ticket pulling status from three screens. After six weeks, that workflow ran in under three minutes. The dispatcher didn't need to be told it was working. She felt it on the first Thursday after go-live.

That's the bar. Not "the pilot was successful." Not "stakeholders are excited." Did someone's day get measurably better?

Before You Run Another Pilot

If you're considering an AI initiative — or recovering from one that stalled — ask three questions before spending another dollar:

  1. Which single workflow, if fixed, would save the most time or prevent the most costly mistakes?
  2. Where does the data for that workflow actually live, and can a system reach it?
  3. Who on your team will own the result after the vendor leaves?

If you can answer all three, you're ready to move. If you can't, that's exactly what needs to get sorted first.

Ready to Find Out Where You Actually Stand?

The AI Discovery Sprint is a focused 2–3 week engagement where we identify your highest-value workflows, assess AI feasibility, and deliver a prioritized roadmap with ROI estimates — not a generic deck, but a plan your team can act on.