Note: This is Part 2 of a planned 3-part blog series on artificial intelligence (AI).
In Part 1 of this series, we made the case that urgency without deliberateness is how organizations end up making their most expensive AI mistakes. This is where that argument gets specific. Because for most organizations, these conversations eventually arrive at the same crossroads: should we build this ourselves or buy a platform?
It sounds like a procurement question. It is not.
It is a commitment to years of development, ongoing maintenance, domain expertise that takes time to build, and engineering capacity that cannot be spent on anything else. The leaders I speak with who have been through this once almost always say the same thing: they overestimated what the AI could do on its own and underestimated what it would take to make it work reliably over time. That gap, between what building appears to cost and what it actually costs, is what this piece is about.
The Real Conversation Happening in Boardrooms
The conversations I am having with R&D, IT, and operations leaders across manufacturing and consumer goods tend to follow a recognizable pattern. It starts with genuine excitement: we want AI-powered product development. It quickly runs into a hard question: but how do we ensure the data is actually right?
Consider the multi-layer calculation chain required to produce a compliant Nutrition Facts Panel. You start with ingredient-level nutrition values, scale each ingredient by its formula percentage, apply moisture loss calculations, convert to per-serving values, calculate Daily Value percentages using the correct regulatory reference amounts, and then apply jurisdiction-specific rounding rules — which differ between FDA, Health Canada, NOM-051, and EU standards. A small error at any step compounds downstream.
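That chain can be made concrete with a short sketch. Every number below, including the nutrient data, the moisture loss factor, and the rounding thresholds, is an illustrative placeholder rather than an actual regulatory rule; a real implementation encodes the specific FDA, Health Canada, NOM-051, and EU requirements at each step.

```python
# Illustrative sketch of the label calculation chain. All values are
# placeholders, NOT actual FDA / Health Canada / NOM-051 / EU rules.

ingredients = {
    # nutrient amounts per 100 g of each ingredient (hypothetical data)
    "flour": {"sodium_mg": 2.0, "protein_g": 10.0},
    "salt": {"sodium_mg": 38758.0, "protein_g": 0.0},
}

formula = {"flour": 98.0, "salt": 2.0}  # ingredient percentages of the batch

def per_100g_batch(formula, ingredients):
    """Steps 1-2: scale each ingredient's nutrients by its formula percentage."""
    totals = {}
    for name, pct in formula.items():
        for nutrient, per100 in ingredients[name].items():
            totals[nutrient] = totals.get(nutrient, 0.0) + per100 * pct / 100.0
    return totals

def apply_moisture_loss(per100, loss_pct):
    """Step 3: cooking drives off water, concentrating the remaining nutrients."""
    return {n: v / (1.0 - loss_pct / 100.0) for n, v in per100.items()}

def per_serving(per100, serving_g):
    """Step 4: convert per-100 g values to per-serving values."""
    return {n: v * serving_g / 100.0 for n, v in per100.items()}

def round_sodium(mg, jurisdiction):
    """Steps 5-6: jurisdiction-specific rounding. Thresholds here are invented."""
    if jurisdiction == "FDA":
        return 0 if mg < 5 else int(round(mg / 10.0)) * 10  # placeholder rule
    return int(round(mg))  # placeholder rule

batch = per_100g_batch(formula, ingredients)
cooked = apply_moisture_loss(batch, loss_pct=10.0)
serving = per_serving(cooked, serving_g=55.0)
label_sodium = round_sodium(serving["sodium_mg"], "FDA")
# An error at any earlier step (wrong nutritional profile, wrong loss
# factor, wrong threshold) propagates silently into label_sodium.
```

The point of the sketch is the shape of the problem, not the specific functions: each stage consumes the output of the previous one, so a mistake anywhere upstream surfaces only as a wrong number on the finished label.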
An LLM equipped with a calculator tool can perform arithmetic. But which rounding rule applies to calcium at a given concentration? Which supplier-specific nutritional profile should be used for this particular grade of modified starch? For a formula with 40 ingredients and nested sub-formulas, every orchestration step the AI takes is a potential point of failure. In consumer applications, those failures are inconveniences. In regulated manufacturing, they are compliance events.
Consider a different kind of failure — one that is not about calculation precision but about data connectivity. A supplier notifies you that a key ingredient is being discontinued. The immediate business question is straightforward: which of our products are affected? But answering it requires tracing a chain — from ingredient to formula, formula to finished good, finished good to packaging configuration, packaging configuration to regulatory claim. If those relationships are not explicitly defined in your underlying data model, the AI cannot trace that chain. It will return a confident answer. That answer will be incomplete or wrong. And you will not find out until you have a production issue or a compliance event.
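When those relationships are explicitly defined, the trace itself is a simple graph walk. The sketch below is hypothetical: the entity names and the link table are invented for illustration, and a real specification platform would store these relationships in a database rather than an in-memory dictionary.

```python
# Hypothetical dependency graph: parent -> entities that directly depend on it.
links = {
    "modified_starch_A": ["formula_soup", "formula_sauce"],
    "formula_soup": ["sku_soup_12oz"],
    "formula_sauce": ["sku_sauce_8oz", "sku_sauce_16oz"],
    "sku_soup_12oz": ["pack_soup_can"],
    "pack_soup_can": ["claim_low_sodium"],
}

def affected_by(entity, links):
    """Walk the graph to collect everything downstream of one entity."""
    seen, frontier = set(), [entity]
    while frontier:
        current = frontier.pop()
        for child in links.get(current, []):
            if child not in seen:
                seen.add(child)
                frontier.append(child)
    return seen

# Which products, packaging configurations, and claims does the
# discontinued ingredient touch?
impact = affected_by("modified_starch_A", links)
```

The traversal is trivial; the hard, expensive part is the data model behind it. If the ingredient-to-formula or formula-to-finished-good links were never captured, the walk simply stops early, and the answer looks complete while silently missing affected SKUs.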
The same problem plays out proactively: a regulation changes and a specific input is restricted or banned. Which SKUs are at risk? Which suppliers need to be contacted? Without connected data, that question takes weeks of manual work by domain experts to answer with any confidence. Auditors — whether internal, third party, or regulatory — consistently flag this as one of the most common and costly gaps they find.
In both cases the AI is not the problem. The absence of a connected data foundation is. And no amount of model sophistication compensates for relationships that were never defined in the first place — a point we explored in depth in a recent post on why most AI strategies have a data problem they do not see coming.
Why “We Can Build This” Is a Harder Argument Than It Sounds
When technical leaders work through what a truly compliant AI-powered system requires, I often hear: we could build this. And technically, many of them could. The question is what they are actually committing to.
Building a precision AI platform for regulated manufacturing means building a deterministic calculation engine that encodes FDA 21 CFR 101.9 rounding rules, Health Canada daily value references, NOM-051 requirements, and EU 1169/2011 standards — and maintaining it as regulations change. It means constructing a many-to-many specification data model where a single ingredient update automatically propagates across every formula and finished good that contains it. It means developing an ingredient database with validated USDA/FDA nutritional profiles and a system for ingesting supplier-specific data. It means building a claims validation engine that checks nutrient content claims against calculated nutrition values in real time.
Solving these requirements typically takes several years of development investment and significant ongoing maintenance — and that timeline assumes you have the right domain expertise in the room from the start. The data models, the validation logic, the regulatory rules, the entity relationships — none of these are designed by engineers alone. They require deep collaboration with the domain experts who understand how these workflows actually operate in practice: what the exceptions are, where the edge cases live, and what the system needs to handle that no generic training corpus would ever anticipate.
Organizations that build custom AI solutions often discover they have created a system they must now staff to maintain indefinitely. Regulations change. Supplier data changes. The underlying models change. Each of those changes requires engineering attention, often permanently. That diverts capacity from core business initiatives, creates organizational dependency on internal tools that are difficult to deprecate or upgrade, and turns what looked like a one-time investment into an ongoing operational commitment that compounds quietly over time. This is what I call the maintenance trap — and it is the cost that almost never appears in the original business case.
Rethinking the Question
Before leaders answer the “build or buy” question, they need to answer a more fundamental one: what would you actually be building? And is building that infrastructure, then maintaining it as regulations evolve, data changes, and your portfolio grows, the best use of your organization’s engineering resources?
For most organizations, the honest answer is no. Not because their teams lack capability, but because the maintenance trap is real, the domain expertise gap is real, and the opportunity cost of engineering capacity committed indefinitely to internal tooling is real. There are use cases where building is exactly the right answer — workflows so specific to how your organization operates that no vendor will ever solve them well. But those cases are narrower than most organizations assume before they start.
Which brings the decision to its logical conclusion: if you are going to buy, you need to know what you are actually buying. Because in a market where every vendor claims AI, most are selling something far thinner than it appears. That is what Part 3 of this series is about.
In the meantime, download my latest AI Executive Brief for my perspective on why most AI projects fail, or contact our team to learn more about our spec-first approach.