Patrick Kübler

February 25, 2026

15 minutes

Industrial Engineering

Why waiting for high data quality is a bad strategy—and why you should start AI automation in industrial engineering now.

Three monitors, one problem: on the left a technical drawing, in the middle an Excel sheet with cycle times, workstations, and tools, on the right a simulation tool that models assembly and material flow. And in between—me, the engineer, moving information from A to B instead of planning.

For years, I worked as a mechanical engineer designing production and assembly systems: planning specific product variants to minimize waiting time for assemblers, optimizing machine utilization to reduce setup times, and redesigning or building factories. And I repeatedly saw how much of my (engineering) time disappeared into administrative work.

In this article, I’ll show which parts of industrial engineering can realistically be automated today—and where AI still hits limits without clean data and governance (spoiler: later than most people expect).

From customer projects in mechanical and plant engineering and from conversations with more than 20 companies over the past weeks, one pattern keeps coming up (see also my post with the key findings: https://www.wailand.io/blog/20-interviews-in-6-wochen): the bottleneck is often not on the shop floor, but before it—especially in industrial engineering, work preparation, and manufacturing and assembly planning. When teams there can’t keep up, customer orders stall—leading to unhappy customers and, in the worst case, threatened revenue. But why is that?

I. Why industrial engineering becomes the bottleneck

I’m a fan of describing the problem clearly first and only then choosing the right solution (and not: “Let’s run workshops and find out where we can use AI.”). That’s why it’s important to distinguish two areas: what can fundamentally be derived because rules and parameters exist—and what is truly new because customer requirements, geometry, or process requirements are not (yet) captured in a rule set.

1) Case 1: Configurable, but not end-to-end automated

In mechanical engineering—even for special-purpose machines—the configurable share of a customer variant (or at least the share that can be combined from existing modules) is often larger than the part that truly needs new design or major adaptation. The exact split varies by product, maturity of the modular kit, and customer requirements—but the trend matters: the larger share is “known” in content, yet it is not automatically carried through into manufacturing and assembly planning.

What happens next is paradoxical: although the variant is largely defined, automation stops in CPQ or an “Excel monster.” Routings/work plans, assembly documents, and inspection planning are then recreated—often by copy/paste from references plus manual adjustments. The process is manual not because it technically has to be, but because system boundaries (CPQ/PLM/ERP/MES), missing data standards, and missing versioning break end-to-end continuity. If, for example, a routing template was revised but it is not clearly traceable whether a specific order uses the old or the new version, automation cannot derive plans safely—and the planner intervenes manually again.

2) Case 2: No rule set — but still reusable

The non-fully configurable share is often dismissed as “not automatable.” That’s too simplistic. Rules may be missing for truly new content—but there is almost always historical knowledge: similar parts, similar customer requirements, previous routings/work plans, feedback data from production, proven inspection characteristics. The problem is rarely “no knowledge,” but “knowledge not findable”: data is scattered across ERP, PLM, CAD archives, file systems, and people’s heads.

Automation in this second area therefore does not mean “apply a rule set,” but rather: reliably find relevant references, make differences visible, and derive a robust first draft that experts then validate. A CAD similarity analysis can help here, for example, by reliably finding reference parts—initially based on geometric similarity, later expanded with attributes such as material and tolerances, or via similarity search based on customer requirements or product features.

3) Where engineering time really goes

In both cases, I often observe the following engineering tasks that are not directly value-adding:

  • Checking documents and ensuring completeness (often with media breaks between CAD, PLM, ERP, email, and files).


  • Searching for references: similar parts, earlier orders, old routings/work plans, suitable resources—often via naming conventions that have grown organically in part numbers, or via personal experience.


  • Considering plant standards: norms, process specifications, inspection requirements, approved machines/tools—often as PDF/Word/Confluence, rarely machine-readable.


  • Selecting machines, tools, and processes: not only “what is possible in general,” but “what works here” (machine park, availability, capability, lot size).


  • Creating, releasing, and transferring routings, assembly sequences, and inspection documents into ERP/MES.

The core: a large part of the time goes not into technical decision-making, but into searching for information, reconciling it, and doing document work. This is especially painful because the value-creating engineering work—process selection, risk assessment, optimization—gets pushed to the last minute.

II. Why previous approaches fall short

1) PLM: manages design data, but the last mile remains open

PLM systems today mainly cover design data: CAD, drawings, EBOM, changes. In practice, end-to-end continuity often ends right there. Manufacturing logic (MBOM, routings/work plans, resources, times, inspection planning) remains organizationally and systemically distributed—partly in ERP, partly in MES, partly in specialized tools and Excel. In that case, PLM helps only to a limited extent because the real planning work still happens across media breaks.

2) Custom integration: expensive from the start—and a permanent construction site

Custom interfaces between PLM/ERP/MES and engineering tools can close individual gaps. But even the initial project is demanding: most software developers and IT consultants bring little experience in the complex world of industrial engineering. Before the first line of code is written, they have to understand the domain relationships—this costs time and money, and the customer pays for that ramp-up. After that, it doesn’t get easier: data models change, processes evolve, systems are upgraded. Integration becomes a permanent construction site.

3) Rule sets: powerful—with two structural limits

Rule sets are an important foundation. Two challenges remain: (a) rule sets often end at the configurator and are not “carried through” into routings/work plans, assembly, and inspection planning. (b) rule sets age and require maintenance. And for truly new content, rules cannot help by definition—there you need reference knowledge and data-driven suggestions.

This isn’t just my observation: when studies and practice reports discuss AI in engineering, the same root cause keeps showing up: distributed data, missing interoperability, and missing end-to-end continuity (here I like to refer to the joint whitepaper by Accenture, DFKI, and Fraunhofer ISST “AI in New Product Development” from 2025).

III. What is realistically possible today

1) Derive the configurable share end to end

If a variant is largely configurable, the result should not end as a “document,” but flow onward as structured manufacturing and assembly information: MBOM, routing/work plan structure, standard operations, variant parameters, inspection characteristics. The goal is not to replace the planner, but to shift the focus from creating to validating generated proposals: the platform automatically generates a consistent first draft—the work preparer reviews exceptions and releases it.

Practically, this means: an integration layer that connects CPQ/PLM/ERP/MES, plus a model that translates rules and parameters into concrete routings/work plans, assembly sequences, and inspection documents. Versioning (which rule applies to which state?) and traceability (why was this operation/variant chosen?) are critical.
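
As a minimal sketch of the versioning and traceability point above: each derived routing draft should record exactly which template and which version produced it. All names here (`RoutingTemplate`, `derive_routing`, the sample operations) are illustrative assumptions, not a real platform API.

```python
from dataclasses import dataclass

# Hypothetical sketch: pin every derived routing draft to the exact
# template version it was generated from, so "old vs. new template"
# is always traceable at the order level.

@dataclass(frozen=True)
class RoutingTemplate:
    template_id: str
    version: int
    operations: tuple  # ordered standard operations with parameter slots

@dataclass
class RoutingDraft:
    order_id: str
    operations: list
    # provenance: which template and version produced this draft
    source_template: str
    source_version: int

def derive_routing(order_id: str, template: RoutingTemplate,
                   variant_params: dict) -> RoutingDraft:
    """Instantiate a routing draft and record full provenance."""
    ops = [op.format(**variant_params) for op in template.operations]
    return RoutingDraft(order_id, ops,
                        source_template=template.template_id,
                        source_version=template.version)

tpl = RoutingTemplate("shaft-standard", version=3,
                      operations=("turn to d={d}mm", "ream fit {fit}"))
draft = derive_routing("ORD-1001", tpl, {"d": 40, "fit": "H7"})
print(draft.source_template, draft.source_version)  # shaft-standard 3
```

The point of the frozen dataclass is that a released template state cannot be mutated after the fact—the answer to "which rule applies to which state?" stays stable.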

2) Support the customer-specific share

For truly new content, 100% automation is the wrong expectation. A realistic approach is: the platform finds reliable references, makes differences explicit, and proposes a first draft. This is a productivity lever because the planner doesn’t start from zero.

A robust reference search must combine multiple dimensions:

  • Geometry: 3D models can be processed so that similar shapes become automatically discoverable (so-called shape embeddings, i.e., numerical representations of geometry)—typically after suitable normalization independent of position and scale, and largely independent of the data format (e.g., B-Rep surface models or triangle meshes).


  • Semantics: shape alone is not enough. Material, tolerance class, surface finish, safety requirements, or lot size can strongly influence process selection.


  • Order history: similar projects provide not only parts, but also proven routings/work plans, inspection characteristics, supplier decisions, and resource/equipment decisions.

The value comes from the combination: “similar part” + “similar requirements” + “similar constraints.”
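
To make that combination concrete, here is a hedged sketch of one way to fuse the three dimensions into a single reference score. The weights, the embedding values, and the `history_score` field are pure illustration—real systems would learn or tune these.

```python
import math

# Illustrative sketch: combine geometric, semantic, and order-history
# similarity into one reference score. Weights and field names are
# assumptions, not a fixed recipe.

def cosine(a, b):
    """Cosine similarity between two shape embeddings."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def attribute_match(a: dict, b: dict) -> float:
    """Fraction of shared attributes (material, tolerance, ...) that agree."""
    keys = set(a) & set(b)
    return sum(a[k] == b[k] for k in keys) / len(keys) if keys else 0.0

def reference_score(query, candidate,
                    w_geo=0.5, w_sem=0.3, w_hist=0.2) -> float:
    geo = cosine(query["shape_embedding"], candidate["shape_embedding"])
    sem = attribute_match(query["attributes"], candidate["attributes"])
    hist = candidate.get("history_score", 0.0)  # e.g. reuse frequency
    return w_geo * geo + w_sem * sem + w_hist * hist

query = {"shape_embedding": [0.9, 0.1, 0.3],
         "attributes": {"material": "42CrMo4", "tolerance": "IT7"}}
cand = {"shape_embedding": [0.8, 0.2, 0.3],
        "attributes": {"material": "42CrMo4", "tolerance": "IT9"},
        "history_score": 0.7}
print(round(reference_score(query, cand), 3))  # ≈ 0.786
```

A candidate that is geometrically close but made from a different material, or never reused before, is correctly ranked below a slightly less similar shape with a matching requirement profile.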

3) Shared building block: from product features to manufacturing logic

Both scenarios require translating customer requirements and product features into manufacturing-relevant characteristics (e.g., holes, fits, tolerance classes, surfaces, materials, welds, assembly features, inspection characteristics). From this, you can derive process candidates—but not as a hard “if-then” automation without context.

Example: a fit H7 is a strong signal that a finishing operation (e.g., reaming, honing, fine grinding, fine boring) will likely be required. Which manufacturing method makes sense depends on diameter, material, upstream process, lot size, quality requirements, and the available machine park. A good platform represents this decision as logic with parameters and constraints—and explains its reasoning instead of feigning AI certainty.
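
The H7 example can be sketched as parameterized logic that returns a proposal plus its reasoning—and escalates when no capable process exists. All thresholds (20 mm bore, lot size 500) and process names are invented for illustration.

```python
# Hedged sketch: the "H7 fit needs finishing" rule as logic with
# context parameters and constraints, returning a reasoned proposal
# rather than a hard if-then. All thresholds are illustrative.

def propose_finishing(fit: str, diameter_mm: float, lot_size: int,
                      available: set):
    """Propose a finishing operation with an explanation, or escalate."""
    if not fit.upper().startswith("H7"):
        return None, "no tight fit detected; no finishing operation proposed"
    candidates = []
    if diameter_mm <= 20 and "reaming" in available:
        candidates.append(("reaming", "small bore, reamer available"))
    if lot_size >= 500 and "honing" in available:
        candidates.append(("honing", "large lot justifies honing setup"))
    if "fine_boring" in available:
        candidates.append(("fine_boring", "fallback on boring mill"))
    if not candidates:
        return None, ("H7 requires finishing, but no capable process "
                      "is available: escalate to the planner")
    op, reason = candidates[0]
    return op, f"H7 fit on d={diameter_mm}mm, lot={lot_size}: {op} ({reason})"

op, why = propose_finishing("H7", 12.0, 50, {"reaming", "fine_boring"})
print(op)   # reaming
print(why)
```

The key design choice: the function never returns a bare answer—every proposal carries the reasoning string a planner can check.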

For standard times, using feedback data is promising in principle—but only if data is segmented cleanly (setup/machining/downtime), outliers are handled, and context features (machine, material, lot size, tool condition) are taken into account. Otherwise you don’t get “precision,” just misleading pseudo-accuracy.
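
A minimal sketch of that segmentation idea, assuming feedback records carry a phase label and context fields: group by context, trim outliers with an interquartile-range fence, and take a robust median. Field names and the 1.5×IQR factor are assumptions.

```python
import statistics
from collections import defaultdict

# Illustrative sketch: derive standard-time candidates from feedback
# data by segmenting on context (machine, material) and using a robust
# median after trimming outliers. Thresholds are assumptions.

def standard_time_candidates(records, iqr_factor=1.5):
    groups = defaultdict(list)
    for r in records:
        if r["phase"] != "machining":      # segment: ignore setup/downtime
            continue
        groups[(r["machine"], r["material"])].append(r["minutes"])
    result = {}
    for key, times in groups.items():
        times.sort()
        if len(times) >= 4:
            q1, _, q3 = statistics.quantiles(times, n=4)
        else:
            q1, q3 = times[0], times[-1]
        lo = q1 - iqr_factor * (q3 - q1)
        hi = q3 + iqr_factor * (q3 - q1)
        kept = [t for t in times if lo <= t <= hi]  # drop outliers
        result[key] = statistics.median(kept)
    return result

records = [{"phase": "machining", "machine": "M1", "material": "steel",
            "minutes": m} for m in (11.5, 11.8, 12.0, 12.2, 12.5, 40.0)]
records.append({"phase": "setup", "machine": "M1", "material": "steel",
                "minutes": 30.0})
print(standard_time_candidates(records))  # the 40-minute breakdown is dropped
```

The 40-minute record (e.g., an unlogged machine stoppage) and the setup record are both excluded, so the candidate reflects actual machining time—exactly the "pseudo-accuracy" trap the paragraph above warns about.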

4) Making plant standards usable for AI

Many plants have extensive guidelines: process approvals, inspection specifications, requirements for surface treatment, or approval lists for manufacturing methods and machines. For AI to use these guidelines, so-called RAG (retrieval-augmented generation—AI searches existing documents and grounds its answers in the content) can be used. Three conditions must be met: (1) documents are versioned and uniquely referenceable, (2) every AI suggestion must include the specific source—i.e., which document, which version, which passage—so the engineer can verify instead of blindly trusting, (3) there are approval rules that prevent suggestions from being written into ERP/MES unchecked.

Under these conditions, RAG can deliver real value: engineers no longer have to dig through folder structures and PDF collections, but receive the relevant standard or guideline—with a reliable reference—directly in their work context.
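
The retrieval half of such a RAG setup can be sketched as follows. The toy keyword scoring stands in for real embedding-based retrieval; the point is condition (2): every hit carries document id, version, and passage. All document names and contents are invented.

```python
from dataclasses import dataclass

# Hedged sketch of source-grounded retrieval over versioned plant
# standards: every hit cites document, version, and passage so the
# engineer can verify instead of blindly trusting.

@dataclass(frozen=True)
class Passage:
    doc_id: str
    version: str
    section: str
    text: str

STANDARDS = [
    Passage("PS-017", "v4", "3.2",
            "Reaming is the approved finishing process for H7 bores up to 20 mm."),
    Passage("PS-017", "v4", "5.1",
            "Honing requires release by the process planning lead."),
    Passage("PS-031", "v2", "1.4",
            "Aluminium parts must not be pickled before anodising."),
]

def retrieve(query: str, passages, k=1):
    """Return the top-k passages by naive keyword overlap."""
    terms = set(query.lower().split())
    scored = [(len(terms & set(p.text.lower().split())), p) for p in passages]
    scored = [(s, p) for s, p in scored if s > 0]
    scored.sort(key=lambda sp: -sp[0])
    return [p for _, p in scored[:k]]

hits = retrieve("approved finishing process for H7", STANDARDS)
for p in hits:
    # the AI suggestion must cite exactly this: document, version, passage
    print(f"[{p.doc_id} {p.version} §{p.section}] {p.text}")
```

If a passage cannot be cited this way, the answer must not reach ERP/MES—that is condition (3), the approval rule.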

IV. What slows things down in practice

1) Data quality is not a reason to wait

Data quality is often cited as the reason why AI-based automation “doesn’t work for us yet.” I know this—and I’ve also experienced automations where data quality had to be improved first. That happens. But in practice, I often encounter a dangerous narrative: “We have to clean up first before we can start with AI.” What follows are years-long master data projects without a concrete problem focus—and in the end, they rarely produce something that truly enables automation.

The reality is: for many process automations, the existing data is already good enough. The pragmatic path is to start where it already works today—and fix data quality issues when they occur concretely, not when you imagine they might occur someday. If you wait until everything is perfect, you’ll be overtaken by those who start with what they have. This does not mean you should cobble together one-off fixes for every data problem. If early automations show that, structurally, a data platform is missing, then you build it with clean architecture—but driven by concrete use cases from real projects, not artificially brainstormed “data use cases.”

A good starting point is often historical work and assembly plans: they were executed that way, so it’s proven they work. From there, you can iteratively derive better proposals, better time models, and better rule sets.

2) Knowledge graph: the backbone for search and traceability

If product data, manufacturing knowledge, and plant standards live in different systems, there is no shared language connecting everything. A knowledge graph (a network that explicitly models objects and their relationships) provides exactly that, enabling search, explainability, and validation.

Concretely: the graph knows the relevant objects—parts, manufacturing features, operations, machines, tools, plant standards, measurement characteristics—and how they relate: which feature requires which operation, which machine is approved for which material, which manufacturing method has proven itself for similar parts.

This enables three things that are hard without a graph:

  1. Rules and plausibility checks. For example: “Operation X is not approved for material Y”—such checks run automatically before a routing/work plan goes to release.


  2. Explainable AI suggestions. If the platform proposes a routing/work plan, it can explain why: “A similar shaft with h6 and 1 m length was turned on this machining center before.” The engineer sees not only the result, but the path to it.


  3. Protection against AI hallucinations. Because suggestions are grounded in linked facts in the graph, the risk of hallucinations decreases significantly—provided the underlying data is correct and the application enforces source/reference requirements.
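
Point 1 above can be sketched with a knowledge graph reduced to its simplest form: a set of (subject, relation, object) triples plus a check that runs before release. The operation, material, and relation names are invented for illustration.

```python
# Minimal sketch of a knowledge graph as (subject, relation, object)
# triples, used for a pre-release plausibility check like
# "operation X is not approved for material Y".

TRIPLES = {
    ("hard_turning", "approved_for", "42CrMo4"),
    ("hard_turning", "approved_for", "100Cr6"),
    ("reaming", "approved_for", "42CrMo4"),
}

def check_routing(routing, material):
    """Return plausibility violations before a routing goes to release."""
    violations = []
    for op in routing:
        if (op, "approved_for", material) not in TRIPLES:
            violations.append(
                f"operation '{op}' is not approved for material '{material}'")
    return violations

print(check_routing(["hard_turning", "reaming"], "42CrMo4"))  # []
print(check_routing(["reaming"], "100Cr6"))
```

Real graphs use richer stores and query languages, but the principle is the same: a suggestion is checked against explicit facts, and any missing edge surfaces as a named violation rather than a silent error.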

3) Governance: automation you can control

What happens if an automatically generated routing/work plan is written into ERP unchecked—and the standard time is wrong? Or if an AI suggestion is based on an outdated plant standard that has long been replaced? Anyone using AI in ERP/MES-adjacent processes needs hard guardrails. Otherwise, automation turns into an uncontrollable black box.

Four requirements have proven essential in practice:

  1. Source grounding: every suggestion must reference concrete data or standards, not just “sounds plausible.” If the platform cannot say where a recommendation comes from, it must not be written into a system of record.


  2. Versioning: rules, standards, and derivation logic are clearly tied to a specific version/state. Without this, it’s impossible to trace whether a routing/work plan was created based on current or outdated guidance.


  3. Approval workflows: the higher the risk or the newer the variant, the more human review is required before writing into ERP/MES. Automation needs clearly defined boundaries—new material classes or new manufacturing methods are escalated by default.


  4. Monitoring: if planned times and actual times in production drift systematically, it’s a signal that standard times or rules need maintenance.
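
The monitoring requirement can be sketched as a simple drift signal on planned vs. actual times; the 10% threshold and minimum sample count are assumptions, not recommendations.

```python
import statistics

# Illustrative sketch of the monitoring guardrail: flag a routing for
# maintenance when actual times drift systematically from planned
# times. Threshold and window size are assumptions.

def drifts(planned: float, actuals: list, threshold: float = 0.10,
           min_samples: int = 5) -> bool:
    """True if the median actual time deviates > threshold from plan."""
    if len(actuals) < min_samples:
        return False                      # not enough evidence yet
    deviation = (statistics.median(actuals) - planned) / planned
    return abs(deviation) > threshold

print(drifts(12.0, [12.1, 12.3, 11.9, 12.2, 12.0]))  # False: within plan
print(drifts(12.0, [13.8, 14.1, 13.6, 14.0, 13.9]))  # True: ~16% over
```

Using the median rather than the mean keeps a single disturbed order from triggering the signal; only a systematic shift does.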

V. Three takeaways for your path

1) The fastest lever: use what already exists—end to end

Many teams focus first on “more configuration.” The faster lever is often to use the already configurable share consistently—down to a release-ready routing/work plan, assembly instruction, and inspection plan. Work shifts from creating to validating.

2) “Not automatable” does not mean “not supportable”

The customer-specific share is often dismissed too quickly as “nothing we can do.” Yet there is almost always historical knowledge—similar parts, proven routings/work plans, earlier decisions. Making this knowledge findable and connected gives the planner a solid starting point instead of a blank sheet. It does not replace experience, but it saves hours.

3) The role of the industrial engineer becomes more demanding—not obsolete

AI primarily replaces search and document work—the administrative work that consumes a large share of engineering time today. Human work shifts toward decision-making, risk assessment, optimization, and release. Investing here doesn’t make the engineer obsolete—it makes their work more effective.

Conclusion: three monitors, a lot of administrative work, and in the middle an engineer who has better things to do. It doesn’t have to stay that way. The technical building blocks exist—and those who start assembling them will reach the goal faster than those who wait for the perfect data foundation.

About the author

I’m Patrick Kübler, mechanical engineer and co-founder of wailand. Over 10+ years and in more than 30 industrial projects—from the shop floor to executive level, from mid-sized companies to MDAX groups—I’ve seen how much engineering time is lost to administrative work.

That’s exactly why my co-founder Martin Peters and I built wailand: a platform that automates variant creation end to end in mechanical and plant engineering. Concretely: wailand connects CPQ, PLM, CAx, ERP, and MES via an integration layer, links product and manufacturing knowledge in a knowledge graph, and automates engineering processes on top—from similarity search to routing/work plan generation to release. Every suggestion is traceable, versioned, and grounded in sources.

If you face similar challenges or wish to exchange ideas: Contact me on LinkedIn or at info@wailand.io.

Let’s talk

We work with companies that build highly variant products and want to rethink variant engineering from the ground up. If you’d like to explore where automation can create the biggest impact in your organization, let’s talk.
