Dominic Jainy is a seasoned IT professional whose expertise spans the transformative landscapes of artificial intelligence, machine learning, and blockchain. With a career dedicated to bridging complex technical architectures and real-world industrial applications, he has become a leading voice on how organizations can leverage emerging technologies to reduce operational friction. His insights into ERP strategy emphasize that technology is never just a software installation; it is an evolution of how a business breathes and scales.
In this discussion, we explore the critical intersections of process alignment and system configuration, moving beyond the surface-level mechanics of ERP deployment. We delve into the dangers of “shadow systems,” the balance between standardized templates and unique business nuances, and the strategic importance of reviewing core workstreams like “Order to Cash” before a single setting is configured. Dominic provides a roadmap for ensuring that digital transformations result in reliable data and high user adoption rather than fragmented manual workarounds.
When an ERP system functions technically but users still rely on spreadsheets and manual workarounds, what specific operational disconnects are usually at play? How can leadership identify these “shadow systems” early on, and what are the first steps to reintegrate those processes into the main platform?
These “shadow systems” typically emerge when the initial implementation strategy prioritizes software configuration over the actual human workflow of the business. You see this manifest as visceral frustration among staff who feel the system makes their jobs harder rather than easier, leading them to retreat into the comfort of a familiar spreadsheet. Leadership can spot these early by looking for inconsistent data across departments, or by noticing that reports lag behind the physical reality of the warehouse or the production floor. The first step toward reintegration is not a technical fix, but a structured process review to identify where the system’s logic clashes with daily operations. By mapping these bottlenecks and acknowledging where the software feels “overly complicated,” you can begin reconfiguring the platform to support—not hinder—the people using it.
Packaged implementation models offer predictable costs and fast timelines, yet they often overlook unique business nuances. How can organizations balance the need for a quick deployment with a deep process review to ensure that standard configuration templates actually fit their real-world operational reality?
The allure of a packaged model lies in its predictable, fixed pricing and defined timelines, which is incredibly tempting for a mid-sized business moving away from legacy accounting software. However, the balance is struck by insisting on a “business-first” phase before any of those standard templates are toggled on. You must treat the ERP as an operational transformation project rather than a technology project, taking the time to ask which processes must scale and which current steps are actually just manual rework. Even within a fast-tracked deployment, carving out space for a structured review ensures that the standardized flows—like order entry or purchasing—don’t become a straitjacket for a company with a unique way of handling its inventory or customer service.
Successful strategies prioritize workstreams like “Order to Cash” and “Plan to Produce” before any software is configured. Could you walk through a step-by-step process for reviewing these workflows to uncover bottlenecks, and how does this prevent the need for costly, unnecessary system customizations?
A robust review starts by gathering the actual practitioners of these workstreams—the people in procurement, production, and finance—to walk through every step from “Receive to Ship” or “Record to Report.” We look for concrete friction points, like where a planner is manually re-entering data or where a physical movement in the warehouse isn’t reflected in the digital record. By identifying these friction points early, we can often streamline the manual process itself before it ever reaches the automation stage. This preventive measure is vital because it allows us to align the business to the software’s capabilities, which dramatically reduces the need for expensive, bespoke code changes that often break during future updates.
In complex environments involving subcontracting or specialized quality inspections, standard configurations often create friction for the workforce. What specific steps should be taken to map these intricate production details into the system so that warehouse teams and planners do not revert to tracking movements externally?
In manufacturing, the devil is always in the details, such as multiple staging areas, project-driven assembly, or the specific coordination required between production and shipping. To prevent planners from reverting to external spreadsheets, you must map these “sub-steps” into the system’s routing and location logic during the design phase. If a quality inspection happens between two production stages, that must be a formal milestone in the ERP, not a note on a piece of paper taped to a pallet. When the system accurately reflects the physical movement of goods—including those subcontracted steps—the workforce begins to trust the data, and the urge to maintain a “side system” for approvals or coordination naturally vanishes.
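To make the idea of mapping “sub-steps” concrete, here is a minimal sketch in Python. It is an illustration only: the routing structure, step types, and names below are hypothetical, not Business Central objects or APIs. The point it demonstrates is that every hand-off—inspections and subcontract stages included—should exist as a formal step the system can see, so gaps that would otherwise live on a paper note can be detected during design.

```python
from dataclasses import dataclass

@dataclass
class RoutingStep:
    sequence: int
    description: str
    location: str          # staging area, work center, or subcontractor (hypothetical labels)
    step_type: str         # "production", "inspection", "transfer", "subcontract"

# Hypothetical routing: the quality inspection is a formal step with its own
# location. The hand-off from subcontract coating to final assembly is
# deliberately left without a milestone so the check below finds it.
routing = [
    RoutingStep(10, "Sub-assembly",        "WC-ASSEMBLY-1", "production"),
    RoutingStep(20, "Incoming QC check",   "QC-STAGING",    "inspection"),
    RoutingStep(30, "Subcontract coating", "VENDOR-EXT",    "subcontract"),
    RoutingStep(40, "Final assembly",      "WC-ASSEMBLY-2", "production"),
    RoutingStep(50, "Pack and ship",       "SHIP-DOCK",     "transfer"),
]

def missing_inspection_handoffs(steps: list[RoutingStep]) -> list[str]:
    """Flag hand-offs between two production/subcontract stages that have no
    formal inspection or transfer milestone recorded between them."""
    gaps = []
    for prev, nxt in zip(steps, steps[1:]):
        if prev.step_type in ("production", "subcontract") \
                and nxt.step_type in ("production", "subcontract"):
            gaps.append(f"No recorded milestone between step {prev.sequence} "
                        f"({prev.description}) and step {nxt.sequence} ({nxt.description})")
    return gaps

for gap in missing_inspection_handoffs(routing):
    print(gap)
```

In a real design workshop, the same exercise happens on a whiteboard: every physical movement gets a row, and any hand-off without a corresponding system step is a future shadow spreadsheet waiting to happen.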
When operational reporting becomes inconsistent or untrustworthy after go-live, what is the typical root cause within the initial design phase? What metrics or validation steps should a company use during implementation to guarantee that their final data outputs will be reliable for executive decision-making?
The root cause of untrustworthy reporting is almost always fragmented data caused by processes occurring outside the system; you cannot report on what you haven’t captured. If users are bypassing the ERP for inventory tracking or using side systems for approvals, the executive dashboard will never show the full picture, leading to a total lack of trust in system data. To prevent this, companies should use “process integrity” metrics during the testing phase, ensuring that every physical action has a corresponding digital transaction that balances out. Validation shouldn’t just be about whether a report runs, but whether the data in that report matches a physical count or a bank statement down to the penny.
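To illustrate what a “process integrity” check might look like, here is a minimal sketch, again using hypothetical item data rather than any real Business Central table or API. The principle it encodes is the one described above: the transactions the system has posted for an item should net out to what is physically on the shelf, and any item where they don’t is an exception to investigate before go-live.

```python
# Hypothetical posted quantity movements per item (receipts positive, issues negative).
system_transactions = {
    "WIDGET-A": [100, -30, -20],
    "WIDGET-B": [50, -10],
}

# Results of a physical count taken on the warehouse floor.
physical_count = {
    "WIDGET-A": 50,   # matches: 100 - 30 - 20
    "WIDGET-B": 35,   # mismatch: the ledger reconstructs 40
}

def integrity_exceptions(transactions, counts):
    """Return items where the posted transactions do not balance to the physical count."""
    exceptions = []
    for item, count in counts.items():
        system_qty = sum(transactions.get(item, []))
        if system_qty != count:
            exceptions.append((item, system_qty, count))
    return exceptions

for item, system_qty, count in integrity_exceptions(system_transactions, physical_count):
    print(f"{item}: system shows {system_qty}, physical count is {count}")
```

The same pattern applies to cash and approvals: reconcile what the system says happened against an independent physical or external record, and treat every difference as a process gap, not just a data-entry error.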
What is your forecast for Business Central implementation strategies?
I forecast that the industry will move away from “feature-led” implementations toward “outcome-driven” strategies where the software configuration is secondary to operational alignment. As Business Central continues to grow as the operational backbone for mid-market companies, the successful players will be those who treat the ERP as a living map of their business rather than a static database. We will see a significant rise in the use of automated process mining tools to identify those “shadow systems” before they become entrenched habits. Ultimately, the companies that achieve the highest return on investment will be the ones who realize that the most powerful tool in their arsenal isn’t the code itself, but a deep, documented understanding of how their business actually works from the ground up.
