Beyond the Blueprint
The cost levers nobody budgets for, and the ones nobody pulls after go-live
Roger Tchalla | March 2026
Two months before phase one go-live, a Canadian government agency running a large-scale S/4HANA Selective Transformation could not answer a basic question: are we ready?
The architecture decisions had been sound. Selective Transformation with a clear scope boundary. Reasonable SI model. Solid data strategy. The program had already spent well beyond its original budget. And yet, with eight weeks to go, nobody in the room could confirm the status of individual workstreams, quantify the remaining risk, or say with confidence whether the go-live date was real. The governance structure that should have been providing that visibility for months had not kept pace with the program.
The response was the only one available at that point: an urgent, expensive onboarding of an army of external consultants to review every workstream, assess readiness, confirm go or no-go, supervise the go-live itself, and stabilize the system through the chaotic weeks that followed. The cost of that intervention dwarfed what continuous governance would have cost from the beginning. The program eventually went live. But the price of arriving there without visibility was paid several times over.
This is not an unusual story. It is the default story. And the reason it keeps happening is that the industry frames ERP cost management as a pre-project exercise. Choose the right implementation path. Set the right architecture. Pull the right levers before delivery starts. That framing is correct, but it is incomplete. It accounts for six of the ten levers that determine total transformation cost. The other four are invisible in most business cases.
The full map
Last year, I co-authored a piece with colleagues at BCG Platinion identifying six architecture levers that account for the majority of controllable ERP spend. Those levers are real and important. But they only cover decisions made before delivery starts.
The complete picture has three phases and ten levers.
The ten levers: from architecture to maintenance and evolution
The architecture levers are set once. The delivery levers compound every week. The operations levers run indefinitely. This article covers levers 7 and 8. The next covers 9 and 10.
Lever 7: Delivery governance
A scope decision is made in a design workshop but not formally documented. Three weeks later, a related requirement surfaces. Because the original decision is invisible, the team treats it as new scope. A change request is raised. The change request triggers integration rework. The rework delays testing. The testing delay compresses cutover preparation. Six months after a single undocumented decision, the program is four weeks late and $2M over budget.
This is the pattern. Not a technology failure or an architecture mistake. A visibility failure. The blueprint was fine. The construction management was not.
The most expensive programs I have seen were not the ones that chose the wrong implementation path. They were the ones that chose the right path and then could not govern the delivery.
The pattern repeats everywhere: RAID logs updated reactively by the person with the least context. Steering packs assembled the night before from stale data, forcing the committee into reviewing status instead of making decisions. Change requests impact-assessed through hallway conversations. Test readiness demonstrated through spreadsheet exercises that consume days and still leave gaps. Each failure is small. Collectively, they account for 20 to 60% of a program's unplanned cost.
Why it persists: intelligence vs. judgement
Governance has been difficult to maintain because it requires two fundamentally different types of work, and until recently, both were done by the same person.
Maintaining a RAID register, tracking test coverage, monitoring data migration readiness, estimating change request impact: this is intelligence work. Verifiable, repeatable, structured. It does not require a senior practitioner. It requires consistency, every day, without gaps.
Deciding what to escalate, whether a go-live date is real or aspirational, how to frame a risk for a skeptical board, when to intervene with an underperforming SI: that is judgement work. It requires experience and pattern recognition built across dozens of programs.
When intelligence work and judgement work are performed by the same person, the intelligence work always wins the calendar. The judgement work suffers in silence until it surfaces as a crisis.
The program director who should be making scope trade-off decisions spends Sunday evening assembling a status pack from three spreadsheets. The delivery lead who should be pattern-matching across workstream risks is manually reconciling test readiness evidence. The senior consultant who should be diagnosing whether the SI relationship needs intervention is updating a RAID log that nobody reads until it is too late.
Purpose-built AI can now handle the intelligence layer as a continuous, verifiable service. A RAID log maintained in real time from live program data is not a faster version of a weekly manual update. It is a structurally different tool. A steering pack generated from current data and organized around decisions is not a faster status report. It is a functional transformation of the steering committee from a reporting audience into a decision-making body.
When the intelligence layer runs continuously, senior practitioners are freed to do the work only they can do. That is where the real value of experienced practitioners lives, and it is precisely the work that gets crowded out when those same practitioners are consumed by intelligence tasks.
Lever 8: Organizational readiness
On a large S/4HANA program I was involved with, the training budget was cut by 40% in month eight to offset a scope overrun in integration testing. The go-live date held. The system worked. And in the first ninety days of production, the support desk logged more than 800 tickets, the majority of which were users unable to perform basic tasks the system was configured to handle.
The investigation cost ran to six figures. The root cause of nearly every ticket was the same: the organization was not ready to use what had been built.
Most business cases treat training and change management as a project cost to be minimized. The budget line is set as a percentage of total program spend, typically 5 to 10%, and it is the first line cut when the program is over budget. The implicit assumption is that organizational readiness is a deliverable you produce near the end and hand to the business.
That assumption is wrong, and the cost of getting it wrong does not appear in the project budget. It appears in the 24 months that follow.
Lever 8 has three components.

Training depth: how many users, how many hours, in what format. Under-investing here saves 2 to 5% of program cost and generates 3 to 10 times that amount in post-go-live support spend.

Change management scope: stakeholder alignment, communications, resistance management, executive sponsorship. The cost of getting this wrong is measured in adoption velocity: how quickly users move from the old way of working to the new one.

User enablement design: embedded help, job aids, super-user networks, ongoing reinforcement. This is the component most often entirely absent from business cases. Formal training produces short-term knowledge; enablement infrastructure produces sustained capability.
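The training-depth trade-off can be made concrete with a back-of-envelope calculation. The 2 to 5% saving and the 3 to 10x downstream multiplier come from the figures above; the $40M program cost and the midpoint multiplier are assumed values for illustration only.

```python
def training_cut_exposure(program_cost: float,
                          cut_pct: float,
                          support_multiplier: float) -> dict:
    """Estimate the post-go-live exposure created by cutting training.

    cut_pct: share of program cost "saved" by under-investing (0.02 to 0.05)
    support_multiplier: post-go-live support spend as a multiple of that
    saving (3 to 10)
    """
    saving = program_cost * cut_pct
    downstream_cost = saving * support_multiplier
    return {
        "saving": saving,
        "downstream_cost": downstream_cost,
        "net_impact": downstream_cost - saving,
    }

# Assumed example: a $40M program trims 3% of budget from training and
# lands at the midpoint of the 3-10x range.
result = training_cut_exposure(40_000_000, 0.03, 6.5)
print(result)  # roughly $1.2M "saved", $7.8M in downstream support spend
```

The asymmetry is the point: the saving books once, inside the project budget; the multiple accrues afterward, in a budget nobody is watching.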
The intelligence/judgement split applies here too. Training needs analysis, change impact mapping, and readiness measurement are intelligence work. Designing a training approach for a resistant population, managing executive sponsors through the difficult middle months, and deciding whether the metrics mask underlying resistance: that is judgement work. The intelligence activities are increasingly automatable. The judgement activities are not.
The causal chain
Levers 7 and 8 are not isolated. There is a direct chain between them and what happens after go-live.
When lever 7 is weak, undocumented decisions produce a system that diverges from its design intent. Those divergences surface as production incidents. When lever 8 is weak, under-trained users produce a workforce that cannot operate the system as intended. Those capability gaps surface as support tickets.
In both cases, the investigation cost falls on the client. In both cases, the answers already exist in the project record. The client is paying to rediscover what was already known. Levers 9 and 10, the subject of the next article, address this directly.
The AI acceleration trap
A brief but important note on AI in this context. Three distinct categories of AI are now being deployed across ERP programs, and confusing them is dangerous.
Execution-layer AI accelerates the technical build: code intelligence, fit-to-standard analysis, automated code generation. It compresses the direct cost of delivering the architecture.

Governance-layer AI addresses lever 7: continuous risk visibility, automated decision traceability, steering support. It compresses the indirect cost of poor oversight.

Adoption-layer AI addresses lever 8: training content generation, change impact mapping, readiness tracking. It compresses the cost of preparing the organization.
All three reduce total cost. Deploying one without the others creates a specific trap.
The most dangerous ERP programs of the next three years will not be the slow ones. They will be the fast ones: programs that execute at AI-augmented speed while governing at human-manual speed.
Execution AI without governance makes the program build faster while scope drift compounds faster. Code ships before the governance structure can verify alignment with the blueprint. The steering committee, still working from manually assembled packs, cannot keep pace. Execution AI without adoption investment delivers a modern system to an unprepared organization. The go-live date arrives with a well-built product and a user base that is not ready for it.
The organizations that will achieve the best cost outcomes deploy all three. The tools that accelerate the build are powerful. But they do not govern the build, and they do not prepare the organization. Those remain separate disciplines, and the seventh and eighth entries in the cost equation.
What happens after go-live
The six architecture levers determine what you are building. Levers 7 and 8 determine whether you build it on time, on budget, and with an organization that can use it.
But most organizations treat go-live as the finish line. It is not. The decisions made during the project will explain 80% of what breaks in the first 24 months. The workforce that was prepared, or underprepared, during delivery will determine how many of those issues require expensive external investigation.
Most organizations enter that phase with no structured mechanism to manage it. The project team has rolled off. The documentation is archived. The institutional memory that would connect today’s production incident to last year’s design decision has evaporated. The client pays consultants $250 to $1,000 per investigation to rediscover what the project already knew. And that cost runs not for weeks, but for years.
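How quickly that rediscovery cost compounds is easy to sketch. The $250 to $1,000 per-investigation range is from the figures above; the volume of 40 investigations per month over 24 months is an assumed figure for illustration.

```python
def rediscovery_cost(per_investigation_low: float,
                     per_investigation_high: float,
                     investigations_per_month: int,
                     months: int) -> tuple:
    """Return the (low, high) cumulative cost of paying consultants to
    rediscover what the project team already knew."""
    n = investigations_per_month * months
    return (n * per_investigation_low, n * per_investigation_high)

# Assumed example: 40 investigations per month across 24 months.
low, high = rediscovery_cost(250, 1_000, 40, 24)
print(f"${low:,.0f} to ${high:,.0f}")  # $240,000 to $960,000
```

Even at the conservative end, that is a quarter of a million dollars spent retrieving answers the organization already paid to produce.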
The next article examines how to change that.