Why Most Workday Operating Models Look Fine on Paper and Fail in Practice

Six months after go-live, most Workday teams are in trouble and don't realise it yet.
The implementation partner has rolled off. The internal team has been told to "run the system." Feature releases are arriving twice a year whether you're ready or not. Business partners are submitting enhancement requests faster than anyone can triage them. And the operating model that looked sensible during hypercare is quietly buckling under the weight of real operational demand.
I see this pattern repeatedly. An organisation invests millions in a Workday implementation. They design an operating model during the final months of the programme, usually under time pressure, usually based on assumptions about steady-state workload that turn out to be wrong. They staff it, stand it up, and move on. Within six to twelve months, the model is failing in ways that don't show up on any dashboard but are obvious to anyone working inside it.
The system isn't broken. The operating model is. And the difference between organisations that get sustained value from Workday and those that spend years just keeping the lights on comes down to how that model is designed, governed, and adapted.
What "Failure" Actually Looks Like
Workday operating model failures are rarely dramatic. Nobody sends an email saying "the operating model has collapsed." Instead, the failure is slow, cumulative, and easy to rationalise.
Decision rights blur. A compensation change that should require HR leadership approval gets made by an HRIS analyst because the approval process takes too long. Nobody notices until the downstream reporting is wrong. Ownership becomes implied instead of explicit. An integration starts failing intermittently. Is it the HRIS team's problem? IT's problem? The AMS partner's problem? The implementation partner who built it but rolled off eight months ago? The answer is unclear, so the issue gets escalated, discussed, and deferred while the business works around it manually.
Everything urgent flows to the same three or four people. These are the individuals who were deeply involved in the implementation and understand how the system actually works. They become the default escalation path for every problem regardless of whether it falls within their remit. Their capacity is consumed entirely by reactive support, which means the work that would reduce the reactive burden, the optimisation, the feature adoption, the process improvements, never gets done.
Long-term improvements lose to short-term fixes. The team knows they need to redesign the absence configuration to handle a new policy, but they're too busy patching the current setup every pay period. They know the security model needs rationalising, but they can't free up the time because they're managing access requests manually. The backlog of improvements grows. The operating model calcifies around workarounds.
I worked with an organisation eighteen months post-go-live that had accumulated over 140 open enhancement requests. Their Workday team of six was spending roughly 80% of their time on break-fix and BAU support. The remaining 20% was absorbed by semi-annual feature release testing. Zero capacity was allocated to optimisation. They were running a multimillion-pound platform at a fraction of its potential because their operating model had no structural provision for anything beyond keeping the system running.
The Root Cause: Models Designed for Steady State in a System That Never Sits Still
The fundamental flaw in most Workday operating models is that they're designed for a world that doesn't exist. They assume a steady-state workload where the team's primary job is to maintain the system as it was delivered at go-live.
Workday doesn't work that way.
Twice a year, a feature release arrives containing hundreds of changes. Some are cosmetic. Some affect core business processes. Each one needs to be reviewed, tested, and either adopted or deferred. That alone represents a significant recurring workload that most operating models underestimate.
Beyond feature releases, the business itself is changing. Regulatory requirements shift. The organisation acquires a company or divests a division. A new CHRO arrives with different priorities. A compensation review reveals that the current configuration doesn't support the new pay philosophy. A country expansion requires payroll and benefits in a jurisdiction the system wasn't configured for.
Each of these changes requires Workday configuration work, testing, change management, and governance. If the operating model was designed for "business as usual," none of this capacity exists. So it gets borrowed from the support function, which degrades support quality, which creates more reactive work, which consumes more capacity. The spiral is predictable and, once established, remarkably difficult to break.
The organisations that sustain value from Workday post-go-live are the ones that designed their operating model for continuous change from the start. They built in capacity for three distinct workstreams, and they protect that capacity structurally rather than hoping it survives contact with daily operational demands.
The Three Workstreams Every Operating Model Needs
The most effective post-go-live Workday teams I've worked with organise their capacity around three distinct workstreams, each with its own governance, prioritisation, and resource allocation.
Run: keeping the system stable and supported. This is break-fix, BAU configuration changes, access management, payroll support, reporting requests, and incident resolution. It's the workstream most operating models are designed for, and the one that expands to consume all available capacity if it isn't bounded. The key discipline here is defining what "run" actually includes and, more importantly, what it doesn't. If run absorbs enhancement work and feature release testing, those activities will always lose to the next urgent payroll issue.
Improve: optimising what exists. This is the workstream that delivers the return on your Workday investment over time. Redesigning a business process that's creating manual workarounds. Enabling a feature that was deferred during implementation. Automating a report that three people currently build manually every month. Rationalising the security model so access requests don't require a four-step approval chain. These improvements are rarely urgent, which is exactly why they need protected capacity. If they compete for resources with run activities in a single backlog, they will always be deprioritised. The organisations that get this right ring-fence a minimum percentage of team capacity, typically 20-30%, for improvement work and treat that allocation as non-negotiable.
Evolve: expanding how the business uses Workday. This is new module adoption, new country rollouts, new integration builds, and strategic changes to how the platform supports the business. Evolve work is project-shaped rather than operational. It has a defined scope, a timeline, and resource requirements that sit outside BAU capacity. The critical governance question here is how evolve work gets funded and staffed. If it's expected to come from the existing Workday team's capacity on top of their run and improve responsibilities, it won't get the attention it needs and it'll degrade performance across all three workstreams.
These three workstreams aren't a nice framework for a slide deck. They're an operational discipline. The moment you stop protecting the boundaries between them, run consumes everything and the organisation stops getting incremental value from a platform it's paying significant licence fees to operate.
Where Accountability Breaks Down Post-Go-Live
During implementation, accountability is relatively clear. The SI owns delivery. The client owns decisions. The programme director or PM coordinates between them. Governance structures exist. Steering committees meet. Someone is watching.
After go-live, that clarity evaporates. The SI leaves. The programme governance structures are disbanded. The Workday team is absorbed into HRIS, IT, or a shared services function. Accountability becomes a function of organisational chart proximity rather than deliberate design.
The most common accountability gaps I see in post-go-live operating models fall into predictable categories.
Cross-functional ownership. Workday doesn't respect organisational boundaries. A single business process might involve HR, Finance, IT, and Payroll. When that process breaks or needs changing, who owns the decision? If the answer requires convening a meeting of four department heads, the decision will take weeks. If no one owns it, the HRIS analyst will make a configuration choice that may or may not align with what Finance or Payroll needs.
Vendor management. Most organisations engage an AMS (Application Management Services) partner post-go-live to supplement internal capacity. The relationship between the internal team and the AMS partner needs explicit governance: who triages work, who approves configuration changes, who is accountable for quality, who manages the AMS partner's backlog and priorities. Without that clarity, the AMS partner optimises for ticket throughput rather than outcome quality, and the internal team spends as much time managing the vendor as the vendor saves them.
Feature release governance. Twice a year, someone needs to review the release notes, assess impact, coordinate testing, and make adopt-or-defer decisions for each relevant feature. On paper, the Workday team owns this. In practice, many of the decisions require business input (should we adopt the new absence feature or keep the current configuration?) and testing requires cross-functional coordination. If the governance for feature releases isn't explicit, the default is to defer everything, which means the organisation falls further behind the platform's capabilities with every release cycle.
Roadmap ownership. Who decides what the organisation does with Workday next year? Not at a tactical level, but strategically. Which modules get adopted? Which processes get redesigned? Which integrations get rebuilt? If no one owns the Workday roadmap, the platform becomes a static system that degrades in relevance over time as the business changes around it.
The Role Most Operating Models Are Missing
Across all of these accountability gaps, there's a common thread: the absence of someone who owns the Workday outcome across vendors, teams, and time horizons.
Not a ticket manager. Not a system administrator. Not a single-function lead who happens to have "Workday" in their title. Someone whose explicit accountability is ensuring the organisation gets sustained value from its Workday investment.
This role needs to be able to ask the questions that don't have an obvious owner. Who owns this risk? What assumption are we making about capacity that hasn't been validated? What breaks if we approve this enhancement request without assessing the downstream impact? Is this aligned to the roadmap we agreed with the steering group, or are we drifting?
When that role doesn't exist, decisions default to whoever is closest to the problem or loudest in the room. Enhancement requests get approved based on who submits them rather than strategic priority. Configuration changes get made without impact assessment. The AMS partner's backlog is managed by the AMS partner. Feature releases are deferred by default because nobody owns the adoption decision.
That's not an operating model. That's survival. And survival is expensive when you're paying enterprise licence fees for a platform you're using at a fraction of its capability.
How to Know If Your Operating Model Is Failing
If you're more than six months post-go-live, these are the diagnostic questions that reveal whether your operating model is holding or quietly breaking down.
Is your team spending more than 70% of their time on reactive work? If the answer is yes, your run workstream has consumed your improve and evolve capacity. The operating model has collapsed into a support function, and the optimisation work that would reduce the reactive burden isn't happening.
Do you have a written, current Workday roadmap that your steering group has approved? Not a list of open tickets. A strategic roadmap that defines what the organisation will do with Workday over the next twelve to eighteen months. If this doesn't exist, nobody owns the platform's direction.
Can you name the single person accountable for each of your top ten business processes in Workday? Not the team. Not the department. A named individual. If you can't, your cross-functional accountability is implied rather than designed, and it will fail the next time a process breaks across organisational boundaries.
How are you handling feature releases? If the answer is "we defer most of them" or "we don't have a formal process," you're falling behind the platform with every release cycle. The gap between what your system does and what it could do is widening twice a year.
Is your AMS partner's work prioritised by your team or by their own backlog management? If the AMS partner is self-directing, you've outsourced prioritisation to a vendor whose incentive is throughput, not outcome alignment.
Honest answers to these five questions will tell you more about your operating model's health than any maturity assessment framework.
How We Help at 360 HCM
Operating model design and governance is a natural extension of the programme oversight work we do through COMPaaS. Many of our clients engage us during implementation and continue into post-go-live precisely because the transition from programme to operations is where most accountability structures break down.
We help organisations design operating models that are built for continuous change rather than steady state. We establish the governance structures that protect capacity across run, improve, and evolve workstreams. We define accountability clearly, especially in the cross-functional and vendor management areas where ambiguity causes the most damage. And we provide the experienced oversight that ensures the operating model adapts as the organisation's needs change, rather than calcifying around the assumptions that were made at go-live.
We do this effectively because we've seen what happens when operating models are left to evolve on their own. They don't evolve. They erode. And the cost of that erosion compounds quietly until the organisation is paying millions in licence fees for a system that's being used as an expensive record-keeper.
Where to Start
If any of the patterns in this post feel familiar, or if you're approaching go-live and want to design your operating model before the pressure of production forces you into reactive mode, start with a conversation.
Our free programme risk review covers operating model readiness as part of its assessment. It's a focused 30-minute discussion that will surface the structural gaps most likely to cause problems in your first twelve months of live operation. Within 24 hours, you'll receive a written findings summary and our top three recommendations.
