
Who's Watching the Watchers? Why Workday Governance Decides Your Go-Live Outcome

  • Feb 25

[Image: Workday governance overseeing the project, the vendor, and the timeline]

Every Workday programme starts with good intentions. Strong executive sponsorship. An experienced systems integrator. A signed statement of work that feels comprehensive. A timeline that looks achievable.


And yet, many of the most painful failures I've seen didn't come from bad technology or resistant end users. They came from a simple question that no one asked early enough: who is actually protecting the client's interests once the project is underway?


Not who is running the plan. Not who is billing the hours. Not who is delivering the configuration. Who is watching the watchers?


Governance Is Not Bureaucracy - It's Risk Management


Governance gets a bad reputation in programme delivery. It's often framed as overhead - something that slows teams down, adds meetings to already packed calendars, and creates reporting that no one reads.


That's a fundamental misunderstanding of what governance does on a Workday programme.


Governance is not about meetings. It's about control. It's not about reporting. It's about accountability. It's not about mistrust. It's about clarity.


Without strong governance, decisions drift. Scope erodes quietly - not through dramatic change events, but through small concessions made in design workshops that no one tracks cumulatively. A business process gets added here. An integration requirement gets expanded there. Each one feels reasonable in isolation. But six months later, the programme is three months behind and £400K over budget, and no one can point to the moment it went wrong - because there wasn't one moment. There were two hundred small ones.


Good governance creates friction early so you avoid failure later. It forces decisions to be documented, assumptions to be validated, and trade-offs to be made consciously rather than by default.


Your Implementation Partner Is Not Your Watchdog


This is the part that makes people uncomfortable, but it matters.


Your implementation partner is incentivised to deliver the work they're contracted for, to the timeline they committed to. They're measured on utilisation, margin, and throughput - and they have new projects waiting to be staffed with those same consultants. That doesn't make them bad actors. It makes them a business.


But it does mean several things that rarely get acknowledged:


They will not slow things down to protect you if slowing down means their consultants sit idle for two weeks while your team catches up on decisions they haven't made. They're not incentivised to challenge optimistic timelines you approved, because those timelines determine their revenue forecast. They're not incentivised to highlight downstream risk if raising it threatens delivery momentum - especially if the risk sits in a phase they won't be responsible for.


I've seen this play out dozens of times. An SI programme manager raises a concern in an internal meeting. Their engagement partner overrules it because escalating the issue might threaten the client relationship or trigger a difficult conversation about additional budget. The concern gets documented in an internal risk log the client never sees. Three months later, it becomes a production defect.


When a single vendor controls the plan, the resources, the estimates, and the change narrative, governance becomes performative instead of protective. The steering committee receives green status reports. The executive sponsor feels confident. And beneath the surface, risks accumulate without scrutiny.


That's not oversight. That's theatre.


Client-Side Advocacy Functions Like Insurance


Think about how you insure anything that matters.


You insure your house not because you expect it to burn down, but because you understand the cost if it does. You insure your car not because you plan to crash, but because risk exists regardless of how carefully you drive.


A client-side advocate on a Workday programme functions the same way. They're there to ask the questions others aren't incentivised to ask. They're there to slow decisions that feel rushed - like approving a change order in 48 hours because the SI says it's "blocking delivery" when in reality it's blocking their preferred sequencing. They're there to challenge scope trades that look harmless today but create technical debt that explodes six months after go-live.


They're independent. They sit on the client side of the table. And most importantly, they answer to outcomes, not billable hours.


The difference is tangible. I was brought into a programme where the SI had submitted a change order for £120K of additional integration work, justified by a "newly discovered technical dependency." When I reviewed the original SOW and the design documents, the dependency wasn't new at all - it was documented in the SI's own architecture review from month two. The change order was the SI correcting their own underestimation and passing the cost to the client. Without someone on the client side who knew where to look and what to look for, that change order would have been approved in the next steering committee as a matter of course.


That single intervention paid for the entire engagement.


What Happens When No One Is Watching Closely


Here's what I see repeatedly when governance is weak or symbolic:


Timelines compress without revisiting assumptions. The programme falls behind in design because business decisions take longer than the SI estimated. Rather than extending the timeline and having an honest conversation with the sponsor, the SI compresses testing to protect the go-live date. The plan still shows the same end date, but the testing window that was originally eight weeks is now four. No one formally approved that trade-off because no one presented it as a trade-off. It just happened.


Design compromises get deferred as "phase two items" that never return. During design workshops, the SI recommends simplifying a complex business process to keep the programme on track. The business reluctantly agrees, on the understanding that the full requirement will be addressed in phase two. But phase two hasn't been funded, scoped, or committed to. It exists only as a verbal promise in a workshop. Twelve months later, the business is running a workaround they were told would be temporary, and there's no budget or appetite for a phase two that was never formally planned.


Change orders feel reactive instead of strategic. A change order lands for £80K of additional data migration work, justified by "unexpected complexity in the legacy system." But legacy data complexity is never unexpected - it's one of the most predictable risks in any Workday deployment. An experienced governance lead would have insisted on a data assessment in the first month and built contingency into the migration estimate. Instead, the client is paying for the SI's optimistic scoping.


Executives hear green while delivery teams feel red. This is the most dangerous pattern. The SI's status report shows the programme on track. The PM's weekly update is optimistic. But in the delivery trenches, the integration team knows they're behind. The testing team knows the scripts aren't ready. The data migration team knows the data quality is worse than anyone's acknowledged. These signals exist - they're just not reaching the people who need to hear them because the reporting structure filters reality rather than transmitting it.


None of this happens all at once. It happens quietly. Incrementally. Reasonably.


Until cutover weekend. Until payroll parallel. Until first close.


That's when governance failures surface. And by then, the options are limited and expensive.


What Strong Governance Actually Looks Like


Strong governance doesn't mean mistrusting your implementation partner. It means balancing the room.


It means someone who can say "that risk needs to be documented and owned, not just acknowledged" - and has the authority to ensure it happens. Someone who can say "this dependency isn't assigned to anyone, and if it slips, it takes the integration timeline with it." Someone who can say "we're trading long-term stability for short-term speed — is that a conscious decision the sponsor has approved, or is it happening by default?"


It also means someone who can translate between executives and delivery teams without filtering reality. The sponsor needs to know that testing is behind - not wrapped in qualifiers and caveats, but stated plainly with options and recommendations. The delivery teams need to know that the sponsor's timeline expectation hasn't changed - and what that means for their workload and priorities.


That translation role is rarely fulfilled by the vendor. The SI's programme manager is structurally conflicted - they need to keep their own leadership happy while managing the client relationship. And it shouldn't be left to a client PM who is already consumed by day-to-day task coordination and doesn't have the bandwidth or the seniority to force governance conversations at executive level.


It requires someone whose sole purpose is protecting the client's outcome. Someone with enough Workday delivery experience to recognise the patterns, enough authority to act on them, and enough independence to tell the truth when the truth is uncomfortable.


The Cost of Governance Is Predictable. The Cost of Not Having It Is Not.


Strong governance feels like an investment. Weak governance feels cheaper - until it isn't.


The costs of inadequate governance don't appear in the original business case, but they appear in real life: rework that consumes months of effort. Extended dependency on the SI's application management services because the system wasn't implemented correctly the first time. Delayed value realisation because the go-live was pushed back - or worse, because the go-live happened on schedule but the system wasn't ready. Internal teams burned out by a programme that demanded more of them than anyone planned for. Executive confidence eroded to the point where the next transformation initiative faces institutional resistance before it even starts.


The most successful Workday programmes I've been part of had one thing in common. Someone was explicitly accountable for protecting the client's outcome. Not delivering configuration. Not chasing tasks. Not billing hours. Protecting the programme.


How We Deliver This at 360 HCM


This is the core of what COMPaaS was built for. Our programme oversight service provides fractional, senior governance leadership that sits entirely on the client side of the table. We establish the governance structure, lead steering committees, manage scope boundaries, challenge change orders with an insider's understanding of how they're constructed, and ensure that what reaches the executive sponsor is reality - not a filtered version of it.


We do this because we've sat on the other side. We've been the SI engagement partners managing margin and utilisation. We've seen how governance gets diluted when the same organisation responsible for delivery is also responsible for oversight. And we believe there's a better model - one where the client has their own experienced voice in the room from day one.


Where to Start


If you're planning a Workday programme and governance isn't yet defined, or you're already in a programme and the patterns described above feel familiar, start with a conversation.


Our free programme risk review is a focused 30-minute discussion about your programme's governance structure, current risks, and upcoming milestones. Within 24 hours, you'll receive a written findings summary and our top three recommendations.


No obligation. No sales pitch. Just an honest assessment from someone who has spent 25 years watching what happens when governance works - and what happens when it doesn't.




