Find common ground, not common tools
A practical guide to aligning teams around common practices and golden paths
I was speaking with a CIO last week about their desire to transform their business away from individual, discrete projects towards something that fosters re-use and greater collaboration across teams. It’s a common ask, so I thought I’d share my notes from that meeting.
Over the past decade, the IT industry has seen developers as the new kingmakers and has done all it can to boost the productivity of these teams. A large part of the Agile movement was about empowering development teams to choose their own ways of working to maximise value in their specific environment. Meanwhile, DevOps brought an explosion of tooling, much of it solving the same problems in slightly different ways. The problem is that whilst we were optimising these local systems, organisations often lost track of the bigger picture, and we now find ourselves with fragmented “islands of value”.
The challenge presents initially as a tool consolidation exercise. But how do we convince developers who are perfectly happy with what they're doing to coalesce around a central set of tools?
Know where you’re going
Every transformation must begin with its “why”. Before you consolidate anything, you must get alignment on what success actually looks like. What are the benefits to the business? to teams? to individuals? Beyond shallow anecdotes and costs, why should we all go through the pain of migration, learning new tools, and making compromises?
The metrics you choose should map directly to the outcomes the business expects from this change. Typically, those fall into a few categories:
For delivery speed, DORA metrics are the starting point — deployment frequency, lead time for changes, mean time to recovery, change failure rate. They’re straightforward to implement and widely understood. SPACE is more thorough, but it’s not easy, and “not easy” often means “never actually adopted.” Start where the friction is lowest and adapt from there.
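To show how low the friction really is, here is a minimal sketch of the four DORA metrics computed from deployment and incident records. The record shapes and field names are illustrative assumptions, not a standard schema — in practice you would pull these from your CI system and incident tracker.

```python
from datetime import datetime, timedelta
from statistics import median

# Illustrative deployment records: when the change was committed,
# when it reached production, and whether it caused a failure.
deployments = [
    {"committed": datetime(2024, 5, 1, 9), "deployed": datetime(2024, 5, 1, 15), "failed": False},
    {"committed": datetime(2024, 5, 2, 10), "deployed": datetime(2024, 5, 3, 11), "failed": True},
    {"committed": datetime(2024, 5, 4, 8), "deployed": datetime(2024, 5, 4, 12), "failed": False},
]
# Illustrative incident records: detection and resolution timestamps.
incidents = [
    {"detected": datetime(2024, 5, 3, 11), "resolved": datetime(2024, 5, 3, 13)},
]

days_in_window = 7
deployment_frequency = len(deployments) / days_in_window          # deploys per day
lead_time = median(d["deployed"] - d["committed"] for d in deployments)
change_failure_rate = sum(d["failed"] for d in deployments) / len(deployments)
mttr = sum((i["resolved"] - i["detected"] for i in incidents), timedelta()) / len(incidents)
```

Four metrics, four lines of arithmetic over data you almost certainly already collect — which is exactly why DORA is the pragmatic starting point.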
For productivity, look at the DevEx framework. How fast are your feedback loops? How much cognitive effort does it take to get things done? For any change to deliver value, the people involved must perceive the improvement.
For efficiency, you need to look more holistically at a value stream map of the end-to-end flow of work. Organisations are very good at drawing boxes connected by lines, and those within the boxes are generally good at improving things within their four corners. In my experience, the biggest efficiencies are found in the lines, the handoffs between teams.
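A sketch of the point about the lines rather than the boxes: given (hypothetical) timestamps for when each team actually worked on an item, flow efficiency is simply active time divided by total elapsed time, and the remainder is handoff wait. The stage names and times below are invented for illustration.

```python
from datetime import datetime, timedelta

# Hypothetical timeline of one work item crossing three teams:
# (stage, work started, work finished).
stages = [
    ("dev",     datetime(2024, 5, 1, 9),  datetime(2024, 5, 2, 17)),
    ("qa",      datetime(2024, 5, 6, 10), datetime(2024, 5, 7, 12)),
    ("release", datetime(2024, 5, 10, 9), datetime(2024, 5, 10, 11)),
]

# Time spent inside the boxes.
active = sum((end - start for _, start, end in stages), timedelta())
# Time spent on the lines: gaps between one stage finishing and the next starting.
waiting = sum(
    (stages[i + 1][1] - stages[i][2] for i in range(len(stages) - 1)),
    timedelta(),
)
flow_efficiency = active / (active + waiting)
```

In this made-up example the item spends 60 hours being worked on and 158 hours waiting between teams — a flow efficiency under 30%, which is sadly not unrealistic.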
For improved security and compliance, track vulnerability trends across the whole portfolio, not team by team. A system breach is rarely one catastrophic failure; it’s typically a chain of seemingly innocuous issues that nobody saw in combination.

Find common ground, not common tools
We all have our preferences on ways of working and the best tool for the job, but compliance can’t be optional, and every team or project must meet a baseline standard, regardless of how individual teams prefer to work. As a business, what are our non-negotiables? What rules are sufficiently steeped in best practice and common sense that nobody can deny them?
I suggest the following as a good guide to SDLC best practice:
No credentials in code. Accidentally committing secrets to a repository is one of the most common and most damaging security incidents in software development. The fix is automatic scanning that blocks the commit before it lands, not a policy that relies on developers remembering.
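In practice you would wire a dedicated scanner (gitleaks and similar tools ship hundreds of rules) into the server so the push is rejected, but the core mechanism is simple enough to sketch: match staged content against known credential shapes and refuse the commit on any hit. The three patterns below are a small illustrative subset.

```python
import re
import sys

# A few well-known credential shapes; real scanners carry far larger rule sets.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                                  # AWS access key ID
    re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),          # PEM private key
    re.compile(r"(?i)(?:password|secret|token)\s*=\s*['\"][^'\"]{8,}['\"]"),
]

def find_secrets(text: str) -> list[str]:
    """Return the secret-like strings found in a blob of text."""
    return [m.group(0) for p in SECRET_PATTERNS for m in p.finditer(text)]

def check_files(paths: list[str]) -> int:
    """Pre-commit entry point: exit non-zero if any staged file leaks a secret."""
    leaked = False
    for path in paths:
        with open(path, encoding="utf-8", errors="ignore") as f:
            hits = find_secrets(f.read())
        if hits:
            leaked = True
            print(f"{path}: possible secret detected", file=sys.stderr)
    return 1 if leaked else 0
```

The crucial property is that the check runs automatically and returns a non-zero exit code, so the commit (or pipeline) fails without any human needing to remember anything.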
No direct changes to code. Protected branches ensure nothing reaches main without going through a merge request. No exceptions, no shortcuts under delivery pressure.
Nobody ships without a second pair of eyes. Mandatory code review before anything merges to main is basic engineering hygiene.
Deployments to production require explicit approval. In highly regulated environments, compliance may require that the person who writes the code cannot be the only person who decides it goes live. This is separate from code review (which is a quality control); this is separation of duties.
Licence compliance runs automatically. Most teams have no visibility into the licence obligations sitting inside their dependency tree. Flag incompatible licences, block the genuinely problematic ones. A software audit from KPMG is a lot less fun than it sounds.
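The policy itself is small once the dependency tree is visible. As a sketch, assuming dependencies have been resolved to SPDX licence identifiers (real tooling would read them from a lockfile or an SBOM), the check splits the tree into hard blocks and needs-review flags:

```python
# Illustrative policy using SPDX licence identifiers.
ALLOWED = {"MIT", "Apache-2.0", "BSD-3-Clause"}
DENIED = {"AGPL-3.0-only", "SSPL-1.0"}

def check_licences(dependencies: dict[str, str]) -> tuple[list[str], list[str]]:
    """Split {package: licence} into hard blocks and needs-review flags."""
    blocked = [name for name, lic in dependencies.items() if lic in DENIED]
    flagged = [
        name for name, lic in dependencies.items()
        if lic not in DENIED and lic not in ALLOWED
    ]
    return blocked, flagged
```

Which licences land in which set is a legal decision, not an engineering one — the engineering job is making sure the check runs on every build.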
Mandatory security vulnerability scanning. Both your code and its third-party dependencies bring risk. Scanning must be automatic and enforceable, not optional per-project configuration that teams can quietly skip. You may additionally agree that code cannot be promoted to production with known vulnerabilities.
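The "cannot be promoted with known vulnerabilities" rule reduces to a severity threshold applied to scanner output. A minimal sketch, assuming findings have already been normalised to a common severity scale:

```python
SEVERITY_RANK = {"low": 1, "medium": 2, "high": 3, "critical": 4}

def gate(findings: list[dict], fail_at: str = "high") -> bool:
    """Allow promotion only if no finding is at or above the threshold."""
    threshold = SEVERITY_RANK[fail_at]
    return all(SEVERITY_RANK[f["severity"]] < threshold for f in findings)
```

Where exactly to set `fail_at` is the kind of non-negotiable the business has to agree once, centrally — not something each team tunes for itself.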
Infrastructure is code and must be treated like it. If teams are writing Terraform, Helm charts, or Kubernetes manifests, a misconfiguration is as dangerous as an application vulnerability. IaC scanning catches overly permissive IAM roles or open security groups before they’re deployed.
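Real IaC scanners (Checkov, tfsec and friends) cover hundreds of rules, but the shape of a single rule is easy to illustrate. This hypothetical check walks parsed resources (e.g. a Terraform plan rendered to JSON — the structure here is an assumption for illustration) and flags ingress rules open to the whole internet, with a crude carve-out for HTTPS:

```python
def scan_security_groups(resources: list[dict]) -> list[str]:
    """Flag ingress rules open to 0.0.0.0/0, treating port 443 as intentionally public.
    `resources` is assumed to be IaC already parsed into dicts."""
    findings = []
    for res in resources:
        for rule in res.get("ingress", []):
            if "0.0.0.0/0" in rule.get("cidr_blocks", []) and rule.get("port") != 443:
                findings.append(f"{res['name']}: port {rule['port']} open to 0.0.0.0/0")
    return findings
```

The point is that the rule runs in the pipeline, before `apply` — catching the open database port in review rather than in an incident report.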
Every project produces a Software Bill of Materials (SBOM). A full inventory of every component in a delivered artefact. You can’t manage what you can’t see, and increasingly, your customers and regulators will demand this.
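In practice the SBOM falls out of the build via tooling, but the artefact itself is just structured data. A minimal sketch of the shape, borrowing field names from the CycloneDX format (heavily simplified — a real SBOM also carries hashes, suppliers, and the dependency graph):

```python
import json

def make_sbom(artefact: str, components: dict[str, str]) -> str:
    """Emit a minimal CycloneDX-style SBOM for an artefact's {name: version} components."""
    doc = {
        "bomFormat": "CycloneDX",
        "specVersion": "1.5",
        "metadata": {"component": {"name": artefact, "type": "application"}},
        "components": [
            {"type": "library", "name": name, "version": version}
            for name, version in sorted(components.items())
        ],
    }
    return json.dumps(doc, indent=2)
```

Because it is machine-readable, the portfolio-wide questions — "which services ship this vulnerable library?" — become a query rather than an audit.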

Making the right thing, the easy thing
Individual teams reinventing the pipeline is waste that accumulates silently. Most organisations have unknowingly invested an enormous amount of engineering time in standard plumbing. Most of the development teams I’ve met would like to think they’re special, but the truth is that the process of building, testing, and deploying software is a solved problem.
Platform teams should build components that development teams can adopt and combine to create a “golden path” for software delivery that is faster, easier, and better than anything a team would build on their own.
Standard project scaffolding means every new project starts from a known-good baseline. Correct structure, pre-wired pipeline, compliant from day one. A developer spins up a new project and it already has scanning, testing, and deployment configured. No tickets, no waiting, no dusty PDFs to read.
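Scaffolding is usually delivered through a templating tool (cookiecutter, `gitlab project create` from a template, and so on), but the mechanism is worth seeing stripped down. A sketch with an invented, deliberately tiny baseline — the file names and pipeline includes are placeholders, not a real platform’s layout:

```python
from pathlib import Path

# Hypothetical known-good baseline every new project starts from.
TEMPLATE = {
    "README.md": "# {name}\n",
    "src/{name}/__init__.py": "",
    "tests/test_smoke.py": "def test_smoke():\n    assert True\n",
    ".ci/pipeline.yml": "include: [platform/build, platform/scan, platform/deploy]\n",
}

def scaffold(name: str, root: Path) -> list[Path]:
    """Materialise the baseline so a new project is compliant from its first commit."""
    created = []
    for rel, body in TEMPLATE.items():
        path = root / rel.format(name=name)
        path.parent.mkdir(parents=True, exist_ok=True)
        path.write_text(body.format(name=name))
        created.append(path)
    return created
```

Note that the pipeline file is part of the template: compliance is inherited, not bolted on later.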
Reusable pipeline components. Build, test, scan, and deploy solved once, used everywhere. Teams shouldn’t each be independently solving the same CI problems. When the platform team improves a component, every consumer benefits automatically. This is the compound interest of platform thinking.
Managed, shared CI infrastructure eliminates the overhead of teams maintaining their own runners. Shared, platform-managed compute is more reliable, more efficient, and easier to audit.
Controlled dependency sources provide a managed, local cache of dependencies, rather than teams pulling directly from the public internet.
The psychology here matters as much as the technology. Golden paths work when they’re genuinely better than the alternative. If the platform offering is slower, more complex, or less capable than what a team can build themselves, adoption will be a constant battle. The platform team has to think like a product team; developers are customers, and will vote with their feet.
Breaking the stalemate
When I walk through this framework with organisations, the most common reaction isn’t disagreement, it’s “we know all this, we just haven’t done it”.
The gap between knowing and doing is almost always organisational, not technical. What’s missing is someone able to step above a tool consolidation exercise, define the non-negotiables, and hold teams to account. Once the “do nothing” option is removed, building re-usable components becomes the obvious answer.
The goal isn’t uniformity for its own sake. It’s giving teams the freedom to focus on what makes their work unique by removing the need to reinvent everything that doesn’t.
What would your engineering organisation look like if every team could focus entirely on their domain, knowing the platform had everything else covered?



