I don't publish client work. Diagnostics are confidential by default — clients share sensitive operating reality, and that stays between us.

Below is an illustrative example that shows the depth, structure, and voice of an Engineering Scale Diagnostic without exposing anyone's organization. It is not from a specific client. It is a representative example of how I write, what I look for, and how I prioritize findings.

Fictional context: Series B marketplace, 55 engineers organized into 6 squads, delivery slowing after a platform rearchitecture. Founder-CEO commissioned the Diagnostic six months after the rearchitecture shipped, when the next roadmap had slipped by nine weeks.

Delivery has slowed because the rearchitecture shifted ownership boundaries faster than the org structure could absorb them. Three squads now sit on overlapping slices of the payments domain, each believing it owns the primary path. Planning cadence did not adapt: sprint goals are still set per-squad, which amplifies the overlap rather than resolving it. AI tooling has spread unevenly; three engineers are multiplying their output while the rest of the org produces roughly the same volume as before, at slightly lower quality, widening the variance band.

The fastest unlock is a clean ownership split of the payments domain, paired with a cross-squad planning rhythm on the two shared integration points.

1. Payments ownership is contested. Three squads each have documented ownership of parts of the payments flow. The rearchitecture created this accidentally by splitting what used to be one service. Every cross-squad change now requires coordination no one planned for.

2. Planning cadence is misaligned to the new boundaries. Sprint goals are still set per-squad, inherited from the pre-rearchitecture structure. Cross-squad dependencies are tracked in a shared doc that is updated once per sprint. Most of the roadmap slip traces to this single mismatch.

3. AI usage is driving variance, not leverage. Three engineers (two on one squad, one on another) have integrated AI tooling into their daily workflow and are producing roughly 2x their previous output. The rest of the org has not. The review load their output creates is climbing, and code quality on the high-volume PRs is slipping because reviewers are stretched.

4. Manager span is uneven. Two engineering managers carry eight direct reports each. Two others carry three. The uneven span is creating different 1:1 cadences, different career development depth, and visible attrition risk on the overloaded teams.

Week 1. Split the payments domain along a single clear axis: auth-to-checkout vs. settlement-to-reconciliation. Assign each half to one squad. Document the integration contract between them in the shared engineering wiki. No further cross-squad work on payments until the split is agreed.

Week 2. Stand up a weekly 30-minute cross-squad planning rhythm for the two payments squads and the one platform squad that owns the shared integration points. Agenda is fixed: dependencies for this sprint, dependencies for next sprint, open contract questions.

Week 3. Rebalance manager spans. Move two direct reports from each overloaded manager to the two underloaded managers. Document the rationale and handoff plan. Do not change reporting lines mid-sprint; target the next sprint boundary.

Week 4. Pilot an explicit AI usage norm: high-output engineers document their AI workflows once, and the rest of the org opts in to replicate. Measure review turnaround and defect rate across the month. Do not mandate adoption; measure voluntary uptake as a signal.

If the payments split happens but the planning rhythm doesn't, the split won't hold: teams will revert to informal coordination and the contested ownership will re-emerge within two sprints.

If manager span is rebalanced without a real handoff conversation, trust with the moved reports will erode. This is the single highest-risk move in the plan.

If AI usage norms are mandated rather than invited, the three current power users will either disengage or stop sharing their workflows. The variance will widen, not narrow.

The variance problem in this fictional org is the most common AI adoption pattern I see. Tooling is spreading through the team the way consumer apps do: early adopters find it, the early majority adopts slowly, and laggards ignore it. The org-level effect is that output variance widens before it converges.

The durable fix is to treat AI workflow documentation as a shared engineering practice, not as individual productivity. This takes 4–6 weeks to show up in throughput metrics and 2–3 months to show up in quality metrics.

This is an illustrative example. The company, the numbers, and the findings are fictional. Real Diagnostics are tailored to the specific organization's structure, history, and signals. The depth and format are representative of what I produce; the content is not.

I do not publish client work. If you engage me, your findings stay between us.

Interested in a real Diagnostic? Start a conversation →