We introduced AI tools to the delivery team in Q2.

Six project managers. The same tool, the same training, the same prompts. I tracked the outcomes over the following six months with more attention than anyone knew I was paying.

By month 4, a pattern had emerged that I had not expected and could not explain at first:

The PM who had always struggled with structure was producing better plans, cleaner risk logs, and more consistent client communications than I had seen from her in two years.

The PM who had always been exceptional, the person I would put on the most complex programme without hesitation, had started producing work that felt different. Less considered. More templated. Faster, but somehow less right.

The AI had not made the team uniformly better. It had made one person significantly better and one person measurably worse. And the mechanism, once I understood it, was identical in both cases.

Today’s Menu

🚨 The problem: The amplification mechanism, or why AI does not improve delivery management but amplifies whatever is already there

💸 What it costs: What I observed specifically in both PMs over six months of AI tool adoption

💡 The fix: The governance structure that captures the upside without the risk

⚠️  Why AI improves some people and degrades others

The mechanism is not complicated once you see it. AI tools for project management do one thing extremely well: they apply consistent structure to unstructured thinking. They take whatever the PM brings to them and organise it, format it, and surface it in a professional form.

For a PM who struggles with structure, who has good instincts and relevant experience but produces inconsistent outputs, this is transformative. The AI provides the scaffolding that was always missing. The PM's underlying capability now has a reliable vehicle. The output improves dramatically.

For an exceptional PM whose value is not structural competence but the specific, nuanced judgement that comes from deep contextual understanding, the same scaffolding becomes a liability. The AI produces a structured, professional output that reflects the template more than it reflects the situation. Under time pressure, the PM accepts the output rather than interrogating it. The work becomes less specific, less contextual, less right.

AI amplifies existing patterns. It makes organised thinkers more organised and scattered thinkers more organised. But it also makes nuanced thinkers less nuanced because it replaces contextual judgement with structural consistency.

What the struggling PM gained vs. what the exceptional PM lost:

  • Gained: consistent structure in every risk log and status report. Lost: the specific contextual framing that made her reports immediately actionable.

  • Gained: complete action registers after every meeting. Lost: the non-obvious connections between meeting items that required genuine synthesis.

  • Gained: professional tone in difficult client communications. Lost: the precise language choices that reflected deep knowledge of this specific client.

  • Gained: speed in producing planning artefacts. Lost: the slow, considered quality that made her plans genuinely reliable under uncertainty.

🤔  Quiz

You are introducing AI writing and planning tools to a team of six project managers. Which of the following is the single most important thing to establish before rollout?

A)  A standard prompt library so the team uses the tools consistently
B)  A time-tracking mechanism to measure productivity improvement
C)  A mandatory review gate: every AI-generated output must be reviewed by the PM before sending, with at least one specific contextual addition
D)  A training programme on how to use the tools effectively

👉  Answer at the end of this issue

💡  The fix

Three governance practices that let you capture the upside of AI in delivery management without the specific risk it creates for your most capable people.

  Fix 1: The tiered AI use protocol — match the tool to the task, not the person

Not all PM tasks carry the same contextual risk. The key is matching AI involvement to the nature of the task:

High AI value (structural tasks):

  • Meeting notes and action extraction
  • Standard status report formatting
  • Template-based planning documents
  • Compliance documentation

Low AI value (contextual tasks):

  • Client communications during sensitive moments
  • Risk assessments requiring situational judgement
  • Stakeholder updates during a programme crisis
  • Programme close summaries requiring honest reflection

The tiered protocol does not restrict AI use; it specifies where an AI draft can be used nearly as-is and where the final document must be substantially rewritten from that draft. The distinction is simple: if getting it wrong would damage a relationship or miss a critical nuance, it is a contextual task.
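The routing rule is small enough to write down, which is worth doing if you want the protocol to survive contact with a busy week. Here is a minimal sketch in Python; the task names, tier labels, and review rules are illustrative placeholders, not a standard, and every team should substitute its own classification.

```python
from enum import Enum

class Tier(Enum):
    STRUCTURAL = "structural"   # AI draft may go out after a light PM review
    CONTEXTUAL = "contextual"   # PM writes from scratch; AI is a research aid only

# Illustrative classification only -- build your own list for your team.
TASK_TIERS = {
    "meeting_notes": Tier.STRUCTURAL,
    "status_report_formatting": Tier.STRUCTURAL,
    "template_planning_doc": Tier.STRUCTURAL,
    "compliance_doc": Tier.STRUCTURAL,
    "sensitive_client_comms": Tier.CONTEXTUAL,
    "situational_risk_assessment": Tier.CONTEXTUAL,
    "crisis_stakeholder_update": Tier.CONTEXTUAL,
    "programme_close_summary": Tier.CONTEXTUAL,
}

def review_requirement(task: str) -> str:
    """Return the review rule for a task, defaulting to the stricter tier."""
    tier = TASK_TIERS.get(task, Tier.CONTEXTUAL)  # unknown tasks are contextual
    if tier is Tier.STRUCTURAL:
        return "AI draft allowed; PM reviews and adds context before sending."
    return "PM writes from scratch; AI may be used as a research aid only."

print(review_requirement("programme_close_summary"))
# -> PM writes from scratch; AI may be used as a research aid only.
```

The one design choice worth copying is the default: any task not explicitly classified is treated as contextual, which matches the rule above about relationship damage and critical nuance.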

BEFORE

No tiered protocol. AI tools used uniformly across all task types. Exceptional PM produced a programme close summary that read like a template. Client noticed. First negative comment on written communication quality in 2 years.

AFTER

Tiered protocol introduced. Close summary identified as a contextual task. PM required to write from scratch using AI as a research aid only. Client response: "Best close summary we have ever received from an agency partner."

  Fix 2: The contextual quality gate — one question before any AI output leaves the team

A single question, applied to every AI-generated output before it goes to a client, a governance forum, or a proposal:

"If I removed every sentence that could have been written about any programme by anyone and kept only the sentences that could only have been written about this specific programme, at this specific moment, with this specific client — what would remain?"

The answer should be most of the document. If it is fewer than three sentences, the document is a template. Templates communicate that the author has not thought specifically about the reader's situation. In delivery relationships, this is the fastest way to erode the quality of engagement at the moments that matter most.
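The judgement in that question cannot be automated, but the bookkeeping around it can. Here is a minimal sketch, assuming a reviewer hand-tags each sentence as programme-specific or generic; the function name, threshold, and example report are illustrative only.

```python
# A reviewer tags each sentence: True if it could only have been written
# about this specific programme, False if it could apply to any programme.
# The three-sentence threshold mirrors the rule of thumb above.

def quality_gate(tags: list[bool], min_specific: int = 3) -> dict:
    specific = sum(tags)
    generic = len(tags) - specific
    return {
        "specific": specific,
        "generic": generic,
        "specific_share": specific / len(tags) if tags else 0.0,
        "is_template": specific < min_specific,
    }

# Example: a ten-sentence report with only two programme-specific sentences.
report = [False, False, True, False, False, False, True, False, False, False]
print(quality_gate(report))
# -> {'specific': 2, 'generic': 8, 'specific_share': 0.2, 'is_template': True}
```

The same two counts give you the ratio used in the task at the end of this issue.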

BEFORE

No quality gate. AI-generated risk log sent to client in month 4. The client replied, asking if it was a template. It was. A conversation about the quality of programme insight followed.

AFTER

Contextual quality gate introduced. Review time per document: 8 minutes on average. Output quality, as rated by the programme director in blind review, improved significantly in the first month.

  Fix 3: The capability audit — use AI adoption as a diagnostic, not just a rollout

The most strategically valuable use of AI tool adoption is not productivity measurement. It is a diagnostic. AI adoption reveals which members of your team have strong underlying capability that was previously limited by structural weakness, and which have polished structural output that was previously masking limited underlying judgement.

The PM who improved dramatically with AI tools was showing you where the ceiling was before AI and where it is now. The PM who declined in quality was showing you that her previous quality was substantially structural and that the structure was carrying more of the work than you had realised.

This information is extraordinarily valuable for development planning, programme staffing, and honest assessment of individual capability. It is the most accurate diagnostic most delivery teams have ever had access to, yet most teams use it only to measure hours saved.

BEFORE

AI adoption treated as a productivity initiative. No capability diagnostic run. One PM's decline attributed to personal issues. The root cause, AI amplification of structural over-reliance, went unidentified for 4 months.

AFTER

Capability audit run at month 3. Four PMs showed structural improvement; two showed contextual decline. Individual development plans adjusted. Both declining PMs improved output quality within six weeks with targeted coaching.

Task for you

🎯  What to do this week

This week, look at the last three pieces of written work produced by your team that used AI assistance:

  • How many sentences could have been written about any programme, by anyone, anywhere?

  • How many sentences could only have been written about this specific programme, by someone who actually knows it?

The ratio between those two numbers is the most accurate measure of whether AI is helping your team think or replacing your team's thinking. Both outcomes are possible with the same tool. The difference is whether there is a governance structure in place that knows which one is happening.

Want the tiered AI use protocol — the task classification framework and the quality gate template?

Reply "amplify" to this email and I'll send it directly to you.

🌐  Around the web this week

⚡  1 tool:  Notion AI with custom templates — the most useful feature for delivery teams is the custom template function that lets you build a prompt structure with mandatory blank fields requiring PM input before the document is considered complete. Forces contextualisation without adding a separate review step.

📊  1 number:  A Stanford study on AI-assisted writing found that users consistently rated AI-generated text as more persuasive and professional than human-written text in blind comparisons — even when the human-written text was more accurate and contextually appropriate. AI outputs can appear more credible than they are. Your review process needs to account for this.

💬  1 quote:  "A tool is only as good as the person using it." The interesting inversion for AI delivery tools: the person producing the best outputs with the tool is not necessarily the most capable person — it is the one whose underlying judgement is most clearly visible in the output despite the tool.

👉  Quiz answer

C — a mandatory review gate with at least one specific contextual addition

Option A produces the amplification problem. Consistent prompts produce consistently structured outputs that reflect the template, not the situation. Option B measures the wrong thing. Speed is not the value. Quality is. Option D is necessary but insufficient without the structural protection.

Option C addresses the root cause directly. The problem is not AI-generated content; it is unreviewed AI-generated content. The "at least one specific contextual addition" requirement forces the PM to engage with the output rather than forward it. That engagement is where their judgement re-enters the process.

👉  The epilogue

The PM who declined in quality with AI tools is now one of the strongest communicators on the team. The coaching conversation that followed the capability audit was uncomfortable — she had not known that her previous quality was partially structural — but it produced rapid and genuine improvement.

The PM who improved dramatically is now handling programmes she would not have been considered for 18 months ago. The AI tools gave her the structural platform that her capability had always deserved.

The tool was the same. The outcomes were opposite. The difference was in what each person brought to the tool — and whether the governance structure around the tool made that difference visible quickly enough to act on.

Think of the strongest PM on your team. Have you noticed any change in the quality or specificity of their written outputs since AI tools were introduced? What might that change be telling you?

Hit reply. I read everything.

That’s it for this week.

Keep showing up, keep cheering each other on, and, as always, run happy! 🏃‍♂️

P.S. Details in this issue have been changed to protect client confidentiality. The situation and the lesson are real.

New here?

If this issue named something you have been watching but could not describe, forward it to one person who needs to read it.
