The 90% Problem
Projects get to 90% done and stay there. Not because people are slow — because nobody owns the outcome, only the tasks.
One person uses ChatGPT to draft everything. Another person rewrites every AI sentence manually. A third refuses to touch it.
The team is producing more, but work isn't shipping faster. Drafts pile up in review. Quality varies enough that no one's quite sure what the standard is anymore.
This is the pattern in most B2B marketing teams right now. AI accelerated production. It exposed every unclear handoff, every undefined standard, every missing decision owner.
When asked what the rules are, no one has a clear answer.
The problem isn't AI. The problem is the absence of execution standards around it.
AI makes drafting faster. What took two hours now takes twenty minutes.
But here's what didn't change: someone still needs to decide if the draft is good enough. Someone still needs to ensure it sounds like your company. Someone still needs to approve it.
Those steps didn't get faster. In some cases, they got slower.
Why? Because now there's a new question in every review: "Did we check this properly, or did we just trust the AI?"
Quality varies. Widely. Some outputs nail the brief. Others miss the tone entirely. Some are factually wrong. Some are surprisingly good. No one knows how to evaluate which is which consistently.
Voice and tone drift. AI doesn't naturally match your brand. Some people edit heavily. Some don't. The result: content that sounds like it came from three different companies.
A draft comes back from AI. It's maybe eighty percent there. But who decides if eighty percent is publishable? Who owns getting it to ninety-five? Who ensures voice stays consistent across everything you ship?
These questions don't have answers. So everyone makes their own call.
The Slack thread shows the pattern:
"Can we use AI for this?"
"I think so?"
"What about accuracy?"
"Someone should check it."
"Who?"
Silence.
Production accelerated. Execution didn't.
Here's what actually happened when AI arrived.
You didn't implement AI. Your team implemented dozens of individual AI systems.
Every person developed their own approach: their own prompts, their own quality bar, their own sense of what's acceptable. None of it documented. None of it consistent. These are shadow processes: invisible until something breaks.
When a piece of content ships that doesn't sound right, you can't trace why. Was the AI prompt wrong? Did someone not edit enough? Did review miss it? Was it approved too quickly?
The system is invisible. So you can't fix it.
This is the cost of adoption without structure. Not policy documents. Not AI ethics frameworks.
Structure: clear answers to what AI does, how quality gets checked, and who owns the standard.
Without that, AI doesn't accelerate execution. It fragments it.
Standards aren't about control. They're about clarity.
Three questions need answers before AI creates value instead of friction:
What: Where does AI fit in your workflow?
Not "Can we use AI?" but "Where does it make sense, and where doesn't it?"
For most teams, AI works for first drafts rather than final copy, research and summarization, reformatting existing content, and brainstorming. It doesn't work for brand-defining content, anything requiring deep company context, or sensitive communications.
The line isn't arbitrary. It's based on risk and the cost of getting it wrong.
How: What does output need to pass through before it ships?
Standardize the input to stabilize the output.
When every person writes their own prompts, quality varies by who created it. When the team uses standard prompts for common tasks, consistency becomes predictable. The same input structure produces comparable output across blog drafts, social posts, and email subject lines.
This isn't restriction. It's reduction of variance.
New team members don't invent their own approach. They use what already works. The learning curve flattens. Quality checkpoints sit at defined moments: voice review, fact-check, final approval. Clear steps. Clear owners.
Who: Who owns the standard?
Someone owns brand voice compliance. Someone owns accuracy. Someone owns final sign-off. Not a committee. One name per step.
AI drafts. Human refines. One person ensures it's consistent with everything else you ship.
When structure exists, the workflow holds. Standard input produces a draft. Review refines for voice and accuracy. The content lead closes the decision. It ships or it doesn't. No one's guessing if it's okay. The standard is known.
Here's what lack of structure actually costs.
Voice varies by who used AI and how much they edited. You stop sounding like one company. Prospects notice. They might not articulate it, but they feel it.
Every piece becomes a debate. Should we have used AI here? Is this good enough? Who should check it? The questions compound. Review slows. Shipping slows.
Without a standard, quality becomes subjective. What one person thinks is publishable, another thinks needs work. Nothing settles. The bar moves depending on who's reviewing.
And underneath all of it: the person who uses AI heavily thinks the person who rewrites everything is wasting time. The person who doesn't trust AI thinks the heavy users are cutting corners. Neither is wrong. The system is.
These costs don't show up in productivity dashboards. But they appear in every review cycle, every approval delay, every piece that ships and doesn't quite land.
AI sped up drafting. It exposed every unclear handoff, every undefined standard, every missing decision owner.
Without standards, every person has their own AI approach. Quality varies by who created it. When something doesn't sound right, no one's sure where it broke down. The team second-guesses every piece.
When standards exist, the team knows where AI fits. Output goes through the same review steps every time. Voice stays consistent. Errors get caught before publishing.
When something breaks, the fix is clear. Either the prompt needs work, or the review step needs more attention. The system improves instead of repeating the same failure.
Uncertainty drops. Shipping speeds up. Quality becomes predictable.
Don't try to standardize all AI use at once. That creates a policy no one follows.
Start with one use case: the most common one. For most teams, that's blog drafts or social content.
Define the execution standard for that use case. What prompt produces consistent output? Who reviews for voice and accuracy? Who makes the final call before it ships?
Document it. One page. Make it accessible.
Run it for two weeks. See where it works and where people work around it. If people skip review because it takes three days, review needs to move faster. If they skip the prompt because it doesn't work, create a better one.
Then add a second use case. Repeat.
After three use cases, you'll have a pattern. That becomes your approach.
The first time someone's draft gets sent back for not following the standard, there's pushback. After the third time, people use the standard. Quality stabilizes. Friction drops.
Twenty-page AI policy documents don't work. They're full of abstractions about ethics and responsible use, too distant from "I need to draft a blog post today." No one reads them.
Telling people to "use AI responsibly" doesn't work either. Everyone interprets that differently. It creates no standard.
Banning AI doesn't work. People use it anyway; they just don't tell you. Now you have no visibility into how it's being used.
And making standards optional defeats the purpose. Standards that people can opt out of under deadline pressure aren't standards. They're suggestions.
What works: clear answers to what, how, and who. Simple enough to follow when deadlines press.
Drafting stays fast, but now shipping keeps pace with it.
The team stops debating whether AI is appropriate for a given task. They know where it fits and where it doesn't. Quality becomes predictable because the review steps are the same every time. Voice stays consistent because one person owns that standard.
New people onboard faster. The standards are documented. They use what already works instead of inventing their own approach.
Leadership stops worrying about something embarrassing shipping. The system catches it before it does.
AI becomes what it should be: a tool that accelerates work without creating new uncertainty at every step.
A Diagnostic Sprint identifies where AI is creating friction vs. where it's adding value. Often, the issue isn't the tool. It's the absence of execution standards around when to use it, how to review it, and who owns quality.
The output isn't a policy framework. It's clarity on where AI fits in your workflow and what needs to change for it to accelerate shipping, not just drafting.