The attribution report says the webinar influenced twelve closed deals last quarter.
The sales team says the webinar had nothing to do with it. Those deals were already in motion. The relationships were built over two years. The webinar was background noise.
Marketing points to the data. Sales points to their conversations.
Neither trusts the other's version. And neither is wrong.
This tension isn't a people problem. It's a structural one, and it's worth understanding why before trying to fix it.
A B2B deal might take nine months. Multiple stakeholders are involved on the buying side, each interacting with different content at different times. A champion reads three blog posts. A technical evaluator attends a webinar. The economic buyer gets a referral from someone in their network who mentioned your name at a dinner six months ago.
Which of these touchpoints "caused" the deal?
The honest answer is that causation in a complex, nonlinear buying journey can't be established cleanly. Most attribution models don't attempt causation. They assign credit to a touchpoint based on a rule: first touch, last touch, or some weighted distribution across the journey.
These models produce numbers. The numbers look precise. But precision and accuracy aren't the same thing.
First-touch attribution credits the blog post someone read eighteen months ago. Last-touch credits the demo request form they filled out yesterday. Multi-touch distributes credit across every interaction. None of these tell you what actually moved the deal forward.
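The three rules can be made concrete with a sketch. The journey below is hypothetical, and the functions are minimal illustrations of each rule, not any vendor's implementation:

```python
# One buying journey, three rule-based attribution models.
# Touchpoints are listed oldest first; names are made up.
journey = ["blog post", "webinar", "case study", "demo request"]

def first_touch(touchpoints):
    # 100% of the credit goes to the earliest interaction.
    return {t: (1.0 if i == 0 else 0.0) for i, t in enumerate(touchpoints)}

def last_touch(touchpoints):
    # 100% of the credit goes to the most recent interaction.
    last = len(touchpoints) - 1
    return {t: (1.0 if i == last else 0.0) for i, t in enumerate(touchpoints)}

def linear(touchpoints):
    # Equal credit to every interaction along the journey.
    share = 1.0 / len(touchpoints)
    return {t: share for t in touchpoints}

for model in (first_touch, last_touch, linear):
    print(model.__name__, model(journey))
```

Same journey, three different answers, each reported to two decimal places. The precision is a property of the rule, not of the buying process.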
Sales knows this. They live in the deal. They know which conversation shifted the prospect's thinking, which objection almost killed it, which reference customer gave them confidence to sign. None of that shows up in the attribution model.
So they don't trust the dashboard. And when the data doesn't reflect reality as they experience it, they stop using it.
A dashboard nobody trusts doesn't just fail to inform decisions. It actively creates problems.
Marketing optimizes for what the model rewards. If last-touch attribution gives credit to demo requests, marketing invests in driving demo requests. Whether those requests come from prospects who are genuinely ready or people who clicked the wrong button matters less than the number going up.
Sales stops engaging with marketing data. They build their own picture of what's working based on their pipeline conversations. Marketing builds their picture based on the dashboard. The two pictures diverge. The recurring argument about lead quality becomes structural rather than occasional.
Leadership gets a false sense of clarity. The board deck shows pipeline influence across fourteen campaigns. Every initiative looks productive. The actual signal is buried under a reporting structure designed to show activity rather than illuminate decisions.
The cost isn't just analytical. When sales and marketing operate from different pictures of reality, coordination breaks down. Handoffs get harder. Campaign design stops reflecting what sales actually needs in deals. The gap between what marketing produces and what moves pipeline widens quietly over time.
Perfect attribution is the wrong goal. It's not achievable in complex B2B, and pursuing it creates more reporting work than decision clarity.
The more useful question is: what patterns can we learn from across deals, and are those patterns consistent enough to inform decisions?
This shifts the work from proving causation to recognizing correlation. Not "this webinar caused twelve deals" but "companies that attend our webinars tend to close faster, and here's what we think explains that."
That's a different kind of analysis.
Less precise in the way attribution models are precise. More honest about what the data can and can't tell you. And considerably more useful for making decisions.
Reverse attribution is one way to get there. Start with closed deals from the last two quarters. Map what those companies engaged with before they became opportunities. Look for patterns: content types, topics, timing, channels. Not to prove causation, but to identify what appears consistently enough to be worth investing in.
The patterns won't be perfect.
Some deals will have engaged with content you'd expect. Others won't. But across enough deals, something usually emerges. The companies that engaged with your implementation content closed faster. The ones that attended the executive roundtable had larger deal sizes. The ones that downloaded the pricing guide were further along in their evaluation than their form fill suggested.
These patterns don't replace attribution. They replace the argument about whether attribution is accurate.
The most underused source of measurement insight in most B2B companies is the sales team.
Reps know which content actually comes up in deals. They know what prospects reference in conversations, what objections keep appearing, what questions marketing content doesn't answer, what competitive concerns come up late in the process.
This information rarely makes it back to marketing in a structured way. It lives in call notes, in Slack messages, in the memory of individual reps. Marketing doesn't know to ask for it. Sales doesn't know marketing needs it.
A monthly conversation between marketing and sales, structured around closed deals rather than pipeline metrics, surfaces this. Not a status update. A structured review: ten deals that closed last quarter, what content appeared in each, what sales heard prospects say, what was missing.
One hour. Consistent.
The qualitative picture this produces complements the quantitative dashboard rather than competing with it.
Over time, marketing stops designing campaigns based on what the attribution model rewards. They design campaigns based on what actually shows up in deals. The dashboard becomes one input rather than the only input. Sales starts to trust that marketing is paying attention to their reality.
The gap between the two teams' pictures of what's working narrows.
A shared dashboard works when it's built around questions both teams need answered, not around metrics that marketing tracks and presents to sales. The starting point is a conversation, not a tool selection: each team lists what it actually needs to know, then you keep the overlap.
The overlap is usually smaller than expected, and more useful than the full dashboard.
Pipeline created by source. Conversion rate from MQL to opportunity. Deal velocity by segment. Content that appears in closed deals. Four or five numbers that both teams care about and can act on.
Definitions matter more than the numbers themselves. A qualified lead means different things to different people until someone writes down exactly what it means and both teams agree. That agreement is the foundation the shared dashboard sits on. Without it, the same number means different things to the people reading it.
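Writing the definition down can be as literal as a shared predicate both teams review and sign off on. The fields and thresholds below are purely illustrative, not a recommended definition:

```python
from dataclasses import dataclass

@dataclass
class Lead:
    # Field names are illustrative, not tied to any specific CRM.
    title_seniority: str       # e.g. "manager", "director", "vp"
    company_size: int          # employee count
    requested_demo: bool
    engaged_content_count: int

def is_qualified(lead: Lead) -> bool:
    """The agreed definition of 'qualified', written down in one place.

    A lead qualifies only if it fits the target segment AND has
    shown intent. Change this function, and both teams see the change.
    """
    fits_segment = (
        lead.company_size >= 100
        and lead.title_seniority in ("director", "vp")
    )
    shows_intent = lead.requested_demo or lead.engaged_content_count >= 3
    return fits_segment and shows_intent

# A VP at a 250-person company with heavy content engagement qualifies;
# a manager with a demo request does not, because segment fit is required.
print(is_qualified(Lead("vp", 250, False, 4)))
print(is_qualified(Lead("manager", 250, True, 5)))
```

The specific thresholds matter less than the fact that they are explicit, versioned, and agreed to by both teams.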
When the shared dashboard reflects questions both teams need answered, built on definitions both teams agreed to, it gets used.
Not because it's mandatory. Because it's useful.
That's the difference between a dashboard that gets presented in meetings and a dashboard that changes what the team does on Monday morning.
A Diagnostic Sprint identifies where measurement breaks down and what sales and marketing can actually agree to track together. Often, the gap isn't technical. It's definitional: what counts, how it's calculated, and whether the numbers connect to decisions or just to reporting.
The output isn't a new attribution model. It's shared visibility into the patterns that matter and a measurement approach both teams will actually use.