While the world waits for AGI, let's optimize B2B funnel metrics in 2026
How we built strategies around database limitations and other constraints, and what should replace them over the next 24 months
Something always bothered me about the marketing funnel: the way we built entire strategies around lead volume, velocity, and MQL-to-SQL conversion (or MQA-to-SQA, for my more sensible friends) as if these stages reflected how people buy, when really they just reflected what our systems could track.
Here’s what actually happened:
Relational databases in the early 2000s couldn’t efficiently store unstructured relationship data, so CRM platforms adopted stage gates from traditional sales methodologies as the core data model. We invented discrete stages - MQL, SAL, SQL, Opportunity - because that’s what our systems could process and what aligned with how databases needed to structure information.
Sales teams couldn’t engage with thousands of inbound leads simultaneously, so we created lead scoring as a prioritization mechanism: assign points (50 for a whitepaper download, 100 for a pricing page visit), later graduating to more sophisticated models, even though these scores often didn’t correlate well with actual purchase intent.
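That legacy approach can be sketched in a few lines. This is an illustrative reconstruction, not any real platform's implementation; the event names and point values beyond the two mentioned above are assumptions.

```python
# A minimal sketch of legacy point-based lead scoring.
# Event names and point values are illustrative, not from any real CRM.
LEGACY_POINTS = {
    "whitepaper_download": 50,
    "pricing_page_visit": 100,
    "webinar_attendance": 25,
}

def legacy_lead_score(events):
    """Sum static points per tracked event.

    Note what this actually measures: trackable activity volume,
    a proxy for sales capacity triage - not purchase intent.
    """
    return sum(LEGACY_POINTS.get(event, 0) for event in events)
```

The core flaw is visible in the data structure itself: every account that performs the same events gets the same score, regardless of who performed them or why.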
Marketing automation platforms had no way to generate personalized content at scale, so we created segmentation frameworks and called them “buyer personas.”
These were reasonable engineering solutions to real constraints. The problem is we forgot they were engineering solutions.
We optimized everything around these constraints. Lead scoring became increasingly sophisticated - ensemble propensity models with demographic, behavioral, engagement, firmographic, and third-party signals (guilty) - but it remained just a prioritization mechanism for scarce sales capacity. Nurture tracks became elaborate multi-touch sequences, but they were still batch processing. A VP of Engineering at a fintech company and a CFO at a healthcare provider would both get the same lead score and enter the same nurture track, receiving identical emails over six weeks even though one cares about technical architecture while the other cares about ROI.
We knew this was suboptimal, but our systems couldn’t handle more. Platforms slowly evolved - we added personas and channel mix, built dashboards to show funnel by those dimensions - but these were incremental and siloed improvements within the same constraint-based framework.
With what I’m seeing at Twilio and elsewhere, I’m hopeful we’ll replace this narrative soon.
AI systems can now maintain complete contextual memory for thousands of accounts simultaneously - every interaction, signal, and conversation across years. They engage at whatever velocity each account needs and generate genuinely personalized narratives. The technical limitations that necessitated the funnel are disappearing.
What I’m most excited about is the shift to continuous relationship orchestration. Instead of “this account hit grade A, route to SDR,” we’re seeing systems that understand complete account context and determine optimal next actions dynamically:
One account needs a technical architecture discussion because their engineering team is evaluating alternatives and their previous vendor implementation failed due to integration complexity.
Another needs CFO-focused ROI content because they just entered budget planning season and historically make purchasing decisions in Q4.
A third needs implementation case studies from their specific vertical because their new VP of Engineering is particularly risk-averse after a bad competitor experience.
A fourth account that’s been quiet for 6 months suddenly shows API documentation traffic from multiple IP addresses - their dev team is actively evaluating, even though no one filled out a form.
The system knows all of this not because someone manually updated Salesforce, but because it’s maintaining continuous context across every touchpoint.
What should we start measuring soon?
Sales and marketing leaders still need to run businesses, forecast revenue, measure productivity, and justify budgets, don’t they? The question isn’t whether we measure, it’s what we measure.
Here are the metrics and ideas that I think should gradually replace traditional funnel metrics:
Stakeholder coverage: Are we connected to the right people for this account’s decision process? For enterprise deals, you need procurement, IT, security, finance, and business stakeholders. Coverage metrics should show gaps in relationship mapping. Having a heatmap that shows % of accounts with 1/2/3+ contacts is pretty ‘90s. So is adding random contacts to Salesforce from your whitepaper downloads list.
Account engagement depth: Not “did they download something” but “how complete is our understanding of their buying context?” This should get measured by some sort of context completeness scores - do we know their technical requirements, budget constraints, decision timeline, key stakeholders, and past evaluation patterns?
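One hypothetical way to operationalize a context completeness score: the fraction of key buying-context fields we actually know for an account. The field list below comes from the dimensions named above; the data shape is an assumption.

```python
# Hypothetical context completeness score: the fraction of buying-context
# dimensions we actually know for an account. Field names are assumptions
# drawn from the dimensions discussed above.
REQUIRED_CONTEXT = [
    "technical_requirements",
    "budget_constraints",
    "decision_timeline",
    "key_stakeholders",
    "past_evaluation_patterns",
]

def context_completeness(account: dict) -> float:
    """Return 0..1: share of required context fields with a known value."""
    known = sum(1 for field in REQUIRED_CONTEXT if account.get(field))
    return known / len(REQUIRED_CONTEXT)
```

In practice the field list would be richer and weighted by deal type, but even a flat ratio like this exposes accounts where outreach is flying blind.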
Intent signal strength: Most larger B2B/SaaS companies already have this in some form of lead scoring model, but it would be interesting to fold more third-party signals into a real-time composite score based on actual buying behavior: industry changes, changes to the company's org structure, technical documentation access, multi-stakeholder engagement, pricing page visits, API evaluation activity, competitive research patterns. The sort of signals that actually correlate with purchase intent.
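A composite score like that might look roughly like the sketch below. The weights are illustrative assumptions; in a real system they would be fit to historical win/loss data rather than hand-picked.

```python
# Hypothetical real-time composite intent score: a weighted blend of
# behavioral and third-party signals, normalized to 0..1.
# Weights are illustrative assumptions - a real system would fit them
# to historical win/loss outcomes.
SIGNAL_WEIGHTS = {
    "api_evaluation_activity": 0.25,
    "multi_stakeholder_engagement": 0.20,
    "pricing_page_visits": 0.15,
    "technical_doc_access": 0.15,
    "competitive_research": 0.10,
    "org_structure_change": 0.10,
    "industry_shift": 0.05,
}

def intent_score(signals: dict) -> float:
    """signals maps signal name -> strength in [0, 1]; missing signals count as 0."""
    return sum(
        weight * min(max(signals.get(name, 0.0), 0.0), 1.0)
        for name, weight in SIGNAL_WEIGHTS.items()
    )
```

The key design choice versus legacy scoring: inputs are account-level behaviors that plausibly precede a purchase, not form fills, and the score decays to zero when the signals stop rather than accumulating forever.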
Time-to-relevant-engagement: How quickly can we get the right message to the right stakeholder based on their actual needs? This would likely replace “time through funnel stages” but would measure speed to value, not speed through arbitrary stages.
Account readiness scores: Dynamic assessment of buying signals across the entire account, not individual lead scores. Is there budget movement? Are multiple stakeholders engaging? Is there technical evaluation activity? This would tell reps which accounts to prioritize right now.
Pipeline quality indicators: Predictive close probability based on engagement patterns, relationship depth, and historical win rates for similar accounts - not which arbitrary stage they’re in. An account in “discovery” with strong multi-stakeholder engagement and clear budget might have higher close probability than an account in “proposal” with a single champion and unclear timeline.
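The discovery-vs-proposal point can be made concrete with a simple logistic sketch over engagement features instead of stage labels. The features and coefficients here are hypothetical; in practice they would be fit to historical win/loss data.

```python
import math

# Hypothetical close-probability model over engagement features rather
# than stage labels. Coefficients are made-up assumptions; a real model
# would be fit on historical win/loss data for similar accounts.
def close_probability(stakeholders_engaged: int,
                      budget_confirmed: bool,
                      technical_eval_active: bool) -> float:
    """Logistic blend of engagement depth, budget clarity, and eval activity."""
    z = (-3.0
         + 0.6 * stakeholders_engaged
         + 1.5 * float(budget_confirmed)
         + 1.0 * float(technical_eval_active))
    return 1.0 / (1.0 + math.exp(-z))
```

Under these (illustrative) coefficients, a "discovery" account with four engaged stakeholders, confirmed budget, and an active technical evaluation scores far higher than a "proposal" account with a single champion and nothing else - which is exactly the inversion the stage model hides.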
Revenue per account engaged: A sort of ARPU (average revenue per user), but for seller effort: how much pipeline and revenue are reps generating relative to the accounts they're working and the AI-enabled interactions they're making? This would be a proxy for SQL-to-opportunity conversion but would measure actual revenue efficiency.
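As a sketch, the efficiency ratios might be computed like this; the data shape and figures are illustrative.

```python
# Sketch of revenue-efficiency ratios per rep. Data shape (a list of
# closed-won amounts plus activity counts) is an illustrative assumption.
def revenue_efficiency(closed_won_amounts, accounts_worked, ai_interactions):
    """Return revenue normalized by accounts worked and by AI-enabled interactions."""
    revenue = sum(closed_won_amounts)
    return {
        "revenue_per_account": revenue / accounts_worked if accounts_worked else 0.0,
        "revenue_per_interaction": revenue / ai_interactions if ai_interactions else 0.0,
    }
```

Unlike SQL-to-opportunity conversion, both denominators here are things a rep actually spends time on, so the ratio moves when effort is wasted, not just when stage labels change.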
Context accuracy: How well do our systems actually understand each account’s situation? Best measured by sales feedback on AI-generated account insights, accuracy of next-best-action recommendations, and relevance of automated outreach. Over time, this can evolve to more sophisticated A/B testing with reinforcement learning from sales feedback.
Relationship velocity: How quickly are we deepening relationships and moving toward purchase decisions? Measured by stakeholder engagement expansion, technical validation progress, and commercial discussion advancement - not movement between stages.
Revenue influenced by orchestration: What percentage of closed deals had meaningful AI-driven engagement that advanced the relationship? This becomes the ROI metric for the entire system.
So, while traditional marketing would report "generated 500 MQAs this month, 20% converted to SQL/A," the new approach would (very simplistically) report:
“Engaged 200 target accounts, buying committee coverage up 5% MoM, achieved meaningful stakeholder conversations with 45 accounts (22.5% interaction-to-meeting rate), identified 12 accounts showing strong intent signals (technical evaluation + multi-stakeholder engagement), sales is actively working 8 of those with average context completeness of 85% and 72% context accuracy”
For marketing analytics and RevOps folks, the job should become more like managing a trading algorithm than helping teams run campaigns: setting parameters, monitoring performance across thousands of concurrent threads, and optimizing on signals that actually predict revenue, rather than executing pre-defined sequences and counting form fills. And no more lead scoring, please.
What will stay the same is that buyers still need progressive trust-building, evaluation time, and consensus development. Complex B2B purchases won't suddenly become impulse decisions, even though we are seeing the buyer journey collapse into fewer screens and touchpoints. Enterprise software deals will still take months because you're coordinating across procurement, IT, security, finance, and business stakeholders - AI doesn't eliminate that complexity yet. We'll likely need agent-to-agent interactions between vendors and buyers for deal cycles to compress dramatically, and that's probably a 2028 story. For 2026, let's focus on getting the funnel metrics right while everyone else waits for AGI.