Why Is Product-Led Growth Getting Harder?
PLG presents a unique challenge—the last 20% is the hardest to crack, and it’s constantly shifting.
In the nearly 10 months I’ve spent driving Twilio’s self-service growth strategy and lifecycle programs, one thing has become clear: the first 80% of a product-led growth (PLG) model is achievable. It’s the last 20%, the leap from “good” to “great,” that’s genuinely arduous. The gap feels vast, and while there’s plenty of advice out there, what stands out most are the things that don’t work and the challenges enterprises face, and those patterns are surprisingly consistent across companies.
Speeding Up the Future: Low-Code, No-Code Explosion
Customer preferences have shifted drastically, and the pace is only accelerating. Over the last 18 months, we’ve witnessed a significant transition toward low-code and no-code platforms—both as consumers and operators. Analysts and leading research firms are forecasting that by 2025, over 70% of new apps will be built using these platforms, compared to just 25% in 2020. While I may not fully align with the precision of the forecast, I can definitely vouch for its general trajectory. This shift has altered how products are both built and consumed.
Startups and Unicorns Aren’t All Rainbows: The Fake Signup Dilemma
However, with this evolution come new dilemmas. LLMs and abstracted product layers over these platforms have made rapid prototyping feasible—even for product managers without technical backgrounds. I wouldn’t be surprised if we see 10 solopreneur unicorns within the next year. Nevertheless, there’s a downside to this democratization. Fake signups have surged as users exploit sign-up flows to access freebies or inflate engagement. In 2023 alone, fraudulent signups increased by 25%, inflating top-of-funnel metrics and creating a misleading sense of growth. Sure, activations might spike, but if those users aren’t advancing through meaningful milestones, it’s not genuine progress. It’s a delicate balance because while there’s ample advice on mitigating fake signups, few companies have truly perfected it at scale.
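One way to separate top-of-funnel noise from genuine progress is to count activations only when a user clears a meaningful milestone. The sketch below is a minimal illustration of that idea; the record shape, field names, and milestone events are all hypothetical, not any particular company's schema.

```python
# Hypothetical signup records: raw top-of-funnel counts vs. users who
# verified their email AND reached at least one meaningful milestone
# (e.g. a first API call or inviting a teammate).
signups = [
    {"user": "a", "verified_email": True,  "milestones": ["first_api_call"]},
    {"user": "b", "verified_email": False, "milestones": []},  # likely fake
    {"user": "c", "verified_email": True,  "milestones": []},  # never activated
    {"user": "d", "verified_email": True,  "milestones": ["first_api_call", "invited_teammate"]},
]

raw = len(signups)
activated = sum(1 for s in signups if s["verified_email"] and s["milestones"])
print(f"raw signups: {raw}, milestone-activated: {activated} ({activated / raw:.0%})")
```

Reporting both numbers side by side makes an inflated top of funnel visible instead of letting raw signups masquerade as growth.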
Copy-Paste Doesn’t Work Here
I’ve encountered numerous cases where companies replicate what worked elsewhere and end up worse off. Without the proper infrastructure to evaluate what’s genuinely effective, most businesses fall into one of two categories: they either adopt off-the-shelf solutions or mimic what industry leaders are doing—often with limited success. This blind replication does more harm than good, and I’ve witnessed this firsthand. Many companies ask, “Does real-time bot detection work? Does double opt-in work? Do honeypots or rate-limiting work?” The question isn’t whether these tactics work—because if they didn’t, you wouldn’t see new startups offering those as a service—but rather how you tailor them to your specific context.
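To make the tailoring point concrete, here is a minimal sketch of two of the tactics mentioned above, a honeypot field and per-IP rate limiting, combined in one check. The field name, limits, and window are illustrative assumptions, not a vendor implementation; the whole point is that these numbers must be tuned to your own traffic.

```python
import time
from collections import defaultdict
from typing import Optional

RATE_LIMIT = 3        # max signup attempts per IP per window (illustrative)
WINDOW_SECONDS = 3600

_attempts = defaultdict(list)  # ip -> timestamps of recent signup attempts

def is_suspicious(form: dict, ip: str, now: Optional[float] = None) -> bool:
    """Flag a signup attempt using a honeypot field plus per-IP rate limiting."""
    now = now if now is not None else time.time()
    # Honeypot: the "website" field is hidden via CSS, so real users leave
    # it empty; bots that auto-fill every field reveal themselves.
    if form.get("website"):
        return True
    # Rate limit: drop attempts outside the window, then count what's left.
    recent = [t for t in _attempts[ip] if now - t < WINDOW_SECONDS]
    _attempts[ip] = recent + [now]
    return len(recent) >= RATE_LIMIT
```

Even this toy version shows why copy-paste fails: the right window, limit, and honeypot placement depend entirely on your signup flow and traffic mix.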
Developers: Rapid Adoption Meets Cautious Skepticism
Meanwhile, developers are evolving just as rapidly. The 2024 Stack Overflow survey revealed that 76% of developers are using or planning to use AI tools, yet 45% still don’t trust these tools for complex tasks. This combination of rapid adoption and underlying skepticism presents a unique challenge for product teams. Every decision—whether around onboarding, feature sets, or metrics—becomes more intricate.
Ranking for Voice Search, Then Visual, Then Gesture, and Then What?
It’s not just how customers consume products that’s changing, but how they search for them. ChatGPT, Perplexity, Claude and their cousins are delivering direct answers, drastically reducing the need for users to visit websites. Voice-enabled search is set to accelerate this trend even further. There’s skepticism about the accuracy and timeliness of these results, but the technology is advancing swiftly. Gartner projects a 25% reduction in search engine traffic by 2026, driven by the rise of chatbots and virtual assistants.
For companies that depend on SEO, this is a substantial challenge. AI-generated content is flooding search engines, pushing down high-quality, carefully optimized material. Additionally, search algorithms are continuously evolving, making it increasingly difficult to stay visible. Before long, ranking in voice-based search will likely emerge as the next major problem everyone scrambles to solve.
Tool Hopping and Churn: Breaking Loyalty, One Task at a Time
This shift is also disrupting the traditional "Jobs to Be Done" framework. Simple tasks like writing a blog post now involve multiple tools: recording voice notes, sending them to AI editors, and then transferring the content back to traditional platforms. With so many tools involved, it’s hard to build loyalty to just one. A recent survey found that 48% of users switch tools because they believe another offers a marginally better feature. For companies with usage-based pricing models, this kind of fragmentation makes predicting churn and driving retention even more difficult. If you’re operating without a subscription model or tiered plans, even defining churn will be challenging in the coming months. Without clear user signals, constructing a coherent growth strategy is often a guessing game.
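Without a cancellation event, usage-based businesses need a proxy definition of churn. One common approach, and it is an assumption rather than a standard, is to treat N days of inactivity as churn. The threshold below is illustrative and should be tuned per product:

```python
from datetime import date

# Proxy churn definition for usage-based models: no cancellation event
# exists, so we call a user "churned" after N days of inactivity.
INACTIVITY_DAYS = 30  # illustrative threshold; tune to your usage patterns

def is_churned(last_activity: date, as_of: date, threshold: int = INACTIVITY_DAYS) -> bool:
    return (as_of - last_activity).days >= threshold

today = date(2024, 6, 30)
print(is_churned(date(2024, 5, 1), today))   # inactive 60 days -> True
print(is_churned(date(2024, 6, 20), today))  # active 10 days ago -> False
```

The hard part isn't the code; it's choosing a threshold that distinguishes a lapsed user from one whose natural usage cadence is simply monthly or quarterly.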
The Data Nightmare: Privacy Regulations Are Killing Your Metrics
Further compounding this complexity are privacy regulations and browser-level measures like ITP 2.x, the Privacy Sandbox, ETP, crackdowns on CNAME cloaking, APP, and more. Tracking users today is nothing like it was a few years ago. A recent report revealed that 67% of companies are struggling to adapt to stricter privacy regulations. This is a nightmare for PLG businesses that rely on data to fine-tune user journeys. Open rates are already a dead metric, and with new interfaces on the horizon, such as AR glasses and zero-touch interfaces for reading emails or collaborating, how companies track user engagement is going to evolve rapidly.
Experimentation Chaos: When More Isn't Always Better
Product experimentation adds another layer of complexity. Market expectations are pushing product teams to ship more, faster. We’re seeing companies refine their foundational architectures to enhance interoperability with LLM-driven products and meet user expectations. As teams conduct more experiments, maintaining consistent instrumentation across them becomes increasingly difficult. Headcount rarely scales in tandem with this complexity, and metrics that were valuable six months ago may already be obsolete. Maintaining long-term data reliability in this environment is formidable.

Worse, as companies scale, KPIs become fragmented. Different teams track different metrics, and soon you’re managing a mix of time-bound and milestone-based KPIs. Without aligning these, or achieving clarity on the North Star, it’s hard to discern what’s genuinely driving growth. For example, you might observe a decline in 7-day activations but a rise in paid upgrades. Without connecting these insights, the data gets lost, and by the time you figure it out, the product may have already evolved, and so will your customers.
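The activation-versus-upgrades example above comes down to joining two team-specific metrics on a shared cohort key. A toy illustration, with entirely made-up numbers and a hypothetical weekly-cohort scheme:

```python
# Two fragmented KPIs tracked by different teams, joined on a shared
# weekly cohort key. All figures are invented for illustration.
activation = {"2024-W01": 0.42, "2024-W02": 0.38, "2024-W03": 0.31}  # 7-day activation rate
upgrades   = {"2024-W01": 0.05, "2024-W02": 0.06, "2024-W03": 0.08}  # paid-upgrade rate

for cohort in sorted(activation):
    print(f"{cohort}: activation {activation[cohort]:.0%}, upgrades {upgrades[cohort]:.0%}")
# Viewed together, falling activation alongside rising upgrades suggests the
# funnel is narrowing toward higher-intent users, a pattern invisible when
# each team only watches its own metric.
```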
In times of rapid change, silos can form across teams. That’s not always detrimental—progress over perfection, right?—but it makes alignment more challenging. Without a clear forcing function within the organization, getting everyone on the same page can feel insurmountable.
These are just a few of the challenges I’ve faced firsthand, either dealing with them or hearing about them from my Growth peers at other companies. In my next post, I’ll delve into some best practices and how companies are navigating these trends. If any of these issues resonate, or if you think I’ve missed something, I’d love to chat—feel free to reach out anytime.
Sources
Will Search Engine Traffic Really Drop 25% by 2026, As Gartner Predicts?
Where developers feel AI coding tools are working—and where they’re missing the mark
Gartner Magic Quadrant for Enterprise Low-Code Application Platforms
From experimentation to value creation: How to navigate the generative AI journey
AI's Impact On The Future Of Consumer Behavior And Expectations
Forget "No Code." Adios "Low Code." Say hello to "Yes Code!"
Because the only thing worse than building internal tools is maintaining them
Will low and no code tools ever truly disrupt tech development?
Community notes from Low-code and no-code at Gartner Application Innovation & Business Solutions Summit