My client organization runs high-impact sales training programs for financial advisors.
The platform had grown organically over several years, supporting:
- A large enterprise client
- A new, high-stakes enterprise opportunity
- A smaller B2C business
The existing LMS stack, built on WordPress and multiple plugins, was operational but fragile, difficult to extend, and increasingly risky as the business shifted toward larger B2B contracts.
My role
I was hired as a senior product manager. My role was to define the tech and product strategy while being a hands-on operator; the client has no technical employees.
Why it mattered now
A new B2B client represented a near-term enterprise delivery with fixed deadlines and high visibility. Failure was not an option: it would risk credibility, renewal potential, and future enterprise growth.
At the same time, another B2B client was already experiencing reporting gaps, operational friction, and scalability limits that directly affected retention and leadership trust.
Constraints
- We had fixed deadlines to start testing with real users and go live with a better platform
- Budget constraints with strong sensitivity to per-seat pricing
- No in-house engineering team
- Legacy data and content that could not be lost
- Leaders needed real-time visibility, not monthly spreadsheets
- Zero tolerance for disruption to existing clients during transition
Planning approach and deliverables
This work was not a linear “build an LMS” project. It was a high-ambiguity product discovery and decision program across business, technology, operations, and growth.
My planning approach followed one core principle: Reduce irreversible risk before committing to execution.
To do that, I deliberately sequenced frameworks and deliverables that progressively turned ambiguity into decision-ready inputs.
Business-first product framing
Framework used: Business outcomes → product capabilities → tools
Before touching tools or vendors, I anchored the work in business outcomes, not features.
Deliverables
– Business context brief
– Client segmentation
– Revenue and retention drivers by client type
– Phase-based prioritization (Phase 1 vs long-term)
Most LMS projects fail because they start with platform demos. I inverted the process to ensure:
- Product decisions mapped to retention and contract renewal
- Scope stayed aligned with deadlines and budget
- “Nice-to-have” features did not derail Phase 1 delivery
This created a shared understanding that Phase 1 success was credibility, reliability, and leadership trust.
Untangling the problem
Before proposing solutions, the biggest challenge was not choosing a platform. It was understanding what problem we were actually solving.
The system had evolved over the years, across multiple clients, tools, and priorities. Knowledge was fragmented across people, documents, and workflows. To move forward safely, I first had to untangle four overlapping layers.
At the surface, “the LMS” looked like a single product. In reality, it was supporting very different jobs for separate clients, use cases, and expectations.
Problem framing and hypotheses
The real problem was: How do we deliver a premium, enterprise-grade learning experience that scales across multiple clients, without rebuilding the company around software?
Framework used: “What problem are we solving now vs later?”
I explicitly separated:
- Problems we must solve to go live
- Problems that can wait without risking the contract
- Problems that should not be solved in this system at all
Deliverables
– Problem statements per client
– Explicit out-of-scope decisions
– Phase-based capability map
Examples
- Sequential cohort delivery: in scope
- Advanced AI coaching feedback: out of scope
- Unified B2B + B2C platform: explicitly deferred
This prevented scope creep and made trade-offs visible. It also reduced decision fatigue for stakeholders, who could react to clear options instead of abstract ideas.
Initial hypotheses
- A “good enough” platform that supports multi-tenant B2B + reporting would outperform a perfect custom build under these constraints.
- Most LMS vendors require multiple conversations to understand pricing and implementation timelines.
- Reporting and admin UX, not content quality, were the real churn drivers.
- The current WordPress stack was beyond its sustainable complexity threshold.
Discovery and inputs
Framework used: Systems thinking + dependency mapping
A major part of the work was untangling a legacy system that had grown organically over years.
Signals used
- Deep 1:1 interviews with the founder (business strategy, contracts, risk tolerance), the program operations lead (day-to-day pain, manual work), and the platform architect (technical debt, integration fragility)
- Live platform walkthroughs
- Historical reporting artifacts (manual CSVs, spreadsheets)
- Vendor demos and pricing conversations
- User journey mapping for learners, leaders, and admins
Mapping where work was happening manually vs systemically
Many issues were initially described as “tool limitations.” Through interviews and walkthroughs, I identified that a significant portion of the system’s reliability came from human compensation, including:
- Manual CSV exports and spreadsheet manipulation
- Manual user cleanup and access management
- Manual reconciliation between CRM, LMS, and reporting tools
- Operational knowledge living in people’s heads, not documentation
Some “features” already existed, just not in software.
Untangling this helped distinguish what truly required platform capabilities, what was an operational or process gap, and what could be simplified rather than rebuilt.
What I observed
- The platform worked because people compensated for it, not because it was well designed.
- Reporting delays directly impacted leadership confidence.
- Adding a new enterprise client would mean more “patchwork,” not scale.
- Admin workflows were a hidden bottleneck.
- Vendor pricing opacity was itself a strategic risk.
Distinguishing historical decisions from current constraints
Many architectural choices were rational at the time they were made.
Part of the untangling work was explicitly separating: “Why this made sense in 2018” from “What it costs us in 2025”
This avoided blame-based discussions and enabled forward-looking decisions.
It also helped leadership align on a shared truth: The current system is not broken, but it is past its safe operating range.
Strategy and decision-making
Options I considered
- Patch and stabilize the existing stack
- Custom-build a new LMS
- Enterprise SaaS LMS
- Modern startup LMS
- Open-source enterprise LMS with a partner
Key trade-offs
- Speed vs control
- Cost predictability vs flexibility
- Implementation risk vs long-term ownership
- Admin experience vs feature depth
Decision logic
- Custom build carried long-term maintenance risk and single-point-of-failure dynamics.
- Enterprise SaaS came with steep cost curves and long implementation timelines.
- Patching WordPress meant compounding technical debt.
The strategy became: select an LMS that supports all three use cases, preserves optionality for the long-term architecture, and covers the functionality that matters most to us.
Framework used: Option comparison across four dimensions
Instead of a binary “build or buy,” I evaluated platforms against:
– Multi-tenant complexity
– Reporting depth and credibility
– Admin operational load
– Total cost of ownership over time
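As a sketch, the four-dimension comparison can be expressed as a small weighted scoring model. The weights, option names, and 1–5 scores below are hypothetical placeholders for illustration, not the actual evaluation data.

```python
# Illustrative weighted option comparison across the four dimensions.
# All weights and scores are hypothetical, not the real evaluation inputs.

DIMENSIONS = {  # dimension -> weight (higher = more important to this client)
    "multi_tenant": 0.30,
    "reporting": 0.30,
    "admin_load": 0.25,
    "tco": 0.15,
}

# Hypothetical 1-5 scores per platform category (5 = best)
OPTIONS = {
    "patch_wordpress": {"multi_tenant": 1, "reporting": 2, "admin_load": 2, "tco": 4},
    "enterprise_saas": {"multi_tenant": 5, "reporting": 5, "admin_load": 4, "tco": 2},
    "startup_lms":     {"multi_tenant": 4, "reporting": 4, "admin_load": 4, "tco": 4},
}

def weighted_score(scores: dict[str, int]) -> float:
    """Sum of (dimension weight x score) across all four dimensions."""
    return sum(DIMENSIONS[d] * s for d, s in scores.items())

# Rank options from strongest to weakest overall fit
for name, scores in sorted(OPTIONS.items(), key=lambda kv: -weighted_score(kv[1])):
    print(f"{name:<16} {weighted_score(scores):.2f}")
```

The point of the exercise is less the final number than forcing the trade-offs (e.g. a cheap option scoring poorly on multi-tenancy) into one visible, comparable view.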
Deliverables
– LMS landscape analysis
– Vendor category segmentation
– Feature matrices (Yes / No / Maybe)
– Pricing model analysis (linear vs banded)
This avoided two common traps: overbuilding too early and buying an enterprise tool that locks in unsustainable costs.
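The linear-vs-banded pricing analysis boils down to finding where the two cost curves cross as seat counts grow. The sketch below illustrates the mechanics; the prices and bands are hypothetical placeholders, not real vendor quotes.

```python
# Illustrative sketch of the linear vs banded pricing comparison.
# All dollar figures and band boundaries are hypothetical placeholders.

def linear_cost(seats: int, per_seat: float) -> float:
    """Linear model: every seat costs the same, so cost scales 1:1 with growth."""
    return seats * per_seat

def banded_cost(seats: int, bands: list[tuple[int, float]]) -> float:
    """Banded model: a flat monthly fee for whichever band the seat count falls into."""
    for ceiling, fee in bands:
        if seats <= ceiling:
            return fee
    raise ValueError("seat count exceeds the largest band")

# Hypothetical vendor terms
PER_SEAT = 15.0  # $/seat/month
BANDS = [(250, 2500.0), (1000, 7000.0), (5000, 20000.0)]  # (max seats, $/month)

# Compare the two models across plausible growth scenarios
for seats in (100, 400, 2000):
    lin, band = linear_cost(seats, PER_SEAT), banded_cost(seats, BANDS)
    cheaper = "linear" if lin < band else "banded"
    print(f"{seats:>5} seats: linear ${lin:>9,.0f}  banded ${band:>9,.0f}  -> {cheaper}")
```

With numbers like these, linear pricing wins at small seat counts but banded pricing wins at scale, which is exactly the crossover a per-seat-sensitive budget needs to see before signing.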
Execution and systems
What shipped first
- A unified Project Strategy document as a single source of truth
- A detailed platform audit documenting current risks and costs
- A user journey map covering four distinct use cases
- A structured LMS evaluation matrix covering key aspects for the client
How work was organized
- Weekly executive updates to maintain trust without meetings
- Clear phase boundaries (Discovery → Decision → Implementation)
- Parallel tracks:
- LMS landscape research
- Enterprise use case definition
- Timeline and dependency mapping
Early outcomes
- Shared language across leadership around “what good looks like”
- Reduced uncertainty around cost drivers and risk areas
- Clear elimination of multiple LMS options before implementation
- A realistic timeline aligned with business deadlines
Continuous alignment and trust-building
To avoid progress being invisible, I set up:
– Weekly written digests
– Living documents
– Shared artifacts instead of slide decks
Deliverables
– Centralized strategy and research docs
– Decision logs
This reduced meetings, increased confidence, and kept stakeholders aligned without micromanagement.
Temporary conclusions
All of this work was completed in roughly six weeks. I’m excited to see how Phase 1 concludes.
What I would do again
– Start with constraints, not solutions
– Treat vendor conversations as discovery, not evaluation
– Make decision logic explicit and visible
– Separate “Phase 1 success” from “perfect architecture”
What this case reinforced
- Product strategy at senior levels is about risk management and sequencing.
- Good systems reduce the need for heroics.
- The best decisions preserve future optionality.
