Accelerating the Path to Purchase via "Recommended" Logic
Simplifying complex carrier pricing for a better plan fit. By adding explicit carrier-size guidance, we cut decision time by 11 seconds and drove a 16% lift in conversion through the Self-Service Signup (SSU) flow.
Role
Lead Product Designer
Impact
+16% Revenue Lift, 11-Second Reduction in Time-to-Decision
Project Overview & Impact
To optimize the plan selection process for the "Carrier" segment, I implemented Guided Selling principles. By introducing business-size identifiers into the pricing grid, we eliminated the friction of manual feature comparison. This led to a 16% lift in estimated revenue and significantly accelerated the user’s journey from the pricing page to the checkout.


The Business Problem
Analysis of user behavior via user interviews, a feedback survey, Hotjar, and GA4 revealed a significant "bottleneck" on the primary pricing page.
The Symptom: Users were spending an average of 45+ seconds on the grid, frequently scrolling up and down and hovering over multiple plan tooltips without clicking.
The Diagnosis: Choice Overload. Users (Trucking Owner-Operators vs. Fleet Owners) couldn't immediately identify which plan was built for their specific scale. The cognitive effort required to compare 20+ line items was causing "Analysis Paralysis."
Who were we designing for?

How I diagnosed the problem
I started with users, not assumptions. Three research methods, run in deliberate sequence, each adding evidence the previous one couldn't provide.
Method 1: Churn interviews
3–5 churned carriers · semi-structured interviews
Carriers hadn't chosen the wrong product; they'd chosen the wrong plan. Nothing on the page had helped them choose correctly.
Method 2: In-product survey
~1000 respondents · DAT platform
The confusion wasn't an outlier. Most respondents found plan selection difficult and defaulted to "Most Popular", not the plan that matched their operation.
Method 3: Hotjar
Session recordings · scroll depth · time-on-page
Carriers read the full grid with clear intent, but hesitated at one specific row. Average time-on-page before selection: 45 seconds.

Synthesis:
Interviews told us why.
The survey confirmed how widespread it was.
Hotjar showed us exactly where.
One row. One fix.
Customer Journey: Emotion Curve & Key Stages
Mapped the emotional experience across 6 stages, before and after the change. Stage 3 is where everything breaks down, and where the design change landed.

Stage 3 (hits the "Recommended" row) was the critical friction point: peak confusion for all four carrier types, 45 seconds of average hesitation, and the highest exit risk. The design change targeted this exact moment.
Stage 2
Scans the grid · Partial haulers and new carriers feel the product isn't for them
Stage 3
Hits Recommended row · All 4 personas hit peak confusion here · 45 sec avg.
Stage 6
Early product use · Mismatch surfaces · churn risk highest here
I considered three solutions before recommending one
With the problem clearly diagnosed, I mapped out three possible interventions before making a recommendation:

Ruled out
Full pricing page redesign
High engineering cost, long delivery time, and would change too many variables at once, making it impossible to isolate what actually drove results.
Ruled out
Quiz / plan selector tool
Adds a step before users can see plans. Risks increasing friction rather than reducing it, and introduces a new UX pattern that needs its own testing cycle.
Ruled out
Onboarding overlay
Interrupts the journey before users see plans. Carriers visiting the pricing page already have intent; stopping them to ask questions felt counterproductive.
Recommended and approved
Carrier-size callouts on the existing "Recommended for" row
Surfaces the right information at the exact moment of confusion, with zero additional steps in the journey. Small engineering footprint, fully reversible, and isolates a single variable — making the A/B test clean and conclusive.
Constraints
Four hard constraints shaped the solution. In hindsight, they didn't limit the design; they forced the precision that made the test conclusive.
⏱ Limited engineering time
Engineering was mid-sprint on a separate initiative. We had a maximum of 2 days of dev time, not enough for a full redesign or a new interactive component like a quiz or overlay.
Impact: Ruled out the full pricing page redesign and the plan selector quiz entirely. Pointed us toward the smallest possible intervention.
🔒 Pricing structure was fixed
We couldn't change the 5 plans, their names, price points, or feature sets within each tier. The solution had to work entirely within the existing product architecture.
Impact: Eliminated any solution involving restructuring the grid. The fix had to be purely additive, layered on top of what existed.
↩ Test had to be fully reversible
Any change to the pricing page needed to be rollback-able within hours if results were negative. Structural changes affecting layout, load time, or SEO were off the table.
Impact: Reinforced the callout approach; a text change to a single row is the lowest-risk, most reversible intervention possible.
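Mechanically, that reversibility can be sketched as copy gated behind a single kill switch. The flag name, plan names, and callout copy below are illustrative placeholders, not the production values:

```python
# Minimal sketch of a flag-gated callout row, assuming a simple
# remote-config dict. Flipping the flag off restores the control
# experience immediately, with no layout, load-time, or SEO impact.
CALLOUT_FLAG = "pricing_recommended_callouts"  # hypothetical flag name

CALLOUTS = {
    "Standard": "Recommended for 1-5 trucks",   # illustrative copy
    "Pro": "Recommended for growing fleets",
}

def recommended_row_text(plan: str, flags: dict) -> str:
    """Return callout copy for a plan, or '' when the flag is off."""
    if not flags.get(CALLOUT_FLAG, False):
        return ""  # rollback: row renders empty, grid otherwise unchanged
    return CALLOUTS.get(plan, "")
```

Because the control path simply renders an empty row, rolling back is a config change rather than a deploy.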
5 Had to keep all 5 plans as-is
No consolidation, removal, or renaming of plans was in scope. Business and pricing decisions sat outside the design team's remit for this sprint.
Impact: The solution needed to work across all five columns simultaneously, one callout per plan, consistent in style and length.
2 days engineering time · Pricing structure fixed · All 5 plans unchanged · Change had to be fully reversible
"If a minimal, reversible, 2-day change could move revenue by 16%, the problem was never structural. It was always informational."
Trade-offs
Three solutions ruled out, each for a specific reason, not just because of time.

The reframe: constraints as validation, not limitation
It would be easy to frame these constraints as things that limited what we could do. But looking back, they did the opposite: they forced precision. If a minimal, reversible, 2-day change could move revenue by 16%, it proved the problem was informational, not structural. We didn't need to rebuild the pricing page. We needed to surface the right information at the right moment. The constraints kept us focused on that insight rather than letting us default to a bigger solution that would have been harder to measure and slower to ship.
"The smallest intervention that addresses the root cause is always better than the largest intervention that addresses a symptom. The constraints didn't limit the solution, they validated it."
I recommended it, aligned stakeholders, and defined success upfront
I presented the three options to the PM with a clear recommendation: the callout approach addressed the root cause, fit within engineering constraints, and produced a clean test signal.

The Design Hypothesis (Choice Architecture)
Hypothesis: If we simplify the mental model by adding explicit "Carrier-Type" callouts (e.g., "1-5 Trucks" vs. "Growing Fleets"), then we will reduce the cognitive load, leading to faster confidence in plan selection and higher conversion rates.
The Strategy: Guided Selling
Direct Labeling: I added a "Recommended For" row at the very top of the grid, the primary focal point.
Segment Alignment: Instead of technical jargon, I paired each plan name (e.g., "Standard," "Pro," "Select") with a fleet-size identifier written in business-scale language.
Visual Anchoring: Highlighted the "best fit" plan for the majority of traffic using a subtle visual lift (shadow and "Most Popular" badge) to provide social proof.
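The segment alignment above reduces to a simple mapping from fleet size to plan. The plan names come from the grid; the truck-count boundaries below are assumed for the sketch, since the actual thresholds aren't part of this writeup:

```python
# Illustrative fleet-size-to-plan mapping behind the "Recommended For"
# row. Tier boundaries are hypothetical; only the plan names are real.
def recommended_plan(truck_count: int) -> str:
    if truck_count <= 5:
        return "Standard"  # owner-operators, 1-5 trucks
    if truck_count <= 20:
        return "Pro"       # growing fleets
    return "Select"        # larger fleet operations
```

The point of the pattern is that the user never runs this logic mentally; the grid states it for them at the moment of choice.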

Data Validation & ROI
The experiment was a statistically significant winner, proving that "less thinking" leads to "more buying."
| Metric | Result |
|---|---|
| Time-on-Page (Decision Speed) | -11 Seconds (Faster) |
| Conversion (Revenue Lift) | +16.03% Increase |
| User Engagement | +9% Increase in "Start Signup" clicks |
| Confidence Score | 98% Statistical Significance |
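For context on the 98% confidence figure, a standard two-proportion z-test shows how a +16% relative lift clears that bar. The visitor counts and the 5% baseline conversion rate below are hypothetical, used only to make the arithmetic concrete:

```python
import math

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Pooled two-proportion z-test; returns z and a two-sided p-value."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the normal CDF, via the error function.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical split: 10,000 visitors per arm, 5.0% baseline conversion,
# +16% relative lift in the variant (5.8% conversion).
z, p = two_proportion_z(500, 10_000, 580, 10_000)  # z ≈ 2.5, p ≈ 0.012
```

With these assumed volumes the lift is significant at better than the 98% level; smaller samples would need a larger lift (or more time) to reach the same confidence.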
Strategic Insights: The "Cognitive Load" Factor
Three principles this test validated, applicable beyond this project.
Clarity beats complexity. In a technical product, users don't want to be experts — they want to be seen. Explicit guidance outperforms comprehensive feature lists every time.
Mobile-first decision points matter. On mobile, the "Recommended for" row acted as a navigation shortcut, preventing excessive scrolling and anchoring users to a single relevant plan.
Guided selling scales. This framework is now the blueprint for the Broker and Combo pricing grid redesigns, proving that the pattern, not just the solution, was the finding.

Tools & Methodology
Design: Figma (Systematic Component Design)
Analytics: Hotjar (Session Recordings) & GA4 (Event Tracking)
Experimentation: VWO (A/B Testing)
Psychology Principle: Cognitive Load Reduction (minimizing the mental effort required to process information)