Scaling Experimentation Into Revenue

Overview
Over the past several years, I worked within Myer’s centralised Optimisation team, operating as part of a structured experimentation program powered by Dynamic Yield. The program functioned as a growth engine across e-commerce, driving conversion rate uplift, personalisation, and measurable commercial outcomes.
The FY25 Executive Report shows the scale of the program: $42.2m in 12-month projected revenue delivered across 45 tests, exceeding the $38.1m annual target (111% of target achieved).
This wasn’t isolated A/B testing. It was a systematic, commercially accountable optimisation program.
Problem Framing
As the program matured, the challenge shifted.
Early “quick wins” had largely been captured. Future uplift required:
- Smarter prioritisation
- Stronger governance
- More complex experimentation
- Cross-team alignment
- Increased personalisation sophistication
At the same time, velocity was impacted by developer resourcing constraints, requiring tighter focus on high-value tests.
The question became:
How do we sustain meaningful revenue uplift while increasing program maturity?
My Role
Within this multi-year program, I operated at the intersection of:
- UX strategy
- Hypothesis-driven experimentation
- Personalisation design
- Workshop facilitation
- Governance alignment
- Cross-functional collaboration
I contributed not only to test design, but to shaping how experimentation was prioritised, structured, and scaled.
The impact extended beyond individual tests, into building a disciplined experimentation culture that balanced commercial growth with user experience integrity.
Phase 1: Foundations (FY22-23)
In FY22-23, the experimentation program was still maturing.
Against a target of $11.4m, the program delivered $10.9m across seven active months. The focus during this period was not only revenue generation, but building:
- Measurement frameworks
- Ramp governance
- Significance standards
- Code review processes
- Cross-team enablement
The team implemented a defined measurement framework covering primary/secondary metrics, sample size estimation, significance testing, and revenue estimation alignment with Finance.
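As an illustration of the kind of framework described above, sample size estimation and significance testing for a two-proportion conversion test can be sketched as follows. The thresholds and formulas here are standard textbook choices (pooled z-test, 95% confidence, 80% power), not Myer's internal standards:

```python
from math import sqrt
from statistics import NormalDist

def sample_size_per_arm(p_base, rel_lift, alpha=0.05, power=0.8):
    """Approximate visitors needed per arm to detect a relative CVR lift."""
    p_test = p_base * (1 + rel_lift)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    p_bar = (p_base + p_test) / 2
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * sqrt(p_base * (1 - p_base) + p_test * (1 - p_test))) ** 2
    return numerator / (p_test - p_base) ** 2

def two_proportion_p_value(conv_a, n_a, conv_b, n_b):
    """Two-sided p-value for a difference in conversion rates (pooled z-test)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))
```

At a 3% base conversion rate, for instance, detecting a 5% relative lift at 80% power requires on the order of 200,000 visitors per arm, which is why disciplined sample size estimation matters before a test is committed to the roadmap.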
My contribution during this phase included shaping UX-driven hypotheses, contributing to early test ideation, and working within a disciplined ramp framework that prioritised low-risk rollouts.
This phase built the experimentation muscle.
Phase 2: Scaling & Maturity (FY24)
By FY24, the program had moved beyond foundational capability and into scaled commercial impact.
The revised annual revenue target of $48.4m was exceeded, with $49.4m delivered by Q3 and $50.7m by Q4.
Performance improvements included:
- 16.7% increase vs FY23 performance
- 15.8% reduction in cost per test
- ~$27.7m in lost year-1 revenue mitigated by not committing underperforming tests
This is where the program shifted from “running tests” to operating with commercial discipline.
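A minimal sketch of that commit/halt discipline and the "mitigated revenue" accounting reads as below. The function names, significance threshold, and test portfolio are illustrative assumptions, not the program's actual model or results:

```python
def commit_decision(lift, p_value, alpha=0.05):
    """Only commit variants that win at the agreed significance level."""
    if p_value >= alpha:
        return "iterate"   # inconclusive: redesign, rerun, or retire
    return "commit" if lift > 0 else "halt"

def mitigated_revenue(test_results):
    """Year-1 revenue loss avoided by halting significant negative tests."""
    return sum(
        -t["annual_rev_delta"]
        for t in test_results
        if commit_decision(t["lift"], t["p_value"]) == "halt"
    )

# Illustrative portfolio, not actual Myer test results:
results = [
    {"name": "variant A", "lift": 0.012, "p_value": 0.02, "annual_rev_delta": 1_500_000},
    {"name": "variant B", "lift": -0.008, "p_value": 0.01, "annual_rev_delta": -900_000},
    {"name": "variant C", "lift": 0.004, "p_value": 0.30, "annual_rev_delta": 300_000},
]
avoided = mitigated_revenue(results)  # only variant B's projected loss counts
```

The point of the accounting is that a halted loser is worth real money: shipping a significantly negative variant would have destroyed the projected annual revenue it was measured to lose.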
Key advancements included:
- Deep-dive journey-based ideation cycles
- Integration of Contentsquare for faster insight velocity
- GA4 transition and data integration uplift
- Dynamic Yield algorithm sophistication for personalisation
My role increasingly focused on hypothesis framing, UX-led experimentation, behavioural insight integration, and ensuring test concepts aligned with both commercial and user intent metrics.
Phase 3: Commercial Discipline & Governance (FY25)
FY25 marked a governance inflection point.
45 tests were delivered, generating $42.2m in 12-month projected revenue against a $38.1m target, achieving 111% of annual target.
Performance maturity included:
- 22.2% increase vs FY24 program
- 25% reduction in cost per test
- ~$23.7m in lost revenue mitigated by not committing poor-performing tests
The team implemented refined prioritisation combining customer value, commercial impact, and capability scoring.
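One way to express such a scoring model is a simple weighted sum over criterion ratings. The weights, criteria names, and backlog items below are illustrative assumptions; the program's actual scoring model was internal:

```python
# Illustrative weights -- the program's actual scoring model was internal.
WEIGHTS = {"customer_value": 0.4, "commercial_impact": 0.4, "capability": 0.2}

def priority_score(scores):
    """Collapse 1-5 ratings per criterion into a single backlog-ranking score."""
    return sum(WEIGHTS[criterion] * scores[criterion] for criterion in WEIGHTS)

backlog = [
    {"name": "PDP countdown timer",
     "scores": {"customer_value": 3, "commercial_impact": 5, "capability": 4}},
    {"name": "Top Rated PLP tag",
     "scores": {"customer_value": 4, "commercial_impact": 3, "capability": 5}},
]
ranked = sorted(backlog, key=lambda t: priority_score(t["scores"]), reverse=True)
```

Whatever the exact weights, the value of a model like this is that prioritisation debates become arguments about scores and weights rather than about opinions.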
AI experimentation began with Shopping Muse, testing conversational product discovery. Despite strong engagement among high-intent users, overall adoption remained <1%, and scaling was responsibly halted.
This demonstrated evidence-based decision-making over hype-driven rollout.
I contributed through:
- Designing urgency and social proof tests (e.g., PDP countdown timers, Top Rated tags)
- Personalised homepage category row testing
- Recommendation placement experimentation
- Workshop facilitation for ideation alignment
Phase 4: Centre of Excellence Vision (FY26)
By FY26, the ambition expanded beyond optimisation velocity into operating model maturity.
In H1 FY26:
- 14 tests delivered $25.1m in ramped revenue against a $30.2m target
- 57% experiment success rate
- ~$3.7m in lost revenue mitigated
The program formally defined a Centre of Excellence vision focused on:
- Standardised rituals
- Governance checkpoints
- Centralised validation
- Shared documentation
- Upskilling across Digital
This marked a transition from centralised execution to hybrid decentralised enablement.
The optimisation function evolved into:
A commercial validation layer embedded across the digital value chain.
Personalisation & AI Integration
Across FY24–FY26, the program increasingly leveraged:
- Dynamic Yield ML-driven recommendation engines
- Mastercard data integration
- Segment-specific personalisation
- Journey-based algorithm testing
These initiatives focused on:
- Combatting decision fatigue
- Enhancing inspiration
- Purposeful friction to improve CVR and AOV
The strategic direction aligned experimentation with Myer’s North Star of $1.5–2B+ online sales.

Latest Experimentation Themes (FY25–FY26)
As the program matured, experimentation moved beyond incremental UX tweaks and into higher-leverage behavioural and personalisation strategies aligned with Myer’s commercial North Star.
1. Combatting Decision Fatigue
A key pillar of the FY25 strategy focused on reducing cognitive overload across the shopping journey.
This included:
- Surfacing “Top Rated” indicators on PLPs to strengthen social proof
- Introducing urgency messaging such as PDP countdown timers
- Optimising product recommendation placements to guide faster decision-making
These tests demonstrated that behavioural cues, not just layout changes, materially influenced CVR and ATB.
The program increasingly leaned into psychology-informed experimentation.
2. Data-Driven Personalisation
Significant investment was made in maximising the value of Dynamic Yield and Mastercard-powered recommendation engines.
Recent initiatives focused on:
- Personalised homepage category rows
- Segment-specific algorithm optimisation
- Testing “Visually Similar” and affinity-based recommendations
- Search-triggered trending product suggestions
The emphasis shifted from generic optimisation to intent-driven personalisation, delivering the right products at the right time based on behavioural signals.
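The core idea, ranking candidate products by behavioural affinity rather than raw popularity, can be sketched minimally as below. This is an illustrative toy, not Dynamic Yield's actual algorithm; all names and data are invented:

```python
from collections import Counter

def affinity_profile(viewed_categories):
    """Normalised category affinities built from recent browsing signals."""
    counts = Counter(viewed_categories)
    total = sum(counts.values())
    return {cat: n / total for cat, n in counts.items()}

def rank_products(candidates, profile):
    """Order candidates by the user's category affinity, breaking ties on popularity."""
    return sorted(
        candidates,
        key=lambda p: (profile.get(p["category"], 0.0), p["popularity"]),
        reverse=True,
    )

# A shopper whose recent views skew heavily toward shoes:
profile = affinity_profile(["shoes", "shoes", "bags", "shoes"])
candidates = [
    {"id": "bag-01", "category": "bags", "popularity": 0.9},
    {"id": "shoe-07", "category": "shoes", "popularity": 0.5},
]
row = rank_products(candidates, profile)  # shoe-07 outranks the more popular bag
```

Even in this toy form, the shift is visible: the same candidate pool produces a different row for each shopper, which is what "right products at the right time" means operationally.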
3. AI Exploration & Responsible Scaling
The team piloted Shopping Muse, a generative AI conversational shopping assistant.
While engaged users showed higher CVR and AOV, overall adoption remained below 1%, and commercial impact was neutral to slightly negative.
Rather than scaling prematurely, the recommendation was to reassess placement, value proposition, and integration into natural behaviours before further investment.
This reinforced a key principle of the program:
Evidence over trend-driven rollout.
4. Governance & CoE Maturity
In FY26, the experimentation function formally articulated a Centre of Excellence vision.
The focus expanded beyond running tests to:
- Standardising rituals and documentation
- Centralising validation checkpoints
- Reducing test conflict through scheduling
- Upskilling broader digital teams
- Supporting decentralised execution with governance oversight
This marked a transition from centralised execution to scalable experimentation enablement.
Aggregate Impact (FY22–FY26)
Over four years, the experimentation program:
• Delivered well over $100m in projected incremental revenue
• Consistently exceeded revised revenue targets
• Reduced cost per test as maturity increased
• Mitigated tens of millions in negative revenue impact
• Increased experiment complexity and sophistication
• Embedded personalisation as a core commercial lever
• Transitioned toward a Centre of Excellence operating model
This wasn’t isolated optimisation work. It was building and maturing a revenue-driving system.