Optimising shopping cart experience

Keeping customer insights flowing with quantitative evaluation

I was asked which of two shopping cart designs was more likely to meet user needs. A live test wasn’t practical, so I needed data to inform a high-risk decision.

The challenge

  • Inform an urgent decision with evidence of customer preferences
  • Evaluate the usability of two shopping cart design options
  • Assess how each design affected usability and the wider customer journey
  • Measure preferences and report statistically significant findings

I proposed a robust and achievable research plan to support a redesign of the shopping cart. When the need for unplanned research arose, I needed a way to get more customer insights while keeping the rest of my schedule intact.

Urgent, unplanned work is not ideal, but when new discoveries raised important questions I needed to adapt my plan. Although qualitative activities could have given more depth, they would have disrupted my scheduled activities, inconvenienced participants and exceeded my budget. Quantitative, unmoderated research was an achievable and effective way to get a ‘quick read’ of categorical preferences.

Within a broader research program that included multiple methods, planning a contingency for quantitative design evaluation delivered the best outcome for the project.

The question, “Which of these two design concepts will we develop?”, could have been researched using several methods, but it was ultimately a question of probability: which design, A or B, was more likely to meet user needs?

The problem to solve

  • Design axioms (rules of thumb) and customer feedback indicated two potential design solutions (A or B)
  • Both designs (A and B) followed reasonable practice, and their weaknesses were not immediately obvious
  • Axioms for simplicity and affordance, when taken to extremes, appeared to conflict
  • Preferences were important, and experience (not price alone) was driving channel value

Approach

  • Participants used a mobile prototype on their own device to perform tasks and answer questions
  • 300 participants were split into two balanced cells with similar characteristics, and each cell rated one design
  • Non-parametric tests found statistically significant differences in how the designs were rated (see the sketch after this list)
  • Results were shared quickly, followed by a report and a solutions workshop
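
As an illustration only: the specific non-parametric test isn’t named above, so this sketch assumes a Mann-Whitney U comparison of ordinal ratings from two independent cells, with invented ratings standing in for the study data.

```python
# Minimal sketch: comparing ordinal ratings from two independent cells
# with a Mann-Whitney U test. The ratings are illustrative, not the
# study data, and the project's actual test may have differed.
from scipy.stats import mannwhitneyu

# Hypothetical 1-5 ratings, one per participant in each cell
ratings_a = [4, 5, 3, 4, 4, 5, 2, 4, 5, 3]   # cell that rated design A
ratings_b = [3, 2, 4, 3, 2, 3, 3, 2, 4, 2]   # cell that rated design B

# Two-sided test: do the two rating distributions differ?
stat, p_value = mannwhitneyu(ratings_a, ratings_b, alternative="two-sided")

print(f"U = {stat:.1f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Difference in ratings is statistically significant at alpha = 0.05")
else:
    print("No statistically significant difference detected")
```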

Key takeaways

Lessons learned from what worked well with this approach:

  • Planning a contingency allowed me to accommodate new discoveries and support innovation
  • Quantitative unmoderated usability testing gave insight on journeys, not just interfaces
  • Non-parametric testing identified statistically significant customer preferences
  • Results gave the project team the confidence and direction they needed to make decisions

Report

The report drew conclusions by analysing closed-question responses (what participants chose), sentiment in free-text responses (why they chose it), and a statistical test of how customers rated each design (A or B).
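
To illustrate those first two steps (the tooling actually used isn’t named above), the sketch below tallies hypothetical closed-question choices and scores hypothetical free-text comments with NLTK’s VADER analyser, one possible choice among many.

```python
# Sketch of the report's analysis steps, using hypothetical responses:
# tally closed-question choices, then score free-text comments for
# sentiment. VADER is one possible analyser; the project's tool isn't
# named in the case study.
from collections import Counter

import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-off lexicon download

# Hypothetical survey data: (closed-question choice, free-text reason)
responses = [
    ("Design A", "Checkout felt quick and obvious"),
    ("Design A", "Liked it, though the coupon field was hard to find"),
    ("Design B", "Too many steps before I could pay"),
    ("Design B", "Cluttered and confusing"),
]

# What participants chose (closed questions)
choices = Counter(choice for choice, _ in responses)
print("Choices:", dict(choices))

# Why they chose it (free-text sentiment)
sia = SentimentIntensityAnalyzer()
for choice, comment in responses:
    score = sia.polarity_scores(comment)["compound"]  # -1 (negative) to +1 (positive)
    print(f"{choice}: {score:+.2f}  {comment}")
```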

Non-parametric tests were used to report on the statistical significance of preferences. This analysis was the first step of an ongoing experimentation program for optimisation, and the methods evolved after live experimentation capability was deployed.

It’s important to note that although the analysis revealed a statistically significant result by comparing two variations, and did so at the early stages of design before development, the experiment wasn’t concluded until an uplift was observed in production.
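
How that production uplift was measured isn’t detailed here; one common way to check it is a two-proportion z-test on conversion counts from a live split, sketched below with invented numbers.

```python
# Sketch of confirming an uplift in production: a two-proportion z-test
# on checkout conversions from a live A/B split. The counts are invented;
# the case study doesn't detail how the production uplift was measured.
from statsmodels.stats.proportion import proportions_ztest

conversions = [530, 468]     # checkouts completed: new design, old design
sessions = [10_000, 10_000]  # sessions exposed to each variant

# One-sided test: does the new design convert better than the old one?
z_stat, p_value = proportions_ztest(conversions, sessions, alternative="larger")

uplift = conversions[0] / sessions[0] - conversions[1] / sessions[1]
print(f"Observed uplift: {uplift:.2%}, z = {z_stat:.2f}, p = {p_value:.4f}")
```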