Exploratory research combined with evaluative research @ Shopkick

Research Methods: competitive analysis, usability testing, data analysis

Tools Used: User Interviews, User Testing

FTUE Onboarding


Overview

Shopkick is an app that rewards you for shopping in stores and online. You earn points called "kicks" by walking into certain stores, scanning specific products, or making purchases, and you can redeem them for free gift cards. Shopkick aims to make shopping more enjoyable and rewarding for consumers and to change how people interact with stores and products, both online and offline.

My Role

  • Team-of-one user researcher

  • Set vision and goals cross-functionally

  • Implemented research guidelines and processes to streamline projects that incorporate research

  • Led the initial discovery and strategic research, held cross-functional brainstorming sessions, roadmapped and executed the research plan, and communicated user feedback with next-step recommendations (Qualitative Data)

  • Analyzed data with Data Engineers and Product Managers to determine next steps (Quantitative Data)

The Problem

Shopkick is flatlining on engaged users. 50,000 new users install the app every week, of which about 8,300 activate. A month later, we are left with ~2,000 users. Some would call this a "leaking bucket". A few issues we hypothesized through user research:

  • No clarity on the value proposition. How does this make our users' lives better? What pain are we solving?

  • Not enough kick earning opportunities

  • No variability in inventory

  • Not a personalized experience. Users were seeing products they would never buy.

  • Users would not remember to open the app when an action was required. The product did not create a strong internal trigger.

However, one of the biggest issues new users face is the app's learning curve - they have a hard time understanding how it works. Our on-boarding screens do little to gain the user's trust or to explain clearly what they are signing up for. And once users are in the app, there is little guidance on how to interact with our features; they are lost and unsure what their first step should be.
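To put the leak in perspective, here is a quick back-of-the-envelope calculation of the funnel described above. The rates below are derived from the weekly figures in this section; none of them is a separately measured metric.

```python
# Rough funnel math from the numbers above: 50,000 weekly installs,
# ~8,300 activations, and ~2,000 users still engaged a month later.
installs = 50_000
activated = 8_300
engaged_after_month = 2_000

activation_rate = activated / installs                    # ~16.6% of installs activate
retention_of_activated = engaged_after_month / activated  # ~24% of activated users remain
install_to_engaged = engaged_after_month / installs       # ~4% of installs become engaged users

print(f"Activation rate: {activation_rate:.1%}")
print(f"One-month retention of activated users: {retention_of_activated:.1%}")
print(f"Install-to-engaged after a month: {install_to_engaged:.1%}")
```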


Research Goal

The questions new users have to answer during sign-up are not well organized. They touch on personal topics such as age, gender, phone number, location, and Bluetooth permissions, but the structure and purpose of these questions are not clear. The length of the flow is also causing fatigue, which may be contributing to users dropping off during the sign-up process.

We conducted research to find out what information people consider important or unnecessary to share before signing up for a shopping rewards app, and whether there was a preferred order in which participants wanted to answer these questions. We selected topics for the study by reviewing our current sign-up questions, but our aim was not to evaluate the existing flow screen by screen. During the sessions, we made an effort not to disclose the company we work with in order to minimize bias.

We focused research resources on better understanding what potential users want before signing up. These insights would let us reorganize the on-boarding flow and support other business goals: increasing LTV/CAC (the ratio of customer lifetime value to customer acquisition cost) and improving collaboration between teams to speed up development and minimize errors caused by rapid experimentation.
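For context, LTV/CAC compares what a user is worth over their lifetime to what it costs to acquire them. The sketch below only illustrates the arithmetic; every input is a made-up placeholder, not a Shopkick figure.

```python
# Illustrative LTV/CAC calculation. All inputs are assumed placeholders.
avg_revenue_per_user_per_month = 1.50   # assumed
avg_user_lifetime_months = 8            # assumed
cost_per_acquired_user = 4.00           # assumed

ltv = avg_revenue_per_user_per_month * avg_user_lifetime_months
ltv_to_cac = ltv / cost_per_acquired_user

# Ratios around 3 or higher are commonly treated as healthy,
# though the right target varies by business.
print(f"LTV: ${ltv:.2f}  LTV/CAC: {ltv_to_cac:.2f}")
```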

Link to the beginning state of the on-boarding flow.


Collaboration

Bringing in Data

Partnering with our data team to analyze the drop-off rate on each screen was important: I wanted to identify where users face difficulties in the process so we could understand the reasons and find solutions.

The team shared user completion rates for each screen, and we noticed a drop at the "gender and age" screen. My hypothesis was that new users are unsure why we need this information and how we will use it. We investigated and tested this hypothesis further during the research.
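Below is a minimal sketch of the kind of per-screen funnel analysis the data team ran, assuming we can export the number of users who reach each on-boarding screen. The screen names and counts are hypothetical placeholders shaped so the sharpest step drop lands on the "gender and age" screen, mirroring what we saw; they are not actual Shopkick data.

```python
# Per-screen on-boarding funnel: share of users completing each step.
# Counts are hypothetical; only the overall shape mirrors what we observed.
screen_counts = [
    ("welcome", 50_000),
    ("value proposition", 45_000),
    ("gender and age", 26_000),
    ("phone number", 21_000),
    ("location permission", 16_500),
    ("bluetooth permission", 12_500),
    ("home screen", 8_300),
]

prev = screen_counts[0][1]
for screen, count in screen_counts:
    step_rate = count / prev                    # completion vs. the previous screen
    overall_rate = count / screen_counts[0][1]  # completion vs. the first screen
    print(f"{screen:20s}  step: {step_rate:6.1%}  overall: {overall_rate:6.1%}")
    prev = count
```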

Brainstorming Session

These issues were not new to the Shopkick team, so I wanted to tap into their knowledge and surface what we already knew: what assumptions does our team hold? I organized a brainstorming session to gather their thoughts on the on-boarding screens, their feedback, and possible solutions. The collaboration exercises I led were meant to share knowledge about what our existing users and non-users prefer, clearly define what we believe is important for success, and lay the groundwork for longer-term concepts. Involving stakeholders in the research process also raises awareness of it.

The team brainstormed the on-boarding pain points they believed users were experiencing, then voted on the ideas based on internal knowledge to surface the highest-rated ones.


Process

Competitive Analysis 

Before starting new designs, I thoroughly examined our competitors' on-boarding processes and studied how other complex apps explained their FTUE (First Time User Experience). We found that many apps faced challenges similar to ours, and it was valuable to observe how they used step-by-step guides to highlight features and help new users start using them.

Exploratory Research & Testing

I set up 1:1 interviews with new users to understand their mental model of rewards and savings apps and of on-boarding/FTUE screens. Once I had a grasp of their needs and expectations, I showed them our current layout and gathered candid feedback on what they understood from it and what they found confusing.

After analyzing the interview data, we shared it with the team and had them help convert the findings into "How Might We" statements. This gave the designers an effective opportunity to brainstorm solutions.

A few patterns we noticed:

  • Participants showed distrust and discomfort with the more personal questions we asked early in the flow (age, phone number, location, and turning on Bluetooth). We weren't explaining what this information was being used for, which created a barrier.

  • Participants complained about the length of time it took to finish the screens.

  • Both new participants and current users explained that once they completed the on-boarding screens and landed on the main screen, they were confused about what their "first action" should be.

Our first round of redesigns focused on resolving these issues. We worked with legal to ask only the necessary questions until we could establish trust, removed the unnecessary screens (cutting time to completion from 2 minutes to 45 seconds), and created walkthrough screens to resolve the "first time user experience" confusion.
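Beyond the qualitative feedback, a simple way to check whether a redesign like this actually moves activation is a pre/post comparison of weekly cohorts. The sketch below uses a two-proportion z-test; the pre-redesign figures come from the funnel earlier in this case study, while the post-redesign cohort is assumed for illustration, not a measured result.

```python
# Hypothetical pre/post check on activation rate after the on-boarding redesign.
from math import sqrt
from statistics import NormalDist

old_n, old_activated = 50_000, 8_300    # week on the old flow (from the funnel above)
new_n, new_activated = 50_000, 9_600    # week on the new flow (assumed)

p_old = old_activated / old_n
p_new = new_activated / new_n
p_pool = (old_activated + new_activated) / (old_n + new_n)

# Two-proportion z-test for the difference in activation rates.
se = sqrt(p_pool * (1 - p_pool) * (1 / old_n + 1 / new_n))
z = (p_new - p_old) / se
p_value = 2 * (1 - NormalDist().cdf(abs(z)))

print(f"Old: {p_old:.1%}  New: {p_new:.1%}  z = {z:.2f}  p = {p_value:.4f}")
```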

Final Output

Click here to see final design output.
