We aimed to improve the search experience after talking to customers, conducting moderated tests, and uncovering points of friction during the search process.
Fair was a unicorn startup that aimed to disrupt the car ownership model with a platform that let customers take out short-term leases on cars. When I started working there, Fair had been relying heavily on quantitative data showing where users were falling off in the user journey, along with reviews on their social media platforms, but they didn’t know how to tackle their usability issues from a UX perspective.
So I began researching Google’s HEART framework with the UX lead to understand how we could evaluate usability in a more accurate, efficient, and holistic way. It’s not enough to know how many people are downloading or visiting the app, because they might not be converting to paying customers or having an optimal experience. For example, a user who’s frustrated with finding a car might stay active on the app for longer, inflating engagement rates, while never checking out, delaying conversion.
Using this framework, we established a high-level goal for the redesign based on our business goals and the goal of our users (“Helping people seamlessly find a car that they’d be happy with”), broke it down into sub-goals along the framework’s dimensions of happiness, engagement, adoption, retention, and task success, and then established metrics for measuring the success of subsequent redesigns against these goals.
Success metrics I’ll be focusing on:
Increase in conversion and task completion
Decrease in time on task
With our goals and metrics established, I conducted user interviews with a cohort of approximately 25 people who had just gotten a car with Fair, to capture their opinions while the experience was fresh. We learned that many of their complaints centered on the search and checkout process in the app. As a side project, I also created cards during this time and presented them to the Product team to prompt discussions around which solutions we should prioritize next.
You can see the full deck here.
This is what the Fair search experience looked like when I worked there. Using these screens, we conducted moderated usability tests to understand the app’s usability issues in more depth. We gave each participant a list of tasks and discovered key usability issues that caused people to fail to complete the checkout flow. For the sake of brevity, I’ll focus on only a handful of problems and solutions.
Shoppers struggled to find specific builds, and they failed to use the “Newest Cars” and “Lowest Prices” filter tabs as intended.
They misused the Search bar because they didn’t understand its constraints, trying to use it the way they would use Google. A few participants with limited vision failed to find the Search bar in the first place.
People couldn’t tell whether the cars shown were in their location and didn’t have confidence in the Location filter.
Customers were drawn to a lease by promotions but didn’t know how to find them, so they constantly flooded the customer service center. This was a big deal because customer service specialists also had to serve customers who needed immediate roadside assistance.
Here is the original design that we tested and improved:
I redesigned the search/landing page, placing a gray background behind the search bar and giving it a bevel, and I added text to the search bar to establish its constraints. I also created a bottom navigation bar with icons opening three pages: a Favorites page, so users could easily recall and compare cars, hopefully reducing their browsing time and drop-off; a Promotions page, to reduce customer service calls and speed users through car selection and checkout; and a Notifications page that would alert them to new additions to the car inventory, payment reminders, and so on.
The header would disappear and be replaced by a text description of the range the slider indicates, so users could be very specific with their range, giving them more clarity and control over the process.
Users lacked confidence in the location selector because their feed showed them cars in different cities, a consequence of limited inventory rather than a bug. People often didn’t notice the location selector at the top at all, and it disappeared once they started a search. To make location selection more deliberate, I created a flow in which tapping the search bar opened a page with input fields for both the search term and the location.
Users can also see their recent searches and recently viewed cars there, which the previous design was missing. This helps the many customers who get cold feet during checkout and decide to look at the inventory one more time before committing, only to lose the original car they wanted; plenty of users forget to “Favorite” cars as they go through the process.
Most participants didn’t open the “Newest Cars” tab. The tabs were only useful when the car they were looking for happened to be in the “Lowest Prices” tab; if it wasn’t, they didn’t tap on “Newest Cars” or look any further.
Most participants thought these tabs were sorting, not filtering, the results. They expected each tab to contain the number of results shown next to the search term (i.e., the same 250 results in each tab, sorted differently). As a result, they were confused to see fewer results than expected and wondered whether they were supposed to find a second page.
I presented my findings and design suggestions to the Product team, and they were met with enthusiastic approval. A few weeks later, however, Fair’s largest investor, SoftBank, pulled its money in the wake of the WeWork disaster and restructured the company, laying off the design team and shutting down the app. Had this not happened, I would have loved to A/B test the designs to iterate and improve them, and then work with developers to implement them.
Still, I learned valuable fast, efficient guerrilla research techniques during this process, which helped me realize that designers understand users in a more nuanced way when they participate in the research directly and talk to customers themselves. I also learned how to use qualitative data and design to change minds, including gently pushing back on C-level decisions.