In search of PMF • Part 2

Doubling our user acquisition

COMPANY
Ribon

PERIOD
5 months (dec/23 to may/24)

ABOUT RIBON

Ribon is a social impact startup that facilitates donations to nonprofits through an innovative and accessible model. On the platform, users can donate to social causes at no cost to themselves, using "tickets", a virtual currency earned by engaging with content.

These tickets are sponsored by philanthropists and companies committed to expanding collective social impact. Ribon generates revenue when users, after experiencing donations with the virtual currency, choose to donate their own funds; a fee is applied to these additional donations, supporting the platform's sustainability and potential for contribution.

SKILLS

Growth
Prototyping
Hypothesis testing
A/B Testing

MAIN OBJECTIVE

Increase the acquisition rate of native app users, who had proven to be more engaged than web users.

WHY?

With a new round of funding approaching, we were looking for metrics to prove Product-Market Fit (PMF). We had achieved a 1% conversion rate (part 1 of this study), but noticed low retention among paying users.

There was a lack of "stickiness" to demonstrate PMF.

At Ribon, user acquisition relied on partnerships: companies distributed donation tickets in exchange for engagement, such as purchases or survey responses. Once acquired, users could keep using the platform.

Our web flow had two main objectives: converting to paid donations and encouraging native app downloads.

We observed that web users were less engaged and tended to convert only once. As a result, we decided to focus the web flow exclusively on downloads, concentrating conversion efforts on the app, where engagement was significantly higher.

Web for downloads, App for conversion

[Chart: Weekly use frequency (dec), app vs web]

[Chart: Retention of paid donors acquired in aug, app vs web]
With Ribon Web now fully focused on downloads, we ran experiments to turn it into a download machine.

The growth squad, composed of a growth manager, a developer, and me as product designer, conducted weekly A/B tests, using a collaborative opportunity tree to plan and model each experiment. A data engineer supported us to ensure precise tracking and measurement.

Our A/B tests were implemented with GrowthBook, testing a variant group against a control group.
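As a rough sketch of how such a split can be wired up with the GrowthBook JavaScript SDK (this is not our actual code; the feature key, helper functions, and setup details below are assumptions and vary by SDK version):

```typescript
import { GrowthBook } from "@growthbook/growthbook";

// Hypothetical helpers standing in for the web app's real code.
declare const anonymousUserId: string;
declare function trackEvent(name: string, props: Record<string, string>): void;
declare function showDownloadCtaOnHome(): void;

async function setupExperiments() {
  const gb = new GrowthBook({
    apiHost: "https://cdn.growthbook.io",
    clientKey: "sdk-XXXXXXXX", // placeholder SDK key
    attributes: { id: anonymousUserId }, // used for deterministic bucketing
    trackingCallback: (experiment, result) => {
      // Forward the exposure event to the analytics tool (e.g. Mixpanel)
      trackEvent("experiment_viewed", {
        experimentId: experiment.key,
        variationId: String(result.variationId),
      });
    },
  });

  // Fetch feature and experiment definitions (exact call differs by SDK version).
  await gb.init();

  // Control keeps the default home screen; the variant shows the download CTA.
  if (gb.isOn("download-cta-home")) {
    showDownloadCtaOnHome();
  }
}
```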

Our main hypotheses centered around:
  • Ticket-based rewards
  • App-exclusive features
  • Sense of urgency
  • CTA impact

Download experiments

Monthly downloads (december): 3,784
Each experiment model included:
  • Hypothesis
  • Primary objective
  • Secondary metrics
  • Timeline
  • Design requirements
  • Development requirements
  • Outcome scenarios: projected results for success, no change, or decline

Some of our experiments

Exclusive nonprofits
Result: Negative
It didn't increase downloads and caused friction with partners.
Exclusive profile
Result: Negative
It didn't increase downloads.
Countdown post donation
Result: Negative
The timer increased downloads slightly, but not enough to justify using a deceptive pattern.
Benefits page
Result: Negative
We created a no-code page to highlight the app's benefits, embedding it as an iframe in our web app and making it accessible via the navbar.

However, it showed no significant impact and added complexity to the user flow, so we decided to remove it.
Download CTA on home post-donation
Result: Positive
It increased downloads by 50% compared to the control group.

Extra: no development required :)
CTA on the first onboarding screen
Result: Positive
It increased downloads by 33% compared to the control group.

It decreased our first donation activation rate, but we understood it was a good trade-off.
CTA on the first onboarding screen
Result: Positive
A simple copy change from "Get app" to "More tickets."

It increased CTA clicks by 200% and downloads by 37% compared to the control group.
Our main takeaway from the experiments was that users value the donation tickets more than the donation itself.

Attempts to make certain features exclusive to the app or restrict them on the web did not yield significant results, indicating that this was not the ideal approach.

Finally, we underestimated the complexity of tracking the funnel stages. We used UTMs to monitor events, but they caused implementation issues and data inaccuracies. As a solution, we adopted backup strategies, such as using unique links in the app stores (App Store and Play Store) to separate the performance of each test.
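As a rough illustration of the unique-link approach (not Ribon's actual implementation; the app IDs, tokens, and experiment key below are placeholders), per-experiment store links could be built like this:

```typescript
// Illustrative sketch only: per-experiment store links so installs from each
// variant can be attributed without relying solely on UTMs.
type Variant = "control" | "variant";

function buildStoreLinks(experimentKey: string, variant: Variant) {
  // Google Play passes the "referrer" query string to the installed app
  // (Install Referrer API), so UTM-style data survives the store redirect.
  const referrer = encodeURIComponent(
    `utm_source=ribon_web&utm_campaign=${experimentKey}&utm_content=${variant}`
  );
  const playStore = `https://play.google.com/store/apps/details?id=example.app&referrer=${referrer}`;

  // The App Store supports campaign links via provider (pt) and campaign (ct)
  // tokens, which later show up in App Analytics.
  const appStore = `https://apps.apple.com/app/id0000000000?pt=000000&ct=${experimentKey}_${variant}`;

  return { playStore, appStore };
}

// Each A/B test variant gets its own pair of links for clean attribution.
const links = buildStoreLinks("onboarding_cta_copy", "variant");
console.log(links.playStore, links.appStore);
```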

Key learnings and results

Acquisition rate evolution

Monthly downloads: 3,784 (dec/23) → 7,862 (may/24). We doubled it.
In part 1, we achieved a 1% conversion rate. But how did it perform as we increased the acquisition rate and user base?

Spoiler: we managed to maintain the 1% conversion rate within the app. :)

This result is the outcome of 2 design sprints, which also led us to a new strategic shift: focusing on the B2C market.

PMF JOURNEY CONTINUES

And how did our conversion rate turn out?

If you'd like, check out the continuation of this case study

In search of PMF • Part 3

Ribon App

B2C to B2B in 2 design sprints

App
Discovery
The opportunity tree

We used a collaborative opportunity tree (reference: Teresa Torres) to prioritize each experiment, attaching user interview excerpts, quotes, Mixpanel analyses, and Hotjar recordings to support each branch and hypothesis. Assessing complexity and potential impact was key to determining which experiments we would run.

How our FigJam board looked (security branch)

Bruna (Growth Manager)
Me (Product Designer)
Leod (Developer)
Leo (Data Engineer)