A little over a year ago, we were overhauling our metrics and looking for a way to measure overall happiness, primarily with our product. There are straightforward quantitative metrics you can track, which we do, such as conversion rates, churn, and ACV. But we wanted a signal that would serve as a leading indicator of how successful we were overall. We happened on Net Promoter Score, which surprisingly few people talk about (Eric Ries wrote a good post). There still isn’t much written about how to tactically implement NPS in a startup, or the lessons you learn along the way. Here is what we have learned and practiced.
Rolling your own vs. an existing product
Aside from generic survey tools like SurveyMonkey, at the time we couldn’t find a pre-baked NPS tool (there’s now CustomerVille, Promoter.io, Delighted, and others), so we decided to roll our own. It’s pretty simple to implement, it integrates directly into the customer experience, and we get direct access to the raw data.
How it looks
Here’s how we designed ours to look – it’s a modal that pops up when people open Contactually. You can grab the stars off of FontAwesome. People click on the appropriate star, or they click “Not Right Now” – it’s important to have an exit option. We left it out for a short time, and saw that when people were forced to make a choice, they would get pissed off and give a low rating solely because they were being bothered.
Now, those familiar with NPS may notice a discrepancy between our survey and the “proper” implementation. We offer 10 stars to choose from, but the standard survey has 11 options, from 0 through 10. Given how the resulting score is calculated, we decided to stick with our version, expecting to see little difference.
We also ask a followup question, which has proved immensely valuable.
When to ask
There doesn’t seem to be an established best practice here, so this is what we came up with. Since we’re a SaaS product, there’s a customer lifecycle to be mindful of, and we wanted to track it. Our system flags people to receive the NPS survey, and the next time they log in, they’re presented with the questions. We ask all users on the 30th day after signing up, and then every 90 days after we last asked them, repeating in perpetuity. We’re pretty happy with this pattern.
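For the curious, here’s roughly how that flagging could work in Rails terms. This is a simplified sketch – the last_nps_asked_at column and the method names are illustrative, not a literal copy of our code:

# Sketch of the survey scheduling – assumes a last_nps_asked_at
# timestamp column on users (illustrative name).
class User < ActiveRecord::Base
  # Users due for the survey: 30+ days since signup and never asked,
  # or 90+ days since we last asked.
  scope :due_for_nps, -> {
    where("(last_nps_asked_at IS NULL AND created_at <= :month_ago) " \
          "OR last_nps_asked_at <= :quarter_ago",
          month_ago: 30.days.ago, quarter_ago: 90.days.ago)
  }

  def mark_nps_asked!
    update!(last_nps_asked_at: Time.current)
  end
end

On login, if the current user falls in the due_for_nps scope, we’d show the modal; once they answer (or dismiss it), call mark_nps_asked! so the 90-day clock restarts.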
Measuring the data
This is pretty straightforward with the NPS survey. Keeping in mind my note above about 10 options instead of 11 (we store the star choices as values 0–9), here’s a bit of Ruby code:
# Group ratings by their stored value (0-9) and count each bucket.
counts = in_app_ratings.group(:value).count

# Detractors: stored values 0-5 (the six lowest stars).
negatives = (0..5).sum { |i| counts[i].to_i }

# Promoters: stored values 8-9 (the top two stars).
positives = (8..9).sum { |i| counts[i].to_i }

# NPS = % promoters minus % detractors, on a -100..100 scale.
(positives.to_f - negatives.to_f) / counts.values.sum * 100.0
Now, the fun comes in when you decide which survey results to look at. Our standard is a rolling 30-day window, but we sometimes go right down to the week if we want something more granular (day-by-day is too spiky).
The valuable thing you can do now is start separating results by user cohort or common characteristics. We separately measure and track new users, paid customers, and all users. You can imagine how those translate into different parts of your business – happier new users mean higher conversion rates, while happier paid customers mean lower churn.
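To make that concrete, here’s a sketch of computing the score over a rolling window and per cohort, building on the snippet above. The InAppRating model name and the paid scope are illustrative:

# Reusable version of the calculation above; takes any
# ActiveRecord relation of ratings.
def nps(ratings)
  counts = ratings.group(:value).count
  negatives = (0..5).sum { |i| counts[i].to_i }
  positives = (8..9).sum { |i| counts[i].to_i }
  total = counts.values.sum
  return nil if total.zero?  # guard against an empty window
  (positives - negatives) * 100.0 / total
end

# Rolling 30-day window of survey responses.
recent = InAppRating.where("created_at >= ?", 30.days.ago)

nps(recent)                                          # all users
nps(recent.joins(:user).merge(User.paid))            # paid customers (assumes a scope)
nps(recent.joins(:user)
          .where("users.created_at >= ?", 30.days.ago))  # new users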
Reporting and leveraging the data
We look at the data as part of our weekly metrics ritual. However, we also have the raw daily survey results emailed to the team automatically, as part of our larger nightly stats email (more on that in another post). Seeing customers express their love for your product when you first open your email in the morning gives you a little extra dopamine, and our customer success team reaches out to anyone with a negative score to see what’s going on. Overall, it helps establish a baseline of connection and empathy between the customers and our entire team.
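The nightly piece is just a scheduled job. A rough sketch – the job and mailer names here are illustrative:

# Runs nightly (e.g. via cron or a job scheduler); names are illustrative.
class NightlyNpsDigest
  def self.run
    ratings = InAppRating.where(created_at: 24.hours.ago..Time.current)

    # Send the raw scores and followup answers to the whole team.
    StatMailer.nps_digest(ratings.to_a).deliver_now

    # Flag detractors (stored values 0-5) for customer success outreach.
    ratings.where(value: 0..5).find_each do |rating|
      CustomerSuccessMailer.detractor_alert(rating).deliver_now
    end
  end
end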
With access to the full database of results, NPS scores can be used in analysis – e.g. examining common traits among high-scoring respondents. One practice we’ve used in the past is querying all low-scoring new users to understand what we could have done better.
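That kind of query is a one-liner in ActiveRecord. A sketch, assuming the followup answer lives in a comment column (an illustrative name):

# Followup answers from detractors who signed up in the last 30 days.
InAppRating.joins(:user)
           .where(value: 0..5)
           .where("users.created_at >= ?", 30.days.ago)
           .pluck(:value, :comment)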
With permission, you can also start circulating the results and using the qualitative feedback in marketing efforts.
We also send the raw NPS results to investors and advisors 🙂
Some gotchas
- Given that NPS scores are self-reported, keep in mind that a user’s perception (aka their score) may differ from the reality of their actions. People with 9s or 10s may love your product but still churn out for a different reason. People could be momentarily ticked off by a bug or something completely unrelated, and give you a 0 that day. My favorite anecdote is asking people why they gave us such a low score, and having them respond that Contactually is their secret, and they don’t want to tell anyone. Oh well.
- Even when averaging over a rolling 30 days, the results can sometimes be spiky. We cheer when the score hits new peaks, and dig in to see what’s happening when it dips. While we focus more on the overall trend, looking at the cohorts gives us a better idea of the cause (e.g. a lot of people signing up on the same day after a conference).
- The general goal is for the score to be positive (more promoters than detractors), though people will throw out different target numbers – some say +50. You can look at a lot of benchmarks, but with such a wide distribution, it’s clear there is no universal best practice here. We just care about the direction and velocity of our score, and focus on making it better each month.