Prioritization is arguably the most important part of product management.  We have lots of features to build and not enough resources to build them all.  What do we build? What do we hold off on? Which features are worth the extra effort?

A typical product brief will include:

  • Context - Tells the designers, engineers, and stakeholders why we are building this feature
  • Overview of Solution - Summarizes the proposed solution and the rationale behind it
  • Wireframes - Visuals to illustrate the solution above
  • User Stories - Describes the basic requirements for the engineering and design teams

A data-driven Product Manager will also include an Impact section. This section leverages existing data to quantify the potential impact on the bottom line if we build this feature. In this blog, I'll propose that all PMs adopt the impact section and use it to prioritize their backlog of features.

The Impact Section of a Product Brief

Let's consider an example feature improvement:
Flagging optional questions in the checkout flow.

The Impact section is broken down into three components.

1. USER OUTCOME - How will the user's behavior change (and how will we measure it)?

Ex. Users will be able to complete the checkout flow faster (time from checkout start to checkout complete)

2. GOAL - What is the bottom-line KPI that we're trying to improve?

Ex. Increase checkout conversion rate

3. POTENTIAL IMPACT - What is the impact (on the goal) we can expect from changing customer behavior?

Ex. We could increase overall checkout conversion by 15% (from 54% to 62%) if we were able to get 20% of users to complete checkout in under 3 minutes.

By deciding the user outcome and goal upfront, we can use existing data to quantify the potential impact of the new feature, which makes it much easier to prioritize features in the backlog.
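To make the arithmetic concrete, here's a minimal sketch of how a potential-impact estimate like the checkout example might be computed. The segment conversion rates and shares below are hypothetical assumptions (not from the brief), chosen so that moving an extra 20 percentage points of users into the fast segment reproduces the 54% → 62% figures:

```python
def blended_conversion(conv_fast, conv_slow, fast_share):
    """Overall conversion rate as a weighted average of the two segments."""
    return conv_fast * fast_share + conv_slow * (1 - fast_share)

# Hypothetical segment rates (assumptions, not from the brief):
# users who check out in under 3 minutes convert at 88%, slower users at 48%.
baseline = blended_conversion(0.88, 0.48, 0.15)  # 15% of users are fast today
target = blended_conversion(0.88, 0.48, 0.35)    # feature shifts 20 pts of users

print(f"baseline: {baseline:.0%}, target: {target:.0%}")  # 54%, 62%
print(f"relative lift: {target / baseline - 1:.0%}")      # 15%
```

The point of writing it down this way is that every input is a measurable quantity: the segment conversion rates come from historical data, and the share you expect to shift is the hypothesis the feature has to prove.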

An example: Improving a SaaS Platform for Engineers

Let's take a real-world example. This story comes from one of the companies I work with. They actually recommended we write this blog to share the story because of the impact it had on their roadmap.

The Company: A SaaS platform for engineering teams to manage their software development workflow.

The Feature: A redesigned "Add Users" page. They received feedback that it was hard to use, so they wanted to redesign it.

Now for the Impact section...

User Outcome: Increase total users added in the first week.
By redesigning the page, their customers could add more users to their platform during the trial period. This would let them experience the full potential of the platform because they could add their whole team.

Goal: Increase Trial to Subscription Conversion Rate
If they can add more users during the trial, they'll be more invested in the product and will be more likely to become a paid subscriber.

Quantifying the Potential Impact

The last step was to quantify the potential impact. They started with the data: they ran an analysis on historical data to understand how the number of users added in the first week influences the trial-to-subscription conversion rate.

This is the kind of analysis that my company, Narrator, generates in minutes using data from their data warehouse.

(Image: Example of Narrator's auto-generated analysis to quantify potential impact)

The analysis told them that the user outcome was not actionable, meaning it had no impact on the goal! Whether a customer added more users to the platform made no difference to whether they converted. They reached this conclusion through a series of checks (consistent impact over time, statistical significance). You can read more about the approach used in that analysis here.
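The post doesn't spell out the exact checks Narrator runs, but the statistical-significance piece can be sketched with a standard two-proportion z-test. The cohort sizes and conversion rates below are hypothetical, picked to illustrate a "not actionable" result:

```python
from math import sqrt

def two_proportion_z(p_a, n_a, p_b, n_b):
    """z-statistic for the difference between two conversion rates."""
    p_pool = (p_a * n_a + p_b * n_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical cohorts: 420 trials that added >4 users converted at 12%;
# 1,180 trials that added <=4 users converted at 11%.
z = two_proportion_z(0.12, 420, 0.11, 1180)
print(f"z = {z:.2f}")  # |z| < 1.96, so the difference isn't significant at 95%
```

A one-point difference on cohorts this size doesn't clear the significance bar, which is exactly the situation the team found themselves in: the behavior and the goal simply weren't linked.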

If the outcome had been actionable, the analysis would have told them the optimal number of users a customer needs to add to maximize the conversion rate, along with the expected impact on the KPI.
Ex. We can increase the trial conversion rate by 10% if we get half of the users to add more than 4 people to the platform.

(Image: Example impact simulation for an actionable insight)

Then they could use the impact size to compare the user page update against other features in their backlog. If it were the feature with the biggest impact, the choice would be easy!

No Impact. Now What?

Alright, so adding more users doesn't help the trial conversion rate. But intuitively, the team knew there were benefits to revamping the user page. Next, they tested whether the time it took to add a user influenced the conversion rate. The idea was that the speed of adding a user was a proxy for an improved experience on the page. This was actionable! So they optimized the page for speed rather than the number of users added.

This process of tweaking the user outcome may seem like cheating - but it's not a bad thing! Instead, the product team is iteratively testing their hypotheses about the user behaviors that matter (even before they start building)! And as a result, they'll end up with a feature that's more likely to be successful.

Sharing the results

After launching the new user page, they monitored user behavior to see whether they had actually decreased the time to add a new user, and then whether that really moved the needle for trial conversion!

The final benefit to the impact section is the ability to communicate impact to the organization.

"We built a new "Add User" page, so our users could add new people quickly. We decreased the time to add a new user by 5.2 seconds and improved trial conversion by 3.1%. Here's an analysis to show our expectations pre-launch, and the true impact it had."


The Impact section is a crucial part of making sure we build the right features with the right goals!

With this small change, product teams will be able to:

  1. Always prioritize based on the highest impact features
  2. Deliver clear goals to design and engineering
  3. Monitor the impact and communicate the results

The resistance

The biggest obstacle Product Managers face when implementing the Impact section is access to the data or a data scientist to run the analysis. When it comes to data, PMs are usually faced with a whole slew of roadblocks:

  • "No access to the data"
  • "This analysis will take too long"
  • "The data scientists are busy"
  • "The data isn't trustworthy"

With Narrator, product managers can automatically set up the exact analysis needed to quantify the impact and track the results (in minutes, without a data scientist!). We've seen many companies struggle to bring data into their product development process, but they're amazed at how simple and fast it is when they start using Narrator.

Book a 45-minute hands-on demo with us to see how you can use Narrator to generate these analyses and monitor the results using your data too.