Innovation ROI: What’s YOUR best practice for little “i”?

Extending features of existing products (that is… little “i”) versus new product innovation (big “I”) has sparked a lot of debate about the best way to prioritize product requirements.

Let’s face it… the features that move to the top are often those that the most profitable customer needs or that the CEO thinks are important. In many cases, legacy product request prioritization is not scientific. But should it be?

If this were a perfect world, how should a good product manager prioritize features in a release? Do you segment and rank benefits that align most closely with long-term business strategy… those that return the most profit over the near term… those that are easiest to develop… etc.?

Everyone seems to have their own approach… their own “best practice.” Most product managers assign attributes to each feature. Examples include:

* Build customer loyalty
* Create competitive advantage
* Further business strategy
* …and, of course, near-term financial contribution (more revenue, lower cost, etc.)

Aligning each feature request to a list of attributes is vital. As business cycles change, so does product investment. In a down cycle, for example, management may want to advance features that provide revenue sooner versus investing in features that return value further out.

Most product managers I speak with assign a weighting to each benefit on a scale of 1-10 based on stakeholder voting. They then calculate a weighted sum of all the benefits in a release to arrive at a “Total Return.”

Then they assign a cost value for implementing the release, the “Investment,” again on a scale of 1-10. Dividing the “Total Return” by the “Investment” gives a simple (really simple) ROI metric for the release. The goal is to craft a portfolio of features that best meets the needs of the stakeholders.

You can then use this score to rank the features, giving a straightforward, structured method for prioritizing each release. This methodology is far more repeatable and defensible than the ad-hoc judgments often used to prioritize requirements and features.
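As a minimal sketch of the scoring method described above (the feature names, attribute weights, and scores here are all illustrative assumptions, not data from the article), the “Total Return” divided by “Investment” calculation might look like this:

```python
# Weighted-scoring ROI sketch. All names and numbers are hypothetical.

ATTRIBUTES = ["loyalty", "competitive_advantage", "strategy", "financial"]

# Stakeholder-voted weight for each attribute, on a scale of 1-10.
weights = {"loyalty": 6, "competitive_advantage": 8, "strategy": 7, "financial": 9}

# Each candidate feature: benefit scores per attribute (1-10) and an
# estimated implementation cost, the "Investment" (1-10).
features = {
    "single_sign_on":  {"scores": {"loyalty": 7, "competitive_advantage": 5, "strategy": 8, "financial": 4}, "investment": 6},
    "usage_dashboard": {"scores": {"loyalty": 5, "competitive_advantage": 7, "strategy": 6, "financial": 8}, "investment": 4},
    "legacy_importer": {"scores": {"loyalty": 8, "competitive_advantage": 3, "strategy": 4, "financial": 6}, "investment": 9},
}

def roi(feature):
    # "Total Return" = weighted sum of benefit scores; divide by "Investment".
    total_return = sum(weights[a] * feature["scores"][a] for a in ATTRIBUTES)
    return total_return / feature["investment"]

# Rank the candidate features by the simple ROI metric, highest first.
ranked = sorted(features, key=lambda name: roi(features[name]), reverse=True)
print(ranked)  # → ['usage_dashboard', 'single_sign_on', 'legacy_importer']
```

Note that changing the attribute weights (say, boosting “financial” in a down cycle) reorders the list without touching the per-feature scores, which is exactly the flexibility the attribute-alignment step is meant to buy.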

Obviously, the trick is to get the stakeholders to value the various features similarly. Value agreement isn’t expected to be exact, but outliers can be a problem. For example, the client user committee may value highest the features that give immediate operational relief (i.e., a short-term focus), the sales reps may value the features they have already promised to prospects (i.e., whatever generates new sales in the next quarter), and the marketing team may value highest the features that improve competitive advantage in the new market segment they have been promoting.

Consensus? I think not.

Years ago, a mentor told me that being a product manager is about balance. And she was right. All stakeholders are right… all the time. Disagreements over feature value assignment are largely a matter of perspective. Like skiing on ice, it’s all about negotiating the bumps and achieving perfect balance.

One response to “Innovation ROI: What’s YOUR best practice for little “i”?”

  1. In my last role in PM I was dealing with a product portfolio where approx. 20% of the revenue came from new license sales and 35% from existing contracts. The rest of the revenue came from other services like consulting and training.

    An additional challenge was that the 20-35 split was 10-60 in some countries and 40-20 in others. So it was always a balancing act between driving new business and protecting existing business by delivering value for money on contractual obligations.

    I therefore used to start by setting a theme for the new release. This theme was primarily driven by business strategy and competitive positioning. (And would thus indirectly protect the installed base and drive new business.)

    The new feature-set would be based on the theme and selected from the overall list of enhancement requests collected from all over the globe. Then local teams were asked to prioritize the selected set.

    In most cases the priorities were pretty much aligned, but where there were differences, they were mostly similar within each global region (North America, Europe, etc.). So in the worst case we had to weigh the priorities based on each region’s contribution to the development, which in turn was based on its share of the overall business.

    On rare occasions this approach also created the opportunity for a region to provide additional funding for a temporary extension of the development and QA teams, to guarantee some of the features that missed the cut. This was only done when there was no risk to the rest of the project, and it obviously pushed local/regional management to produce a very clear ROI calculation / business case to justify the additional spending.
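The regional weighting the commenter describes (weighing priorities by each region’s share of the overall business) could be sketched as follows; the region names, share figures, and feature votes below are illustrative assumptions, not the commenter’s actual numbers:

```python
# Hypothetical sketch: blend regional priority votes by business share.

# Each region's share of the overall business (must sum to 1.0).
region_share = {"north_america": 0.45, "europe": 0.35, "asia_pacific": 0.20}

# Each region's priority vote (1-10) for two candidate features.
regional_priority = {
    "offline_mode": {"north_america": 9, "europe": 4, "asia_pacific": 6},
    "audit_trail":  {"north_america": 5, "europe": 8, "asia_pacific": 7},
}

def weighted_priority(feature):
    # Share-weighted average of the regional votes.
    return sum(region_share[r] * regional_priority[feature][r] for r in region_share)

for f in regional_priority:
    print(f, round(weighted_priority(f), 2))
```

The weighting only matters when regions disagree: here “offline_mode” edges out “audit_trail” because the largest region votes for it, even though two of the three regions prefer the other feature.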
