Friday Rant: Of Waves, Quadrants and Vendor Comparisons

The only thing that gets vendors more riled up than having to participate in a relative graphical ranking exercise (e.g., Gartner Magic Quadrant, Forrester Wave) is the result, which inevitably disappoints the majority of the participants. Trust me, I was the point person for these rather futile exercises in my earlier years working at a vendor (in some of which we fared quite well, yet had no business even being on the map, for reasons I'll get into in a minute). The most recent of these ranking exercises comes courtesy of Forrester Research (and also courtesy of Emptoris, which has licensed the results for all to read). In my view -- and feel free to disagree -- this report represents a textbook illustration of how smart analysts can be led astray in their efforts to present a pragmatic and, above all, useful vendor comparison. I would also like to state in advance that I'm not happy about having to write this piece -- but not to do so would be irresponsible. I hope that my critique can contribute to improving future models for vendor analysis and thereby make them more useful.

During the vendor review process for this Wave, after Forrester completed an initial draft, I received calls from multiple vendors who questioned Forrester's objectivity. I responded in every case that I have no reason to doubt their objectivity -- quite the opposite, in fact; I hold their overall objectivity in very high regard. But I do doubt their research process and approach, given the result it has delivered to the market. Now, don't get me wrong. Some vendors are, relatively speaking, right where they deserve to be (e.g., I always recommend that companies who can afford them shortlist both Ariba and Emptoris from a sourcing perspective). Whether one is better than the other when it comes to e-sourcing will really come down to the specific needs of an individual customer. But it's always a good idea to look at both.

Where I believe this Wave really misses the boat is in its analysis of the ERP providers and the other best-of-breed vendors. Now, Forrester will say they measured vendors against a defined set of criteria. But ask yourself: how were those criteria determined to begin with? And bear in mind that I'm talking not only about the criteria used to rank vendors, but also about the criteria that decide who gets into the graphical Wave in the first place. Most of it does not come from real-world experience gained from performing sourcing or being deeply involved in deal flow.

It comes from a few places. First, it comes from analysts getting together in a brainstorming session to define what they think is important. Second, it comes from asking vendors -- and listening to vendor briefings -- and trying to cull key metrics by which to measure providers. And third -- all too rarely -- it comes from asking practitioners what they think should matter. It should also come from constant monitoring of deals to see how vendors are stacking up, who is winning, and why. But this level of criteria determination requires that analysts actually get into the deal flow, advising at least a few companies per week, not only on initial vendor selection but throughout the process.

Because Forrester failed, in my view, to gain an accurate picture of the ERP and best-of-breed providers and how they actually perform in real customer RFP situations, the ultimate Wave outside of Ariba and Emptoris was skewed (and I even have some issues with how those two were placed, but more on that later). For example, where the heck is Iasta? They're off the map, that's where, because they did not meet some "defined set of criteria". I apologize in advance for the vulgarity of my language, but the exclusion of Iasta is complete bull. Iasta is one of the fastest-growing e-sourcing vendors in the market and is the choice of many consulting firms I talk to on a regular basis (sourcing consultants know what is needed in e-sourcing -- trust me on this one). Moreover, Iasta shows up in many of the same deals that Emptoris and Ariba ultimately end up in. Their exclusion baffles me.

But aside from leaving out Iasta because they did not meet an arbitrary set of ranking criteria -- which alone shows that Forrester is not in the industry's e-sourcing deal flow on a daily basis, because if they were, they would have in part defined their criteria around the vendors who actually make it into deals -- Forrester also falls flat in its rating of Oracle relative to others. I suspect that Forrester was rating Oracle R12, which, we'd probably all agree, is a decent little product. But it lacks many of the core capabilities that have been standard among best-of-breed providers when it comes to negotiation management, bid analysis, optimization and the like. In fact, Oracle has admittedly remedied a handful of these fundamental shortcomings in R12.1, a product that is not even available in the market yet (but is due out sometime in 2009). So how did Oracle score where they did? Because Forrester liked the "tactical" aspect of what they do, whatever that means.

On a different but related rant, assigning 50% of the rating to "company strategy" is absurd on a range of levels: half of the rating criteria are based on strategy, not product. Why is Forrester in a better position than customers and shareholders to judge strategy? SAP is a prime example. SAP gets knocked for not having a vision relative to Oracle and others. Talking to the rank-and-file SRM team members, I agree on this front. But go up a level in the organization and they are one of the more forward-thinking organizations at the moment when it comes to what they might produce next. And consider Emptoris, which scores the highest rating for "strategy". Emptoris' strategy for the past 18 months has largely consisted of its CEO going out and trying to convince a number of companies to take his paper (given that they've not been able to do cash deals, at least until now). That's worked really well, hasn't it? Seen many deals get done? Now, not that deals are inherently good -- they're not. But let's be honest and realistic: where do analysts gain the expertise and insight to rate a vendor's overall strategy? They can discuss it, perhaps, but not invent a relative rating.

Perhaps the most telling reason to discount this Wave is that customer satisfaction represents only 10% of the findings. This tells me either that Forrester did not do its homework when it comes to checking enough references or that it does not know how these products are actually used in the market. Customer satisfaction, in my view, is one of the most important factors I weigh when recommending products to others.

I could go on for pages with other ideas about how I believe providers ought to be evaluated, but I'll compress a few final thoughts into this paragraph. Take the rating of the PeopleSoft product. In my view, it's junk relative to the market when it comes to sourcing (ask five PeopleSoft customers who have also looked at other vendors how it stacks up, and tell me if they say anything different). In services procurement and P2P, PeopleSoft fares better, but even putting it on the map for sourcing is crazy, let alone close to SAP (which has a vastly superior product -- in fact a set of products -- in functional and customer use-case comparison). And what about Zycus? They have a few sourcing customers, but the product is new -- certainly not worthy of a spot on a comparative map until it achieves a longer track record. I also take issue with BravoSolution's placement, because I know how they've beaten, on capability and references, numerous vendors who for some reason fare better in this report. And what of Global eProcure and Perfect? Maybe not the very best products, but they certainly deserve a chance to make it onto the field, so to speak, especially relative to Siemens/UGS and Agentrics, who make their way into only a tiny percentage of overall deals.

Rather than go on with my rant, I'll get on with my day. What do you think of this Wave and of similar methodologies in general? How can we improve them and make them useful both for shortlists and for practitioner product evaluations? There's got to be a way to make these work. I've long been a fan of the KLAS ranking approach in healthcare technology, a field where I've done some work and where I've actually found analyst rankings genuinely useful in learning about vendors and products. KLAS relies specifically on customer feedback and analysis for its input. What do you think?

Jason Busch
