A.T. Kearney’s ROSMA Procurement Benchmark: The Good, The Bad, and The Implications (Part 1, The Good)


Management consultancy A.T. Kearney has announced a partnership with the Institute for Supply Management (ISM) and the Chartered Institute of Purchasing and Supply (CIPS) to have ISM and CIPS offer A.T. Kearney’s ROSMA service to their respective memberships.

ROSMA (pronounced “raahzma”) stands for Return On Supply Management Assets. It’s a complimentary procurement performance-benchmarking service (and associated benchmark) offered by A.T. Kearney – arguably the best-known management consultancy in procurement. I reviewed the benchmark survey directly, and in addition I had a chance to speak with A.T. Kearney top brass, who were kind enough to answer some of my questions. For those who don’t know about my background, I worked for The Hackett Group from 2004-2013 and helped expand the Hackett Procurement Functional Benchmark and related measurement products/services, so I know the gory details of procurement benchmarking and the competitive landscape for these products/services. With that said, let’s get on with the analysis!

First off, you should probably ignore much of the marketing fluff that’s in the press release, where ROSMA supposedly is “the gateway for the business community to fully understand procurement's efforts and how they relate to shareholder value,” or that “you will be positioned with a quantifiable means to gain credibility and increase your profile as a strategic business partner.” It doesn’t so much “help supply management executives demonstrate the function's financial impact on the bottom line” as it does help them estimate it – but more on this later.

The press release is somewhat correct in saying that “it is the classic return on investment model for procurement or supply management.” Basically, it is a glorified “Procurement ROI” metric* that divides the hard financial benefits delivered by the procurement organization in an annual benchmark period by the investment made in procurement “assets” during that period (which is mostly period costs approximated by procurement’s budget). See the below diagram for the basic framework.
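To make the arithmetic concrete, here is a minimal sketch of that kind of composite. The dollar figures and component names are purely illustrative assumptions, not values or definitions from the actual benchmark:

```python
# Illustrative sketch of a "Procurement ROI"-style composite like ROSMA.
# All figures are hypothetical; the real benchmark defines its own components.

def procurement_roi(hard_benefits: float, procurement_investment: float) -> float:
    """Hard financial benefits delivered in the benchmark period, divided by
    the period's investment in procurement 'assets' (approximated here by
    procurement's operating budget)."""
    return hard_benefits / procurement_investment

# Example: $40M of implemented hard savings on an $8M procurement budget.
benefits = 40_000_000   # implemented, hard-dollar benefits for the period
budget = 8_000_000      # procurement budget (proxy for the "assets" invested)

print(procurement_roi(benefits, budget))  # → 5.0, i.e., a 5:1 return
```

The point of the exercise is that everything interesting lives in how you define the numerator (which savings count?) and the denominator (which costs count?), which is exactly where the benchmark’s judgment calls come in.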

Before I dive into the composite metric itself, let’s talk about the ROSMA service, because it is a service (with associated service mark) offered by a service provider. And there’s nothing terribly proprietary in either the metric itself or its components. That’s actually a good thing, as I’ll discuss later. But let’s get back to assessing this service:

ROSMA is offered free of charge to participants – via ISM and CIPS – for a five-year period. I’m not sure why it’s five years, but in any case it’s “free” in terms of cash outlay. The price is right!

While the study is free, and it only takes “30 to 45 minutes” to fill out the survey forms, it still requires a fair degree of time investment to collect all the data needed. But this is a necessary evil. Insight correlates with effort. Luckily, if you’re a devoted Spend Matters reader, you’re an avid adopter of technology and have the data you need for spend, influence, savings, project tracking, spend compliance, procurement costs, etc. – right? However, if you don’t, A.T. Kearney will be happy to help you! In fact, the A.T. Kearney Procurement Performance Management (PPM) solution offering, which we’ll discuss later, is a very compelling one that incorporates ROSMA.

ROSMA is an annual benchmark. Although there is some “well aged” data in the database, it is available to use right now for current benchmark participants. In future years, hopefully there will be enough data to keep it rolling annually with fresh data and to allow some different peer-group comparisons (based on company/spend size, manufacturing vs. service industries, etc.). The new data sets and reports created for ISM/CIPS will arrive in the spring timeframe, based on the previous year’s data.

As a bit of historical perspective, ROSMA actually started from some work done with Anheuser-Busch InBev (ABI), and it fits well with the performance management approach taken there (e.g., use of zero-based budgeting and other performance management techniques that ensure that all costs/investments are truly justified). This value focus was also apparent during some of the post-merger activities when InBev purchased AB (as discussed in the WSJ article here). Tony Milikin, ABI’s Chief Procurement Officer (a former client of mine and a well-respected CPO in the industry), is quoted in the ROSMA press release and has been a strong advocate of not just ROSMA, but also the PPM toolset that helps “industrialize” the planning, execution, and “ROI” of the portfolio of sourcing projects (and other projects too). The PPM tool is also designed to measure which “chessboard” strategies were applied and how to “divvy up the credits” on the value generated. It’s a bit of a Tayloristic view of the “sourcing factory,” but it is a closed-loop system that can be applied to some extent to any of the multiple eSourcing technology providers who have some of this project portfolio management approach built into their applications. Anyway, back to ROSMA...

A.T. Kearney has also socialized the ROSMA metric with other clients and has also baked it into its bi-annual pseudo-benchmark research study called the Assessment of Excellence in Procurement (AEP). The AEP study is an A.T. Kearney one, and not related to ISM / CIPS. A.T. Kearney is also integrating ROSMA with its PPM service, which is a more granular solution offering that ties the measurement down to an execution level via procurement projects and associated procurement employees. I’ll cover PPM, the A.T. Kearney offering, and the broader topic, in future posts. But if you want to dig into some related research, check out our piece on Supply Performance Management and on PMOs in Procurement.

The service allows a company to measure different groups within a company as separate benchmarks, but it doesn’t yet do the cross-group comparisons within the tool. That’s a value-added service that A.T. Kearney would have to perform – for now at least.

The benchmark collects savings/value data for four mega buckets of spend: Direct, Indirect, CapEx, and Goods for Resale. Classifying spend is by no means an exact science. For example, CapEx is an accounting designation whereas Indirect is not – though even that distinction gets murky once you consider indirect vs. direct overhead allocations, so let’s not confuse things further.

Although there was a check box in the study that gives A.T. Kearney rights to the data submitted, one of the attractive features touted by A.T. Kearney management was that a user can opt out or opt in as to whether someone from A.T. Kearney will call them or not. I didn’t verify this in the tool, but will assume that it’s in place. Bravo. This is an important feature.

There is no capability assessment in the benchmark, although it does have some implied capability measurement as I mentioned before. This is actually a good thing! Still, if a firm opts to use A.T. Kearney’s online PPM assessment tool (which includes the ROSMA performance component), it can also do a capability assessment, such as using the A.T. Kearney “chessboard” (a smorgasbord of procurement practices jammed into the requisite consultant 2x2 matrix – except it’s on steroids and is an 8x8 grid). Hopefully, A.T. Kearney will keep a “top performer” peer group based on performance only, not on a combination of performance metrics and practices. If you want “top capability,” do that separately. I talked about the separation of value (which procurement services add value to stakeholders – in the language of supply performance metrics) vs. performance (service levels delivered against those services – including procurement-specific metrics and supply performance metrics) vs. capabilities (and supporting practices, tools, intelligence, etc.) here and here – and even did a whole study on the topic here for CIPSA.

By keeping the measurement framework clean, you can then credibly correlate practices (and company/environmental factors) to actual performance and thereby determine which chessboard strategies impact which performance metrics, and by how much. When I was at Hackett, we called practices that correlated with overall procurement performance “certified best practices.” Yes, it’s good marketing, but it does help show the level of impact. By contrast, if you co-mingle performance and practices and then create a leader vs. follower analysis based on both, the peer group gaps become due in part to self-serving math.

Finally, one of the good things about ROSMA is that it at least brings awareness to problems like:

  • Matching investments in procurement processes to the related spend scope (i.e., total spend, influenced spend, actively sourced spend) – and the value created
  • Identified savings vs. implemented savings (and the issue of non-compliance)
  • Budget impact and demand management vs. just price/cost reduction
  • Value beyond cost reduction
  • Cost reduction vs. cost avoidance
  • Dealing with procurement processes occurring outside the formal procurement organization

Unfortunately, the current incarnation of the benchmark does not actually address these issues terribly well. I will write about them in the next installment of this series. It’s nothing to get freaked out about, but there are some things that can definitely be improved, and many can be addressed fairly easily before the next release. This is an area that simply has some inherently difficult issues. I will also provide some insight into the lower-level metric components of this composite.

So, what’s the conclusion here? The bottom line is that A.T. Kearney is bringing a valuable “freemium” performance benchmark offering to ISM and CIPS members. For ISM and CIPS, it’s a win for the membership, and the two associations can use the insights they gain from these A.T. Kearney products to help them tune their own value propositions (e.g., training services, which is a growing revenue stream for both).

Yet the benchmark is by no means perfect. It’s a good start, and it is good for an industry where procurement practitioners have had to scrounge around for benchmarks from different sources. It is also a wake-up call to Hackett, which should re-think its “freemium” strategy beyond its gold-standard P2P-only freemium benchmark that I describe here.

And for A.T. Kearney, this is also obviously a win, as it has harnessed its tight relationship with ISM and CIPS to help bolster its dominant position in the procurement consulting industry. Of course, one could argue that such a relationship has historically been too cozy, as we wrote here back in 2008. But that’s water under the bridge from previous leadership, and consultants are currently off the board (except Steve Miller at Accenture, who was at Disney when he was elected).

We have nothing against collaborations such as the ISM / A.T. Kearney Center for Strategic Supply Leadership (CSSL), especially if they serve the missions of ISM and CIPS to help elevate the role of the profession. However, they must be transparent and open to all, and ISM seems to be turning over a new leaf here. Heck, the fact that ISM partners with little-old-us on our topical snap polls is testament to its new openness, new thinking, and new leadership under Tom Derry.

But, let’s get back to ROSMA and to some important questions:

  • Does ROSMA help elevate procurement’s role – or does it actually work against it?
  • Is the metric itself ready for prime time? If not, what needs to be fixed?
  • Should a commercial management consultancy with a vested interest in project-based engagements be the one to drive this effort?
  • Finally, what is the impact of ROSMA on the broader benchmarking and consulting/advisory market?

I’ll deal with these questions in subsequent posts, as I’m already way over our word count on this one. Stay tuned!

*Note: Technically, real ROI as expected by a CFO is calculated as Return minus Investment, divided by Investment – that is, (R-I)/I rather than just R/I. Still, the “Procurement ROI” metric being used here is simpler and more favorable to procurement, and if CFOs don’t mind, then let’s move on. Besides, if you had to pick one procurement-centric metric (rather than an enterprise-centric one such as a % improvement to EV, EVA, ROIC, ROCE, ROA, CROCI, etc.), this is a good one that I’ve seen many procurement organizations use to reflect the more easily captured hard-dollar benefits.
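The gap between the two formulations is easy to see with a quick sketch (the dollar figures below are hypothetical, purely for illustration):

```python
# Hypothetical figures: $40M in hard benefits on an $8M procurement investment.
returns = 40_000_000
investment = 8_000_000

procurement_roi = returns / investment              # R/I, the ROSMA-style ratio
classic_roi = (returns - investment) / investment   # (R-I)/I, the CFO-style ROI

# Algebraically, R/I = (R-I)/I + 1, so the simpler ratio always reads
# exactly 1.0 higher than the classic ROI - hence "more favorable."
print(procurement_roi)  # → 5.0
print(classic_roi)      # → 4.0
```

In other words, a procurement team quoting a “5:1 return” under R/I is quoting a 400% return under the CFO’s definition; the ranking of organizations is unchanged, only the headline number shifts.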

First Voice

  1. kris colby:

    Insight correlates with effort. Agree whole-heartedly. More importantly, it would be a huge win for both practitioners and providers if we could consolidate the number of benchmarking surveys that folks are asked to participate in. Not only would this reduce the “survey time” of participants, it would also create a better data set from which to draw insights. As a provider, I would welcome collaboration with multiple others in order to achieve better response rates and happier customers.
