We’ve covered A.T. Kearney’s ROSMA benchmark in two previous installments, links to which you can find at the bottom of this post. In today’s analysis, we’ll consider whether the ROSMA metric is “ready for prime time” and, if not, what areas need addressing. But perhaps we should begin by posing the question directly and answering it: Is ROSMA ready for all the attention it will receive because of its ISM and CIPS affiliations?
Almost, but not quite. More importantly, it is only one metric – not the metric. Of course, there is no such thing as the metric anyway, unless it is a percentage improvement against a single enterprise valuation number that the firm uses (e.g., ROIC). We’ll return to this topic later.
Certainly, there are some areas to improve – not just ROSMA itself, but also the guidance (or lack thereof) provided in the survey instrument used to gather the ROSMA input data. We won’t hang out all the dirty laundry here and will provide A.T. Kearney some detailed feedback and recommendations offline. But in this series, we’ll continue to provide insight and then follow up with a generic “FAQ” that practitioners can use to evaluate any procurement benchmark service provider – and also to facilitate an internal dialogue on how to measure procurement value creation.
Readers will see that the variation in the assumptions behind the measurement instrument is large enough that consistent measurement – and therefore consistent, non-proprietary guidance – becomes very important (more on this topic later).
But does the metric work? And for what?
ROSMA, at the highest level, seems to be a good metric in terms of matching financial benefits to financial investment. It provides a DuPont-model-style decomposition into both elements. On the investment side, it includes “period costs” – similar to what many procurement organizations call “OpEx” (operating expense) – which correlate closely with procurement’s budgets.
It also includes some capitalized costs (e.g., for big procurement projects) – hence the term “assets.” It’s not the best name, since most costs are in fact period costs, but I understand why A.T. Kearney shied away from the term “investment” (CFOs often have narrow definitions for this – and marketing procurement staff have used the term to mean the spend itself).
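To make the ratio structure concrete, here is a minimal sketch of a DuPont-style benefits-to-investment calculation as described above. The function name, the three-year amortization of capitalized costs, and all dollar figures are our own illustrative assumptions – this is not A.T. Kearney’s actual formula.

```python
# Illustrative sketch of a ROSMA-style ratio: financial benefits delivered
# divided by the spend management investment base, where the base combines
# period costs (procurement OpEx) with amortized capitalized project costs.
# All figures and the amortization choice are hypothetical.

def rosma_style_ratio(hard_benefits, period_costs, capitalized_costs,
                      amortization_years=3):
    """Return a benefits-to-investment ratio for one benchmark year."""
    investment = period_costs + capitalized_costs / amortization_years
    return hard_benefits / investment

# Hypothetical example: $40M of measured benefits against $8M of OpEx
# plus $6M of capitalized project costs amortized over three years.
ratio = rosma_style_ratio(40_000_000, 8_000_000, 6_000_000)
print(round(ratio, 2))  # 4.0
```

The point of the decomposition is that either side can move the ratio: growing benefits, or shrinking (and correctly scoping) the investment base.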
Anyway, the bigger philosophical issue is that ROSMA is more of a metric on the Returns Of the Purchasing Department (ROPD?) than measuring the returns on supply management processes that may be performed by resources that don’t report into purchasing (or that aren’t fully dedicated to procurement). Such resources are increasingly collaborating with procurement in cross-functional and cross-enterprise teams – and on a fractional FTE basis.
So, the metric seems heavily biased toward justifying the procurement department’s existence rather than capturing the broader value created via best-practices-driven supply management processes that are transformed by procurement leadership.
Also, ROPD, er, ROSMA, doesn’t deal with the issue of measuring the performance of spending not influenced by the formal purchasing department. If key supply lines are shut down, or suppliers create bad publicity, or purchase prices spike in spending areas that are NOT influenced by the procurement department, did the enterprise process for supply management work well? No.
Conversely, if procurement cherry-picks its opportunities to make its ROI metric look good while the rest of the business is bleeding cash, those spend owners won’t be overly pleased to see Purchasing getting its bonus check while they are suffering. So, it’s better to benchmark the non-procurement-managed spend too and compare its performance (and capability adoption) to properly managed spend – in order to help build a solid business case for change.
On the benefits side, there is a decomposition to managed/influenced spend and then down to a further subset of actively managed spend via procurement “initiatives” that deliver measured benefits within the benchmark year. This is where A.T. Kearney’s PPM tool “plugs in” to align staff to the projects that ultimately create those benefits.
This initiatives-based spend is:
- Based upon managed/influenced spend, but such influence is treated as binary: spend is either managed by procurement or it’s not. A late-stage, quick RFx process is treated the same as early involvement with deep influence and best-practice adoption.
- Biased towards hard cost reductions, but also allows demand-side benefits capture (and therefore “spend benchmarking” rather than just cost benchmarking). This creates an issue that we’ll discuss later.
- Inclusive of an estimated compliance percentage metric that is used to adjust the validated anticipated savings in order to create an actual savings estimate.
- Tracked at the mega-category level (i.e., Indirect, Direct, CapEx, Goods for Resale) and also has a separate “soft” benefits bucket targeted at cost avoidance (e.g., against supply market performance).
- Not inclusive of broader hard-dollar value components such as TCO elements. These are picked up in a broader managed/influenced spend bucket called “beyond spend.” This is a bit of a confusing non-industry term that is not really explained in the survey text, but it refers to TCO-related cost metrics (including capital costs reduced through supply-side working capital improvements) and even some net-income impact from top-line-oriented projects.
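The compliance adjustment mentioned in the list above can be sketched in a few lines: validated anticipated (negotiated) savings are scaled by an estimated compliance percentage to yield an actual-savings estimate. The function, the flat-rate scaling, and the figures are hypothetical illustrations; ROSMA’s actual survey mechanics may differ.

```python
# Hypothetical illustration of a compliance-adjusted savings estimate:
# negotiated savings are discounted by the user's estimated compliance
# rate, since value negotiated is not value captured.

def estimated_actual_savings(anticipated_savings, compliance_pct):
    """Scale validated anticipated savings by estimated compliance (0-1)."""
    if not 0.0 <= compliance_pct <= 1.0:
        raise ValueError("compliance_pct must be between 0 and 1")
    return anticipated_savings * compliance_pct

# $5M of validated negotiated savings at an estimated 80% compliance:
print(estimated_actual_savings(5_000_000, 0.80))  # 4000000.0
```

Note that the compliance percentage is user-provided, which is exactly why consistent guidance matters: two organizations with identical contracts but different estimation habits will report different “actual” savings.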
OK, so what does this benefits approach really mean?
First, the ROSMA benchmark framework doesn’t measure the actual procurement value delivered to the financial statements. In other words, it doesn’t exactly measure the actual savings booked and the budgets reduced – nor should it! Rather, it looks at the negotiated savings and then uses a user-provided compliance estimate to create an estimated savings number.
Still, this methodology is actually fine! Why? Because only a portion of value created should be passed on to shareholders via budget reduction (i.e., a discussion between budget holders and Finance controllers).
Our interview with A.T. Kearney’s top management confirmed that this is how it works. However, the survey instrument itself tells the user to include demand management effects and to record the net budget impact that was formally validated by the organization. Yet validation schemes vary widely, and such variation can affect comparability. The bigger problem, though, is expecting practitioners to quantify demand management effects that are not actually the amount taken out of the budget.
Our suspicion is that many will interpret the demand-management-inclusive aspect of the question to mean how much a budget was reduced – and that, as mentioned before, is not procurement’s job. You can’t measure it both ways. At best, the help text needs to be much clearer to the user regarding the factors that we will describe in the next piece: cost vs. volume variance; benefit duration; benefit timing (relative to financial period); benefit type; level of influence (not just binary influence vs. no influence); matching resources to value (independent of reporting relationship); counting the “credits” for the value created; etc.
In conclusion (for today) …
More broadly, we’ve described some of the multiple benchmarking decisions that have been made by A.T. Kearney (in collaboration with some of their clients) about how to model different types of procurement-driven benefits in ROSMA. But, rather than publicly debating the pros and cons of each of these decisions, and the supporting survey instrument, we’ll give some feedback to the firm offline, and as mentioned before, will also publish a separate FAQ that will provide a comprehensive list of questions that every practitioner organization must ask its current/prospective procurement benchmark service providers – and also ask itself. Just having a frank dialogue between Procurement and Finance on these questions is an important start to common understanding of the problem (and even the basic terminology) before attempting to solve it individually or through any one commercial services firm.
This last point of course brings up the next question and the proverbial elephant in the room: “Should a commercial management consultancy be the one to drive this effort?”
Stay tuned for the next installment of this series later today.