
Why should-cost models aren’t sufficient to drive market-competitive cost structures


This content does not express the views or opinions of Spend Matters.

Apple has famously used ‘should-cost’ modeling and benchmarking as a cornerstone of its procurement organization for the past few decades, leading many companies, big and small, to adopt the practice. The momentum behind this ‘best practice’ has been further accelerated by the marketing efforts of market intelligence companies selling data, and by a handful of purpose-built software platforms designed to make these efforts more efficient and accurate.

These models offer valuable insight into cost structures, which is useful for understanding market dynamics and setting category strategy, especially when they are used alongside tools like SWOT and Porter’s Five Forces analyses. They come with significant limitations, however, when companies attempt to use them to guide negotiations and evaluate negotiation outcomes.

This article outlines those limitations and proposes that running competitive negotiations with machine learning solutions such as Arkestro is the optimal way to evaluate the market competitiveness of price quotes.

These models don’t scale to all categories or all items within a category

Building a should-cost model sophisticated enough to serve as a true measure of a competitive market price requires both deep category expertise and the time to create the model and back-test it against past results. Given the productivity constraints of modern strategic sourcing teams, this means they often must choose between refining the assumptions and calculations in their models and doing other work, like studying the broader market, building supplier relationships or solving tactical challenges such as shortages.

Even where benchmarking data sets do exist and little modeling is necessary, vetting those models and data sets across every item within a category, for every category of spend, still requires bandwidth and a skilled eye. That is a cost to the organization on top of the ‘hard’ cost of evaluating, purchasing and operationalizing an accurate, vetted data set for each of the many categories a typical company buys.

Then, even when a company has developed models and/or benchmarking data for a set of its categories and uses them as the target, or measure, of a ‘good’ quote, additional human attention is required whenever a supplier’s quoted cost differs from the modeled cost. Whether the quote is higher or lower than the ‘should’ cost, the question is always the same: is the model correct, or is the quote competitive? Answering that question across tens, hundreds or thousands of parts can create a wasteful cycle of time-consuming analysis.

The core assumption of should-cost analysis is flawed

Should-cost models assume cost-plus pricing, but most companies, especially in competitive industries, use value-based, dynamic, or competitive pricing strategies to optimize profits by focusing on customer value rather than production costs.

IP-dependent products like microprocessors and pharmaceuticals are priced far above production costs to recover R&D investments and reflect consumer value. Similarly, Apple commands higher margins than competitors like Dell due to its brand and the loyalty of its customers, even for functionally similar products. Additionally, external factors like government subsidies and regional cost differences complicate accurate pricing predictions. Thus, should-cost models often misrepresent what constitutes a ‘good’ price.

Pricing is a market construct, not an absolute calculation

Apple excels at using should-costing to drive negotiations, thanks to its extensive resources and market dominance, which allow it to act as a ‘price maker.’ However, few companies can replicate this approach due to limited resources and market power.

For most companies, pricing is influenced by market dynamics like supply, demand, competition, and customer willingness to pay, which should-cost models often overlook. These models provide theoretical estimates but fail to capture real-world pricing complexities. For example, two suppliers with similar costs may price differently based on market share strategies or niche competition.

Uber illustrates this gap. If you were to estimate a ‘should cost’ for a ride by calculating wages, mileage and depreciation and applying a profit margin, you would miss the market and timing factors, such as driver availability and competition with Lyft or taxis, that also influence the price. The actual price only becomes clear when it is set in a competitive market context. Thus, while cost structures offer insight, they rarely reflect the actual prices suppliers quote.
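To make that gap concrete, here is a minimal, purely illustrative calculation in Python. Every figure in it (wage rate, per-mile cost, overhead, margin, surge multiplier) is a hypothetical assumption rather than real ride-share data; the point is only that the bottom-up estimate and the market price are built from different inputs.

```python
# Toy illustration (all figures hypothetical): a bottom-up 'should cost'
# for a 20-minute, 8-mile ride versus a market price shaped by surge demand.

def ride_should_cost(minutes: float, miles: float) -> float:
    """Bottom-up estimate: driver time, mileage, overhead, plus a target margin."""
    driver_time = (minutes / 60) * 20.00   # assumed $20/hour of driver time
    mileage = miles * 0.67                 # assumed per-mile fuel + depreciation
    overhead = 1.50                        # assumed booking/insurance overhead
    margin = 0.15                          # assumed 15% target margin
    return (driver_time + mileage + overhead) * (1 + margin)

should_cost = ride_should_cost(minutes=20, miles=8)
market_price = should_cost * 1.8  # e.g., a 1.8x surge multiplier at peak demand

print(f"Modeled 'should cost': ${should_cost:.2f}")
print(f"Observed market price: ${market_price:.2f}")
# The difference comes from driver availability and competing options,
# none of which appear anywhere in the bottom-up cost model.
```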

A new way

Machine learning models like Arkestro’s leverage data from thousands of sourcing events, analyzing item attributes (like past prices and ‘should cost’) alongside competitive factors (e.g., supplier count and event size). This approach predicts likely discounts and suggests starting prices for any number of items, scaling even to categories without accurate ‘should cost’ data.
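As a rough sketch of the general technique (not Arkestro’s actual model, features or data), the Python snippet below trains a simple regression model on synthetic sourcing-event records to predict a likely discount and turn it into a suggested starting price. Every feature name and value here is a hypothetical placeholder.

```python
# Sketch only: predict a discount from item attributes and competitive
# factors, then derive a suggested starting price. Data is synthetic.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
n = 500

# Hypothetical features drawn from past sourcing events.
last_price = rng.uniform(10, 500, n)                  # last paid price per item
should_cost = last_price * rng.uniform(0.7, 1.1, n)   # modeled cost, where available
supplier_count = rng.integers(1, 8, n)                # number of invited suppliers
event_size = rng.uniform(1e4, 1e6, n)                 # total event value

# Hypothetical target: realized discount, loosely tied to competitive pressure.
discount = 0.02 * supplier_count + 0.01 * np.log10(event_size) + rng.normal(0, 0.02, n)

X = np.column_stack([last_price, should_cost, supplier_count, event_size])
model = GradientBoostingRegressor().fit(X, discount)

# Score a new line item and turn the predicted discount into a starting price.
item = np.array([[120.0, 100.0, 5, 2.5e5]])
predicted_discount = float(model.predict(item)[0])
suggested_start = 120.0 * (1 - predicted_discount)
print(f"Predicted discount: {predicted_discount:.1%}, suggested start: ${suggested_start:.2f}")
```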

When these ‘intelligent’ offers are presented to suppliers as the anchor price for a negotiation, our studies show that suppliers discount more than twice as aggressively. This ‘intelligent first offer’ approach, combined with Arkestro’s automated dynamic bidding model informed by game theory and human behavior, creates a fast, effective and automated way to drive competition and collect market-competitive pricing for any good or service.
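The anchoring effect itself can be shown with a toy simulation. This is not Arkestro’s bidding algorithm, and the concession ranges below are invented purely for illustration; it only demonstrates how presenting a lower first offer shifts the counter-bids a buyer collects.

```python
# Toy anchoring simulation (illustrative assumptions only).
import random

random.seed(1)

def counter_bid(list_price, anchor=None):
    """Supplier's counter-offer, with or without an anchored first offer."""
    if anchor is None:
        # Unanchored: a typical small concession off the list price (assumed range).
        return list_price * (1 - random.uniform(0.02, 0.06))
    # Anchored: concede a fraction of the gap toward the anchor (assumed range).
    return list_price - random.uniform(0.4, 0.7) * (list_price - anchor)

list_price = 100.0
unanchored = [counter_bid(list_price) for _ in range(1000)]
anchored = [counter_bid(list_price, anchor=88.0) for _ in range(1000)]

print(f"Average counter without an anchor: ${sum(unanchored) / len(unanchored):.2f}")
print(f"Average counter with an $88 anchor: ${sum(anchored) / len(anchored):.2f}")
```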

We believe Arkestro has the technology to drive effective negotiations across all of your categories and line items, direct and indirect, without the dependency on (or cost of) accurate market intelligence data and models. This enables companies to strike better deals faster and exceed the business outcomes they sought from should-cost modeling and benchmarking.

About Matthew

Matt Mills joined Arkestro in 2022 as a solutions consultant, using his expertise to support GTM efforts in key verticals. Prior to joining Arkestro, Matt led pre-sales solutions consulting, customer success and value engineering practices at numerous early-stage, SaaS software companies in the supply chain and procurement space, such as Supplyframe, ONE Network, Everstream Analytics, LevaData, and Resilinc.

Matt also brings more than 15 years of high-tech operations experience from companies such as Dell, EMC and Flextronics, where he held a variety of roles within supply chain organizations, including finance, new product introduction, digital strategy, risk management and strategic sourcing.