Optimization is not optics — it’s obligatory!
10/30/2020
Optimization is making a resurgence for the third time, with a number of suite vendors recently implementing or expanding their optimization capabilities. But will the third time be the charm? It should be, especially considering the dire straits most organizations are in as a result of the direct and indirect effects of the COVID-19 pandemic, but the author fears it may not be.
Why? Because vendors are still selling optimization as an optional add-on for “complex” or “high-value” categories, and, moreover, telling you that you can live without it if you want to. And if you don’t want to live without it, you can pay more (as my colleague argues here) and get the chance to save more.
But does this make sense?
Not in the slightest.
- The vendors claim to be selling you a platform for savings but are holding back on one of the two most powerful savings tools in the procurement tool-belt. (The other being a powerful in-depth analytics platform with predictive capabilities.)
- You claim to realize high levels of savings from your traditional (multi-round) RFX and e-auction sourcing events, but how can you make such claims without knowing the minimum you could have spent? Unless multiple baseline optimizations are run automatically for every event (at least one unconstrained scenario, one that encodes current organizational rules as constraints, and one that reflects current goals and constraints), and unless those scenarios are compared to the result of the RFX and/or e-auction prior to award, there is no way to know the lower bound for an award. (A minimal sketch of such baseline scenarios follows this list.)
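To make this concrete, here is a minimal sketch of what two such baseline scenarios could look like, using the open-source PuLP modeling library (with its bundled CBC solver) and hypothetical bid data. The suppliers, prices, demand, the 60% share rule and the negotiated total are all invented for illustration; a real platform would generate these models automatically from the event's bid matrix.

```python
# A minimal baseline-scenario sketch: an unconstrained spend floor and a
# rule-compliant floor, compared against a manually negotiated award.
import pulp

suppliers = ["S1", "S2", "S3"]
items = ["A", "B"]
demand = {"A": 1000, "B": 500}                  # units required
price = {("S1", "A"): 4.00, ("S1", "B"): 7.10,  # hypothetical unit bids
         ("S2", "A"): 4.15, ("S2", "B"): 6.90,
         ("S3", "A"): 3.95, ("S3", "B"): 7.25}

def solve(max_share=None):
    """Minimize total spend; optionally cap any supplier's share of an item."""
    prob = pulp.LpProblem("baseline", pulp.LpMinimize)
    qty = pulp.LpVariable.dicts("qty", list(price), lowBound=0)
    prob += pulp.lpSum(price[k] * qty[k] for k in price)        # total spend
    for i in items:                                             # meet demand
        prob += pulp.lpSum(qty[(s, i)] for s in suppliers) == demand[i]
    if max_share is not None:        # an organizational rule as a constraint
        for s in suppliers:
            for i in items:
                prob += qty[(s, i)] <= max_share * demand[i]
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return pulp.value(prob.objective)

negotiated = 8100.00                   # total cost of the negotiated award
unconstrained = solve()                # scenario 1: true lower bound
rule_compliant = solve(max_share=0.6)  # scenario 2: current org rules
print(f"negotiated ${negotiated:,.0f} vs. unconstrained floor "
      f"${unconstrained:,.0f} vs. rule-compliant floor ${rule_compliant:,.0f}")
```

If the negotiated total comes in well above even the rule-compliant floor, the buyer knows, before award, exactly how much room is still on the table.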
And while one can compare a result to should-cost models, market prices and prior awards — and compute a savings (or loss) from that — one cannot compute how effective the award was.
For example, if the organization was paying a total cost of $15 million for an award and negotiated a new award with an associated total cost of $13.5 million, saving 10%, it might think, based upon current market prices and should-cost models, that it was within 5% of optimal. But if an award with a total cost of only $12.5 million was possible, it would be out $1 million, having captured only 60% of the $2.5 million in savings that were actually available.
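The arithmetic, using the figures from that example:

```python
# All figures come from the example above.
baseline, negotiated, optimal = 15_000_000, 13_500_000, 12_500_000

achieved = baseline - negotiated   # $1.5M actually saved
possible = baseline - optimal      # $2.5M that was available

print(f"headline savings: {achieved / baseline:.0%}")                    # 10%
print(f"left on the table: ${negotiated - optimal:,}")                   # $1,000,000
print(f"share of possible savings captured: {achieved / possible:.0%}")  # 60%
```

A 10% headline saving can still mean 40% of the achievable savings were left on the table, and without a baseline optimization, no should-cost model or market index will ever tell you that.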
Such a gap is not far-fetched if there are a large number of products, suppliers, carriers, transportation options, lanes, Free Trade Zones (FTZs), internal distribution centers and/or storage centers, etc. Over a decade ago, in “What is Supply Chain Optimization,” published on Next Level Purchasing, the author demonstrated how a scenario with just three suppliers, three products, three locations and a few price breaks could not be optimally solved in a spreadsheet, as the “apparent” lowest-cost awards and typical selection criteria all led to non-optimal awards (in that example, costing the organization an extra 4.4%).
Scale this up from three suppliers to a dozen, from three items to 30, from a few warehouse locations to a few dozen; add dozens of carriers, hundreds of lanes, multiple modes of transportation and real-world business constraints, and it becomes humanly impossible to identify even a near-optimal award without optimization. (And that’s a typical scenario. Imagine a larger one with three dozen suppliers, 300 items in a BoM, multi-echelon inventory considerations, dozens of carriers, and thousands of lanes with dozens of business and regulatory constraints!)
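To see why spreadsheets and line-by-line lowest-price picks fail, here is a toy reconstruction of that kind of scenario in Python. The suppliers, prices and the 15% volume discount are invented for illustration and are not the original article's numbers.

```python
# Toy 3-supplier / 3-item award with a price break: the "apparent"
# lowest-price award (cheapest base price per item) misses the discount
# that a consolidated award unlocks. All numbers are hypothetical.
from itertools import product

items = ["A", "B", "C"]           # 100 units of each item
suppliers = ["S1", "S2", "S3"]
base = {                          # hypothetical base unit prices
    ("S1", "A"): 10.0, ("S1", "B"): 12.5, ("S1", "C"): 11.5,
    ("S2", "A"): 10.4, ("S2", "B"): 11.9, ("S2", "C"): 11.4,
    ("S3", "A"): 10.5, ("S3", "B"): 12.4, ("S3", "C"): 11.0,
}

def cost(award):                  # award maps item -> supplier
    total = 0.0
    for s in suppliers:
        awarded = [i for i in items if award[i] == s]
        line_cost = sum(base[(s, i)] * 100 for i in awarded)
        # price break: 15% off a supplier's lines at 300+ total units
        discount = 0.85 if 100 * len(awarded) >= 300 else 1.0
        total += line_cost * discount
    return total

# "Apparent" lowest-price award: cheapest base price, item by item
greedy = {i: min(suppliers, key=lambda s: base[(s, i)]) for i in items}
# True optimum: enumerate all 27 possible single-source awards
best = min((dict(zip(items, c)) for c in product(suppliers, repeat=3)), key=cost)

print(f"greedy award:  {greedy} -> ${cost(greedy):,.2f}")
print(f"optimal award: {best} -> ${cost(best):,.2f}")
```

With three suppliers and three items, all 27 awards can still be enumerated; at a dozen suppliers and 30 items there are 12^30 single-source awards alone, which is exactly why a solver, not a spreadsheet, is required.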
Now, when optimization technology exists that makes it quick and easy to get optimal answers to baseline scenarios, and when UX has come a long way (constraints can now be defined through natural language, visual interfaces and wizards instead of being written out as equations), there is simply no excuse not to run baseline optimization scenarios in every event.
Now, it’s true that no computer-generated award is going to be perfect and that even an optimal award will probably need some tweaking. But the point is not choosing the optimal award, it’s knowing what optimal is so that you can properly identify options and negotiate away unnecessary overspend.
Negotiations need insight.
So, why are vendors hiding this from you? Telling you optimization is optional when it’s obligatory? Not making it super easy to run automatically generated baseline models, so that you get potential award comparisons, and insight, without effort? Costing you opportunity instead of saving you dollars?
It’s a damn good question, and one that you should demand an answer to.
There was a time when optimization was very expensive: in their relative immaturity, solvers required a lot of hardware power and solver time, and developing usable implementations on top of immature platform libraries took a lot of manpower. But that was two decades ago. Now an average small model, which is what the majority of sourcing events produce, solves in seconds, and an average mid-size model, which covers a significant portion of the remaining events, solves in minutes. And even though solver licenses still cost money, they can be split among a large number of client instances, so it can’t be cost (especially when you consider what suite licenses cost).
But what is it? The only other explanations that come to the author’s mind are ignorance or greed. Either the vendor is unaware of the power and importance of optimization or they are aware, and they want to gouge you for it.
Neither is a very good answer, but both are situations you should be aware of. Because, if the vendor has optimization, it should be embedded in the platform, part of every event you run and really easy to use. Otherwise, you probably don’t have the right vendor if your No. 1 goal is to maximize the value from each sourcing event.
And yes, we have to say it. Because, in today’s economic climate, your organization’s survival might depend on it!