
AI in Sourcing Optimization Tomorrow

02/19/2019

Our last article recounted the story of artificial intelligence in optimization today or, more accurately, the lack of AI in optimization today.

While AI in its most basic form of “assisted intelligence” is readily available in many modern procurement and sourcing platforms, as evidenced in our previous briefings (AI in Procurement and AI in Sourcing), it has not yet crept into optimization. The most advanced platforms have limited themselves to easy constraint creation, data verification and detection of hard constraints that prevent solutions — as in the case of Coupa — or easy data population, wizard-based scenario creation (using standard model templates), and automation — as in the case of Keelvar. In the former case, the underlying statistical algorithms can be found at the heart of some modern machine learning technologies (but don’t quite amount to machine learning), and in the latter case, the robotic process automation (RPA) is nothing more than an automated, manually defined workflow.

But that doesn’t mean that AI won’t creep into optimization tomorrow. While it may not do so with the current vendors on the market (for different reasons with each vendor), that doesn’t mean that the next vendor to bring an optimization solution to market won’t learn from the oversights of its predecessors and bring some obvious advancements to the table — especially when certain vendors are releasing their platforms with an open API to support an Intel-inside-like model where sourcing or AI vendors can build on leading optimization foundations to offer something truly differentiated.

And what could those differentiators be? We’ll get to that, but first let’s review the premise.

Simply put, in the traditional sense of the term, there is no AI, or artificial intelligence, in any source-to-pay application today, just as there is no AI in any enterprise software today. Algorithms are getting more advanced by the day, the data sets they can train on are getting bigger by the day, and the predictions and computations are getting more accurate by the day — but they are still just computations. Like your old HP calculators, computers are still dumb as doorknobs even though they can compute a million times faster.

However, with weaker definitions of the term, we do have elements of AI in our platforms today. Assisted intelligence capabilities are becoming common in best-of-breed applications and platforms, and “augmented intelligence” capabilities are starting to hit the market for point-based problems. For example, tomorrow’s procurement technologies will buy on your behalf automatically and invisibly, automatically detect opportunities, and even identify emerging categories.

But if AI is going to take root, it has to take root everywhere, and that includes sourcing optimization. So what could we see tomorrow?

Let’s step back and review what optimization does. It takes a set of costs, constraints and goals, and then it determines an award scenario that maximizes the goals subject to the constraints and the costs provided. So where could AI help? Let’s start with the obvious:

  • Cost collection and verification
  • Constraint identification and pre-verification
  • Objective definition
  • Soft constraint relaxation suggestions
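Before digging into each of these, it helps to make the basic setup concrete. Here is a minimal sketch of a single-item award model using the open-source PuLP modeler (an arbitrary choice for illustration, not what any particular vendor uses), with hypothetical suppliers, costs, capacities and a 60% cap per supplier standing in for the constraints and goals:

```python
# A single-item award model: minimize total cost subject to demand, capacity
# and a cap on how much can go to any one supplier. All data is hypothetical.
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpStatus, value

demand = 1000                                                       # units required
unit_cost = {"SupplierA": 9.50, "SupplierB": 10.25, "SupplierC": 9.90}
capacity = {"SupplierA": 600, "SupplierB": 800, "SupplierC": 500}   # max units each

model = LpProblem("award_scenario", LpMinimize)
alloc = {s: LpVariable(f"alloc_{s}", lowBound=0, upBound=capacity[s]) for s in unit_cost}

model += lpSum(unit_cost[s] * alloc[s] for s in unit_cost)          # goal: minimize cost
model += lpSum(alloc.values()) == demand                            # constraint: meet demand
for s in alloc:
    model += alloc[s] <= 0.6 * demand                               # constraint: 60% cap per supplier

model.solve()
print(LpStatus[model.status])
print({s: alloc[s].value() for s in alloc}, "total cost:", value(model.objective))
```

Real sourcing models span many items, lanes and cost components, but the shape is the same: costs and constraints in, an award scenario out.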

Cost collection and verification

While not part of optimization per se, accurate costs are critical for a successful application of optimization.

In an optimization model, if even one parameter (cost, allocation limit, capacity limit, etc.) is wrong, the entire model is wrong, even if it contains hundreds of thousands of variables and tens of thousands of equations. And if the mistake makes a constraint too limiting, a cost too low or a goal too attractive, then the solution will be limited by that constraint, awarded to the wrong supplier or over-aligned with the wrong organizational goal.

A great optimization solution will do two things. It will help with cost collection and then help with verification. With cost collection, it will help the organization pull in costs (or bids) from the source — from the RFX, from the bid spreadsheet, from the freight rate exchange, from wherever the costs are.

Then, it will run statistical and outlier analysis across all data elements from multiple perspectives — bidding tiers, same product, same supplier, same lane (and mode), etc. — and use different algorithms — variance, trend, etc. — to identify potential outliers that should be verified by a human before a model is created and optimized.

In other words, just verifying that expected costs aren’t null isn’t enough for a sophisticated optimization solution.
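A minimal sketch of that kind of verification, using a simple median-deviation heuristic on unit prices grouped by item (the data is hypothetical, and a real implementation would also check across tiers, lanes, modes and historical trends):

```python
# Flag bids whose unit price deviates sharply from the median for the same item.
# A crude but robust first pass; production systems would layer variance, trend
# and history-based checks across tiers, lanes and modes as well.
from statistics import median

bids = [
    {"item": "widget", "supplier": "A", "price": 9.50},
    {"item": "widget", "supplier": "B", "price": 10.10},
    {"item": "widget", "supplier": "C", "price": 1.01},   # likely a keying error
    {"item": "widget", "supplier": "D", "price": 9.80},
]

def flag_outliers(bids, tolerance=0.5):
    by_item = {}
    for b in bids:
        by_item.setdefault(b["item"], []).append(b)
    flagged = []
    for item, group in by_item.items():
        med = median(b["price"] for b in group)
        for b in group:
            if med > 0 and abs(b["price"] - med) / med > tolerance:
                flagged.append(b)        # deviates more than 50% from the item median
    return flagged

for b in flag_outliers(bids):
    print("verify with the supplier before modelling:", b)
```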

Constraint identification and pre-verification

The best solutions today use “wizards” that ask buyers whether suppliers have any capacity limits, whether existing contracts are still in place that would require forced allocations, whether there are any risks they want to address through split allocations (across suppliers, regions, etc.), and whether there are any absolute or average qualitative requirements they want to capture (to the extent that the optimization platform supports such constraints). This is great for the average buyer who couldn’t write an equation to save their job, but for a buyer who doesn’t just want to use optimization, but to master it, this is not good enough.

Tomorrow’s optimization platforms must do two things when it comes to constraints: 

  1. Pre-verify constraints: Especially when this can be done through the smart application of classic computation and simple algorithms. For example, it’s pretty simple to check that a forced allocation to a supplier does not exceed the supplier’s capacity or the buyer’s demand, and the same goes for qualitative constraints (see the sketch after this list). And while some forced splits for risk mitigation can be pretty complicated, nothing stops the platform from using the core simplex algorithm itself to determine the solvability of a model.
  2. Identify missing constraints: This is where AI will start to creep into optimization — not in the solver itself, but in the formation of the model. While it’s pretty obvious to most buyers to include any capacity limits given to them by suppliers, and any allocations still in place from existing contracts, it’s not always as obvious when qualitative constraints should be included, when risk-mitigation constraints are required, or what form they should take.
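The first of these two is largely classic computation. Here is a minimal sketch of such pre-verification checks, assuming a simple dictionary representation of forced allocations, supplier capacities and demand (all figures hypothetical):

```python
# Pre-verify constraint parameters with plain arithmetic before the model is built.
# Hypothetical inputs: forced allocations, supplier capacities and total demand.
forced = {"SupplierA": 700, "SupplierB": 200}    # units locked in by existing contracts
capacity = {"SupplierA": 600, "SupplierB": 800, "SupplierC": 500}
demand = 1000

def preverify(forced, capacity, demand):
    issues = []
    for supplier, units in forced.items():
        if units > capacity.get(supplier, 0):
            issues.append(f"forced allocation to {supplier} ({units}) exceeds "
                          f"its capacity ({capacity.get(supplier, 0)})")
    if sum(forced.values()) > demand:
        issues.append(f"forced allocations ({sum(forced.values())}) exceed total demand ({demand})")
    if sum(capacity.values()) < demand:
        issues.append(f"total capacity ({sum(capacity.values())}) cannot cover demand ({demand})")
    return issues

for issue in preverify(forced, capacity, demand):
    print("fix before solving:", issue)
```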

Identifying missing constraints is where ML (machine learning) comes in: models are applied to past scenarios to identify typical award behavior, to similar events with optimization models to identify typical constraint patterns, and to business documents and event guidelines to identify event and award policies that might otherwise be overlooked. From this, the solution should be able to recommend the constraints that should be included, what they should look like, and even what parameters or bounds should be considered. For example, if the award policy is to minimize risk and the business has typically dual-sourced related products from two geographically remote suppliers, the application will recommend a risk-mitigation constraint to dual source from two geographically remote locations, suggest geographic groupings of suppliers, and even suggest a minimum award to each of the two suppliers to ensure a reasonable distribution.
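As a deliberately simplified, rule-based stand-in for that kind of pattern mining (a real implementation would train ML models over far richer event, policy and award data; the records and thresholds here are hypothetical):

```python
# If a category has been dual-sourced across regions in most past events,
# recommend the same constraint and a suggested minimum share per supplier.
past_awards = [
    {"category": "resins", "suppliers": 2, "regions": 2},
    {"category": "resins", "suppliers": 2, "regions": 2},
    {"category": "resins", "suppliers": 1, "regions": 1},
    {"category": "fasteners", "suppliers": 1, "regions": 1},
]

def recommend_constraints(past_awards, category, support=0.6):
    history = [a for a in past_awards if a["category"] == category]
    if not history:
        return []
    dual_sourced = sum(1 for a in history if a["suppliers"] >= 2 and a["regions"] >= 2)
    recommendations = []
    if dual_sourced / len(history) >= support:      # the pattern holds in most past events
        recommendations.append({"constraint": "dual-source across at least 2 regions",
                                "suggested_min_share_per_supplier": 0.2})
    return recommendations

print(recommend_constraints(past_awards, "resins"))
```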

Objective definition

Let’s face it, it’s not always the lowest-cost award that’s the best award, and a good optimization solution will allow multiple objectives to be analyzed — cost, risk, quality, lead time, etc. — individually and in balance. A great optimization solution will use ML not only to suggest constraints but also to suggest the objective function. If risk is high, not only should the solution suggest dual sourcing and other risk-mitigation constraints, but it should also suggest an objective function that balances low cost against expected disruption cost based on the risk profile. Disruptions due to man-made and natural incidents and disasters have been on the rise for over a decade, and we are now at the point where most organizations involved in (global) sourcing have only a 1 in 10 chance of not experiencing a (significant) disruption in any given year, which quantitatively demonstrates that not all risks can be mitigated.

That’s why it’s important not only to select low-risk awards but, when risk is higher than average, to favor the awards where a disruption would likely incur the lowest cost should one occur.

More specifically, when dual sourcing, it should be the case that both suppliers could produce and supply more volume than they are awarded, at least for a short time, should one supplier experience a disruption due to plant failure, weather disaster, temporary border or port closing, etc. And it should also be the case that there are multiple shipping/carrier options for the same reason. Similarly, when choosing between two materials, parts or products to fill a need — where each has similar market value — the one that could most easily be replicated by the supply base as a whole should be the more obvious option in case one supplier goes bankrupt or is taken out of commission for months due to a significant (natural) disaster (as this would minimize the disruption cost). And so on.

The solution should make use of market data, trends and risk profiles to produce risk scores, disruption likelihoods and likely disruption costs using advanced ML models, and use these not only to recommend constraints but also objective functions that balance cost and risk profiles, especially when risk is high or a disruption could be extremely costly to the business. (For commodity products where supply exceeds demand and there are a dozen suppliers for every one awarded, risk and disruption cost are not an issue, and the product should recommend a pure focus on cost. But for specialized products that depend on rare minerals, can only be produced by a small supply base after substantial line customizations, and contribute significantly to the company’s profit margin, risk and disruption cost have to be taken into account.)
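Here is a minimal sketch of what such a risk-balanced objective could look like, again using the generic PuLP modeler, with hypothetical disruption likelihoods and per-unit impact figures standing in for what the ML-driven risk models would supply:

```python
# Risk-balanced objective: minimize purchase cost plus the expected cost of
# disruption (likelihood x estimated per-unit impact). The likelihoods and impact
# figures are placeholders for what ML-driven risk models would supply.
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, value

demand = 1000
unit_cost = {"SupplierA": 9.50, "SupplierB": 10.25}
capacity = {"SupplierA": 1000, "SupplierB": 1000}
p_disrupt = {"SupplierA": 0.25, "SupplierB": 0.03}   # annual disruption likelihood
impact = {"SupplierA": 4.00, "SupplierB": 4.00}      # estimated cost per unit if disrupted

model = LpProblem("risk_balanced_award", LpMinimize)
alloc = {s: LpVariable(f"alloc_{s}", lowBound=0, upBound=capacity[s]) for s in unit_cost}

model += lpSum((unit_cost[s] + p_disrupt[s] * impact[s]) * alloc[s] for s in unit_cost)
model += lpSum(alloc.values()) == demand

model.solve()
# With these numbers the risk premium flips the award to the nominally more
# expensive (but far less risky) SupplierB.
print({s: alloc[s].value() for s in alloc}, "expected total cost:", value(model.objective))
```

In practice, an objective like this would sit alongside the dual-sourcing and other risk-mitigation constraints discussed above rather than replace them.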

Soft constraint (relaxation) suggestions

There are two categories of constraints: hard constraints, which must be adhered to in the solution; and soft constraints, which should be adhered to unless the model would otherwise be unsolvable, in which case they can be relaxed until the model is solvable (by default, and relaxed further still if doing so would result in a lower-cost, lower-risk, etc. scenario). Most optimization solutions today only support hard constraints. But soft constraints are important because they can prevent unsolvable models, capture preferences rather than absolutes, and, in more advanced solutions, specify trade-offs that allow a buyer to make good award decisions and balance not only cost vs. risk but cost vs. quality, on-time delivery (OTD) or other metrics.

Tomorrow, a good solution will:

  • Suggest whether a constraint should be hard or soft whenever it suggests one
  • Suggest when a buyer should consider softening a constraint because it could, combined with other constraints, highly restrict the solution or greatly increase its cost (against the objective)
  • Suggest the parameters under which a constraint could or should be relaxed (e.g., the quality constraint can be reduced by 1% for every 10,000 in savings)
  • Suggest an auto-softening rule when a (set of) hard constraint(s) prevents a solution to the model (and what the impact would be)
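One common way to encode a soft constraint is a penalized slack variable, and here is a minimal sketch of that approach (the suppliers, quality scores, penalty and relaxation cap are all illustrative, not a claim about how any particular platform implements relaxation):

```python
# A soft quality constraint: the volume-weighted quality of the award should be
# at least 8.0, but the model may relax it (up to a cap), paying a penalty that
# encodes the buyer-defined trade-off. All figures are illustrative.
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, value

demand = 1000
unit_cost = {"SupplierA": 9.00, "SupplierB": 11.00}
quality = {"SupplierA": 7.0, "SupplierB": 9.0}
target_quality = 8.0
penalty_per_point = 500        # cost of each point of quality given up
max_relaxation = 1.0           # never relax the target by more than one point

model = LpProblem("soft_quality_constraint", LpMinimize)
alloc = {s: LpVariable(f"alloc_{s}", lowBound=0) for s in unit_cost}
slack = LpVariable("quality_slack", lowBound=0, upBound=max_relaxation)

model += lpSum(unit_cost[s] * alloc[s] for s in unit_cost) + penalty_per_point * slack
model += lpSum(alloc.values()) == demand
# Volume-weighted quality must reach the (possibly relaxed) target.
model += lpSum(quality[s] * alloc[s] for s in unit_cost) >= (target_quality - slack) * demand

model.solve()
# With these numbers the solver takes the full allowed relaxation because the
# savings outweigh the penalty; a higher penalty keeps the constraint effectively hard.
print({s: alloc[s].value() for s in alloc}, "quality relaxed by", slack.value())
```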

But it doesn’t have to stop there. Assisted intelligence could also help buyers:

  • Decide what suggested opportunities to pursue
  • Decide how to pursue those opportunities
  • Decide what models should be used in sourcing scenarios

What opportunities should be pursued?

As indicated in the introduction, tomorrow’s procurement technologies will buy on your behalf automatically and invisibly, automatically detect (potential) opportunities and even identify emerging categories.

But even if the market conditions that the ML algorithms work on suggest opportunities, it doesn’t mean that all of them should be pursued, or at least pursued now. Only the ones that will deliver true TCO savings for the organization should be considered, and that requires balancing current commitments, potential cost, risk and qualitative metrics in an optimization model that can fully account for all factors and model the right organizational awards. The market conditions might scream yes, but the particular requirements of engineering, marketing and sourcing (which needs to maintain volume-based leverage with a strategic supplier) might mean that not all of the conditions can be taken advantage of and the opportunity should wait.

Similarly, a potential opportunity that is only hinted at by the market conditions (and barely reaches the tolerance for an alert) might be perfect because, when the full situation is considered, the introduction of a new supplier with a new technology that lowers cost and increases quality could reduce the risk factor so much that the overall value to the organization is three times what the market costs alone indicate. This again requires smart optimization solutions that can absorb all this data, run models and analyze suggested awards against current awards and buying patterns.

How should those opportunities be pursued?

AI should also be applied to the award suggestions that come out of the optimization model, or models, run against an opportunity to help guide the user on how to pursue an event. If the models would still keep some award with an incumbent, but the award would be lower, and the organization is buying a lot of volume across product lines or categories, and the trigger for analyzing the opportunity was that the average market cost had dropped (due to a drop in raw material costs), then a good first step is likely to approach the incumbents and ask if they will accept a renewal at the average market discount. If you can get 3% savings without all the time and effort that an event requires, when the most you’d ever get by going to market is predicted to be 4%, going to market for the extra 1% might not be worth it once you consider switching costs and the potential relationship damage with a vendor that supplies other product lines.
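As a back-of-the-envelope illustration of that comparison (all figures are hypothetical):

```python
# Compare renegotiating with the incumbent versus running a full market event.
annual_spend = 2_000_000
renegotiation_savings = 0.03 * annual_spend   # 3% by renewing at the market discount
market_savings = 0.04 * annual_spend          # predicted best case from a full event
switching_costs = 15_000                      # requalification, tooling, ramp-up
event_costs = 10_000                          # internal effort to run the event

incremental = market_savings - renegotiation_savings - switching_costs - event_costs
print("incremental benefit of going to market:", incremental)   # negative -> renegotiate
```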

The AI should run models on (anonymized) past events (with documented strategies) across the organization (and, if available, the community) to predict when renegotiation should be tried, when the organization should go back to market with a multi-round RFX, and when the organization could even go to an (optimization-backed) auction because risk, quality and other factors are similar across all the potential providers and the market will bear a competitive auction.

What models should be used?

You should never, ever, ever make an award after running just one model — no matter how long you slaved over it or how complete you think it is. Simply put, there is no perfect model. It is only perfect under the set of assumptions you made for it, not all of which will be accurate, or even reasonable — and it’s only by running multiple models, with and without different constraints and relaxations thereof, that you will come to truly understand how much a constraint costs you, how much it affects the award, how much it affects risk (for good or bad), and whether your expectations of its impact, the cost of that impact and the benefit of that impact are realistic.

And just running the standard baseline variations (unconstrained, incumbent-only, capacity/allocation-only, etc.) is not always enough. You need to run controlled variations of the “perfect” model you created, but it’s hard to figure out which controlled variations to run, especially if you have dozens of constraints. A good AI will analyze the model and similar models for similar events and, taking into account organizational policies and limiting constraints, suggest the best variations to create, run and compare side by side when trying to come up with an alternate award.
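A minimal sketch of that idea: rebuild and re-solve a simple award model with each optional constraint toggled off to quantify what it costs (the constraint names and data are hypothetical, and a real implementation would choose which variations to run based on the learned patterns described above):

```python
# Re-solve the model with each optional constraint toggled off to see what it costs.
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, value

demand = 1000
unit_cost = {"SupplierA": 9.50, "SupplierB": 10.25, "SupplierC": 9.90}
capacity = {"SupplierA": 600, "SupplierB": 800, "SupplierC": 500}

def solve_award(active_constraints):
    model = LpProblem("variation", LpMinimize)
    alloc = {s: LpVariable(f"alloc_{s}", lowBound=0, upBound=capacity[s]) for s in unit_cost}
    model += lpSum(unit_cost[s] * alloc[s] for s in unit_cost)
    model += lpSum(alloc.values()) == demand
    if "cap_single_award_60pct" in active_constraints:
        for s in alloc:
            model += alloc[s] <= 0.6 * demand
    if "keep_incumbent_min_20pct" in active_constraints:
        model += alloc["SupplierB"] >= 0.2 * demand    # SupplierB is the incumbent here
    model.solve()
    return value(model.objective)

optional = ["cap_single_award_60pct", "keep_incumbent_min_20pct"]
baseline = solve_award(set(optional))
print("full model:", baseline)
for c in optional:
    relaxed = solve_award(set(optional) - {c})
    print(f"without {c}: {relaxed}  (this constraint costs {baseline - relaxed:.2f})")
```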

In other words, while AI won’t be used to find a better solution (the solver needs to be based on sound and complete mathematical algorithms, and not many optimization algorithms meet this requirement), it will be used to construct better models: models with constraints that more precisely reflect organizational goals, and with cost and parametric data that is more likely to be accurate and complete because it has gone through more validation and verification.