
The basics of analytics: 8 levels — and the AI leverage

06/18/2020


Analytics is hot. In many organizations, analytics has gone from a “nice to have sometime in the future” to a “we need real-time, AI-backed predictive analytics yesterday to stem the flow of red.”

But, as we’ve said before, not all analytics is created equal, and understanding what you are considering and what it can — and cannot — do is becoming more important than ever.

So in this Spend Matters PRO piece we’re going to provide a short refresher on the levels of analytics — what they are, what to expect and what not to expect from each of them.

There are eight levels to analytics, and current solutions fall somewhere in the first seven. The majority offer functionality firmly contained in the first four levels, with only the minority truly offering full Level 5 functionality or higher.

We’ll also review some example functionality to help you understand what is, and isn’t, out there and give you some guidance on how to compare the different platforms (and whether what a vendor is offering is sufficient for your organizational needs).

8 Levels on the Analytics Maturity Curve


Classificative / Foundational

This level of analytics is used for:

  • data integration and normalization
  • classification against market taxonomies and coding systems
  • entity normalization (organization, location, product, etc.)
  • business transaction identification

It is found in systems designed for master data management (MDM), business reporting, and spend analytics. It is used for data management and data preparation. It powers simple reporting on data volume, type and velocity. It does not give any business insights beyond entity counts, transaction counts and simple totals.
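
Entity normalization of the kind described above can be sketched in a few lines. This is a deliberately naive illustration (the names, rules and canonical table are invented, not any vendor's approach); production MDM tools use far richer matching:

```python
# Minimal sketch of entity normalization: mapping raw supplier-name
# variants onto a canonical record. All names and rules are illustrative.

import re

CANONICAL = {
    "ibm": "IBM Corporation",
    "intl business machines": "IBM Corporation",
    "acme": "Acme Industries",
}

def normalize_entity(raw_name: str) -> str:
    """Strip punctuation/legal suffixes and look up a canonical name."""
    key = re.sub(r"[^\w\s]", "", raw_name.lower())       # drop punctuation
    key = re.sub(r"\b(inc|corp|ltd|llc|co)\b", "", key)  # drop legal suffixes
    key = re.sub(r"\s+", " ", key).strip()               # collapse whitespace
    return CANONICAL.get(key, raw_name)                  # fall back to raw

print(normalize_entity("I.B.M. Corp."))   # IBM Corporation
print(normalize_entity("ACME, Inc."))     # Acme Industries
```

Real platforms replace the exact-match lookup with fuzzy matching and learned models, but the shape of the task is the same.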


Descriptive

This level of analytics is used for:

  • everyday operational reporting
  • simple data exploration
  • benchmarking
  • spend/volume history and trending
  • simple monitoring and alerts

It is found in your standard business reporting, ERP/MRP, S2P and finance systems. It allows for the creation and generation of (typically boxed) reports on monthly spend, monthly inventory movement, monthly invoices and so on. It can create month-over-month, quarter-over-quarter, and year-over-year reports; allows for the definition of simple trend lines; can allow for the definition of monitoring and alerts if a line goes above or below a certain threshold; and can allow the organization to define and create a benchmark against a predefined measurement over time. It’s historical reporting, nothing more. And that’s what the majority of “analytics” solutions offered for the longest time.
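
The threshold monitoring and alerting described above can be sketched in a few lines of Python. This is a minimal illustration with invented figures, not any vendor's implementation:

```python
# Minimal sketch of Level 2 threshold monitoring: flag any month whose
# spend crosses a configured band. Figures and band are invented.

def check_thresholds(monthly_spend, lower, upper):
    """Return (month, value) pairs that breach the [lower, upper] band."""
    return [(m, v) for m, v in monthly_spend.items() if v < lower or v > upper]

spend = {"Jan": 95_000, "Feb": 102_000, "Mar": 140_000, "Apr": 98_000}
print(check_thresholds(spend, lower=80_000, upper=120_000))  # [('Mar', 140000)]
```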


Diagnostic

This level of analytics is used for:

  • financial & operational performance analysis
  • industry peer benchmarking
  • KPI definition and tracking
  • basic statistical analysis
  • root cause analysis

One degree up, this level of capability is found in modern analytics systems, entry-level data science platforms, and best-of-breed, analytics-backed and optimization-backed finance and sourcing platforms. It helps an organization define KPIs and metrics, understand how well it is doing against industry averages, find statistical anomalies in its data and perform root cause analysis.
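
A basic statistical check of the kind such a tool might run can be illustrated with a toy anomaly detector. This sketch (invented figures, standard library only) flags values more than two standard deviations from the mean:

```python
# Toy Level 3 diagnostic check: flag invoice amounts more than two
# sample standard deviations from the mean. The 2-sigma cutoff and
# the data are illustrative assumptions.

from statistics import mean, stdev

def find_anomalies(values, cutoff=2.0):
    mu, sigma = mean(values), stdev(values)
    return [v for v in values if abs(v - mu) > cutoff * sigma]

invoices = [100, 105, 98, 102, 97, 300, 101, 99]
print(find_anomalies(invoices))  # [300]
```

A flagged value is a starting point for root cause analysis, not a conclusion in itself.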


Predictive

This level of analytics is used for:

  • predictive trend analysis
  • price & inventory forecasting
  • correlation analysis
  • risk analysis
  • scenario planning

This level of capability — also found in modern analytics systems, entry-level data science platforms and best-of-breed, analytics-backed and optimization-backed finance and sourcing platforms — takes diagnostics to the next level and helps organizations predict what is coming next, what the likelihood is, what the defining data is, and what the risk impact is if the data is wrong or changes unexpectedly. Such tools contain advanced statistical functions and capabilities, advanced (automated) correlation analysis, the ability to predict trends with statistical confidence, the ability to define what-if scenarios against trend variations, and advanced forecasting techniques. This is the level of capability where most modern analytics offerings stop.
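
The simplest form of predictive trending is a fitted line projected forward. This sketch (invented price history, standard library only) shows the idea; real platforms layer statistical confidence intervals and far richer forecasting models on top:

```python
# Minimal sketch of Level 4 trend projection: fit a least-squares line
# to a short price history and project the next period. Figures invented.

def fit_trend(series):
    """Return (slope, intercept) of the least-squares line over x = 0..n-1."""
    n = len(series)
    x_bar, y_bar = (n - 1) / 2, sum(series) / n
    slope = (sum((x - x_bar) * (y - y_bar) for x, y in enumerate(series))
             / sum((x - x_bar) ** 2 for x in range(n)))
    return slope, y_bar - slope * x_bar

prices = [10.0, 10.5, 11.1, 11.4, 12.0]      # monthly commodity price
slope, intercept = fit_trend(prices)
forecast = intercept + slope * len(prices)   # project the next month
print(round(forecast, 2))                    # 12.47
```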


Prescriptive

This level of analytics is used for:

  • scenario optimization
  • decision support
  • best practice recommendations
  • rules extraction

This level of capability — generally found only in data science platforms and a few select best-of-breed analytics platforms — uses predictive data insight capability to help users make decisions not only about what situations to address but how to address those situations. For example, a prescriptive strategic sourcing platform will use scenario optimization to suggest the award that minimizes cost or risk, best matches an organization’s needs, and adheres to its constraints. It will analyze millions of award permutations if needed to arrive at this recommendation. A working capital management platform that employs advanced analytics to determine whether early payments, investments or delayed payments to financially sound suppliers will best maximize the organization’s capital (and maintain the capital needed to meet employee obligations) is another example of prescriptive analytics. And a platform that analyzes past awards or payment patterns to extract best practice rules is another example.
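
The award-optimization idea can be shown at toy scale. This sketch enumerates award splits across two suppliers in 10% steps and picks the cheapest split that satisfies a "no supplier gets more than 70%" risk constraint; the supplier names, prices and constraint are invented, and real platforms solve vastly larger versions with dedicated optimization (MILP) solvers rather than enumeration:

```python
# Toy prescriptive scenario optimization: pick the cheapest feasible
# two-supplier award split. All figures and constraints are invented.

DEMAND = 1000                                     # units to award
PRICES = {"SupplierA": 9.50, "SupplierB": 10.25}  # unit prices
MAX_SHARE = 0.7                                   # risk cap per supplier

def best_award():
    best = None
    for pct_a in range(0, 101, 10):               # SupplierA's share, 10% steps
        share_a = pct_a / 100
        share_b = 1 - share_a
        if share_a > MAX_SHARE or share_b > MAX_SHARE:
            continue                              # violates the risk constraint
        cost = DEMAND * (share_a * PRICES["SupplierA"]
                         + share_b * PRICES["SupplierB"])
        if best is None or cost < best[1]:
            best = ({"SupplierA": share_a, "SupplierB": share_b}, cost)
    return best

split, cost = best_award()
print(split["SupplierA"], round(cost, 2))         # 0.7 9725.0
```

Even this toy shows the prescriptive pattern: the tool doesn't just report prices, it recommends an award.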


Permissive

This level of analytics is used for:

  • automatic approvals
  • automatic awards
  • automatic payments

This level of capability — found in very few platforms — combines predictive analytics, rules and RPA (robotic process automation) to automate tactical tasks in an organization where decisions are made as the result of analytics. For example, when an invoice matches a PO and the amount is under a threshold, it should be automatically approved rather than consuming the time of an AP resource who should be focused on the discrepancies. When an RFQ is for a commodity product, all products are equal, no service is needed, and the lowest bid is below market price, the software — after comparing the bids to historical prices and the projected market price based on historical market data — should auto-award it. And when a payment is due, has been approved, and the bank account information has been verified, automatically sending it for payment is another great way to save tactical processing time.
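
The invoice auto-approval rule described above is essentially a conditional. This sketch shows the logic; the threshold and record shapes are illustrative assumptions, not any product's schema:

```python
# Sketch of a Level 6 auto-approval rule: an invoice that matches its PO
# and falls under a threshold is approved with no human touch; anything
# else is routed to AP staff. Threshold and fields are invented.

APPROVAL_THRESHOLD = 5_000.00

def route_invoice(invoice, po):
    matches = (invoice["po_id"] == po["id"]
               and invoice["amount"] == po["amount"])
    if matches and invoice["amount"] <= APPROVAL_THRESHOLD:
        return "auto-approved"
    return "manual-review"                # discrepancy or high value

po = {"id": "PO-1001", "amount": 1_250.00}
print(route_invoice({"po_id": "PO-1001", "amount": 1_250.00}, po))  # auto-approved
print(route_invoice({"po_id": "PO-1001", "amount": 1_400.00}, po))  # manual-review
```

The value of this level is not the rule itself but wiring it into the process so matching invoices never queue for a human.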


Cognitive

This level of analytics is used for:

  • machine learning
  • natural language processing
  • reasoning and explanations
  • augmented intelligence for highly specialized tasks

This level of capability — the highest capability found in a select few platforms for a select few applications — adds machine learning to the other predictive capabilities and learns from the actions users take in response to its recommendations to give better, more accurate recommendations next time. Instead of applying RPA to automate simple decisions that can be defined by hard-and-fast rules, the system recommends an action and records whether the user takes it, takes a modified form of it or rejects it outright — and then re-runs its predictive models on that new input to make a better suggestion next time. For example, if an invoice matches a PO except for a shipping fee, it doesn’t automatically reject it; instead, it analyzes the shipping against typical shipping rates and recommends acceptance or rejection based on that analysis (while providing typical shipping ranges).
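
The recommend-record-relearn loop can be sketched with a deliberately naive stand-in for a real machine learning model. Here the "model" is simply the range of shipping fees users have historically accepted; everything (class, thresholds, figures) is an invented illustration:

```python
# Naive sketch of a cognitive feedback loop: recommend accepting an
# off-PO shipping fee if it falls within the range users have accepted
# before, and record each decision to refine future recommendations.
# A stand-in for a real ML model; all figures are invented.

class ShippingFeeAdvisor:
    def __init__(self):
        self.accepted = []           # fees users approved
        self.rejected = []           # fees users declined

    def recommend(self, fee):
        if not self.accepted:
            return "review"          # no history yet: defer to the user
        if fee <= max(self.accepted):
            return "accept"
        return "reject"

    def record(self, fee, user_accepted):
        """Capture the user's decision as training signal."""
        (self.accepted if user_accepted else self.rejected).append(fee)

advisor = ShippingFeeAdvisor()
advisor.record(12.50, True)          # user accepted a $12.50 fee
advisor.record(45.00, False)         # user rejected a $45.00 fee
print(advisor.recommend(10.00))      # accept
print(advisor.recommend(60.00))      # reject
```

A real cognitive platform would replace the max-of-history rule with a trained model and attach the reasoning behind each recommendation, but the loop — recommend, observe the user, relearn — is the defining feature.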

In addition, instead of providing the user with hundreds of predefined reports organized in three and four levels of drill-downs, such a platform will offer a natural language interface. A user can ask for something such as “spend in aluminum over the past three years and projection for the next three years” and be presented with that report, then ask “should I renegotiate now or wait six months?” The platform runs the analysis on projected savings now vs. projected savings in six months, recommends whether a sourcing event should happen now or in six months, and displays the data and the automated reasoning behind the suggestion. Another platform feeds shipping data and shipping patterns (orders, ASNs, shipment dates, receiving dates, weather, supplier financial data, port data, and other related data) into a specialized machine learning model and predicts, from the time an ASN is issued, when a shipment is likely to be late, allowing a user to make contingency plans in the likely event that happens (as, over time, the model gets more and more accurate, to the point where 90%+ of late predictions actually come true).


Autonomous

This hypothetical level of analytics will be used for:

  • automatic scenario analysis
  • automatic process adjustments
  • automatic replenishment and buying off-contract
  • automatic changes to working capital management

This level of analytics — which doesn’t exist yet — builds on “permissive” and “cognitive” analytics to automate low-level strategic tasks that, historically, one didn’t think could be automated. One example is analyzing changes in category spend over time based on demand patterns, raw material costs, energy costs and supply chain risk, and then changing the sourcing timeline, supplier mix or even category structure to reduce risk and complexity or better capitalize on market availability and opportunity. Another is analyzing working capital availability day by day against predictions and determining when pre-approved payment schedules need to change, when investments need to be divested, when early payment programs need to be stepped up, and when the mix of internal headcount vs. contract labor is not optimal and needs to change over the coming months.

Some vendors claim to have these autonomous capabilities today, but they really don’t (or at least don’t for anything beyond a trivial academic scenario that doesn’t solve a real-world problem) — and this is because this level of analytics requires a level of AI we don’t have yet. But we are seeing the hints, and we might get there. However, the questions you need to ask when you hear claims of autonomous analytics are not just “what does it really do, and how?” but “do we really want it?” The real world is not chess. A highly trained supercomputer will eventually outperform all average people and get to be almost as good as an expert — but without true AI (and we probably don’t want that, for reasons every sci-fi movie has repeatedly explained) it will not beat an expert. And the 1 out of 10 times, or 5 out of 100 times, it fails, it will fail noticeably — and every once in a while, spectacularly. And when one error can more than wipe out the efficiency and savings from 9, or 95, successes, you really want a human making the final decision and the software doing what it does best: trillions of calculations to provide real fact-based data insight to help the expert make the right decision.

(Let’s face it, the only thing scarier than the political and coronavirus climate right now is the thought of a truly intelligent AI. The good news is that, until AI gets there, we will only have to deal with extreme weirdness.)