Procurement Information Architecture Part 3: Analytics

Breaking up the analytics solution for greater design flexibility

Using a multi-part analytics architecture helps break the IT/Procurement logjam and provide more flexibility in vendor selections

In the first installment of Procurement Information Architecture Part 3: Analytics, we discussed the role of analytics in the overall procurement information architecture.

In this second installment, we will discuss the value of breaking the analytics technology stack into different building blocks so that procurement and IT can each better meet their individual goals. Although spend analysis is not even 10% of the overall supply analytics footprint, it is a good illustration of this concept. Here’s my ‘dummies guide to procurement analytics architecture’ in three layers:

Part 1: The supply data warehouse. Or, a place for your stuff.

This starts with defining the ‘analytic data model’ (including master data, transaction data, external content, etc.) that is needed to provide the analytics to support the specific decisions that need to be made. It takes many forms (a simple sketch of such a model follows the list below):

  • In basic ‘spend analysis’, it is the ‘spend cubes’ holding transaction history from AP data, PO data, G/L data, P-card feeds, supplier master data, etc., but can also include budget status, external supplier IDs (e.g., DUNS number), parent-child data, diversity status, etc.
  • In supply base analysis, the added dimensions of supplier types, risk types, spend types, regulations, supplier KPI types, etc. (pulled from internal systems and external content stores) would be added.
  • In working capital analysis, payment terms (i.e., discount rate, net payment date, INCOTERM), external supplier data (e.g., average DSO), etc. would come into play.
  • In strategic sourcing analyses beyond spend analysis, the data starts broadening to cost types, material types, commodity codes, market indices, item codes, and a variety of user-defined (or system derived) codes for market complexity, category impact, project-specific attributes, etc.
  • Dozens more
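To make the ‘analytic data model’ idea concrete, here is a minimal sketch of a spend-cube-style star schema, written in Python purely for illustration. It does not reflect any particular vendor’s schema; the field names (DUNS number, parent-child link, diversity status, etc.) simply mirror the dimensions listed above and are hypothetical.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

# Hypothetical star-schema sketch for a basic spend cube:
# one fact record per AP/PO/P-card line, keyed to supplier and category dimensions.

@dataclass
class SupplierDim:
    supplier_id: str                    # internal supplier master key
    name: str
    duns_number: Optional[str]          # external ID used for enrichment (e.g., D&B)
    parent_supplier_id: Optional[str]   # supports parent-child rollups
    diversity_status: Optional[str]

@dataclass
class CategoryDim:
    category_code: str                  # node in the category/commodity taxonomy
    category_path: str                  # e.g., "Indirect > IT > Software"

@dataclass
class SpendFact:
    source_system: str                  # AP, PO, G/L, P-card feed, etc.
    invoice_date: date
    supplier_id: str                    # FK -> SupplierDim
    category_code: str                  # FK -> CategoryDim
    gl_account: str
    cost_center: str
    amount: float
    currency: str
```

Working capital or supply base analyses would extend this same skeleton with payment-terms, risk, and KPI dimensions rather than requiring a different architecture.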

*Note that the analytic data model should not be derived just from what the various vendors have in place, but also, and perhaps primarily, from the efficient and effective business decisions that need to be made. That said, procurement information architects should use the data models from packaged applications to help paint the “art of the possible” in terms of broader analysis (e.g., starting to bring in cross-reference files such as item-category-supplier to properly model the ‘many-to-many’ data relationships that exist within procurement). For companies running Oracle business applications, the use of Daily Business Intelligence brings this concept to life by taking a snapshot of the production applications and putting it into an analytic data store with packaged reporting sitting on top.

A data warehouse should not be synonymous with a massive IT-driven money pit that is unwieldy in the hands of power users and more casual users alike. But, as the comedian George Carlin said, “you need a place for your stuff”, and IT plays a role in building ‘the plumbing’ from the numerous source systems into the warehouse, even if the warehouse is essentially a staging area for other applications to do something more useful with the data than just ‘slice and dice’ it. So, after the data has been aggregated, it has to be transformed so that it can be analyzed properly to derive value.
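As a rough sketch of that ‘plumbing’, assuming the source extracts land as flat files (the file names and column mappings below are made up), the staging step boils down to normalizing each feed to a common set of columns and stacking them into one place:

```python
import pandas as pd

# Hypothetical column mappings from each source extract to a common staging layout.
SOURCE_FEEDS = {
    "ap_extract.csv":    {"vendor_no": "supplier_id", "inv_amt": "amount", "inv_dt": "invoice_date"},
    "po_extract.csv":    {"supplier":  "supplier_id", "po_value": "amount", "po_date": "invoice_date"},
    "pcard_extract.csv": {"merchant":  "supplier_id", "txn_amt": "amount", "txn_date": "invoice_date"},
}

def load_staging(feeds: dict[str, dict[str, str]]) -> pd.DataFrame:
    """Read each source feed, rename to common columns, and stack into one staging frame."""
    frames = []
    for path, mapping in feeds.items():
        df = pd.read_csv(path).rename(columns=mapping)
        df["source_system"] = path.split("_")[0].upper()   # tag the originating system
        frames.append(df[["source_system", "supplier_id", "invoice_date", "amount"]])
    return pd.concat(frames, ignore_index=True)
```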

Part 2: Data Transformation…with a big “T”

Once you have aggregated your data from the numerous source systems in use, you then need to wrangle it into shape for the appropriate analyses. Of course, this transformation is way easier said than done. Note that when I say “transformation”, I don’t mean the low-level data conversion step that is the “T” in “ETL” (Extract, Transform, Load), where source data is pre-processed from operational data stores before going into the data warehouse.

The ‘transformation’ term is the post-aggregation work where the data is cleansed, de-duplicated, enriched with external content, and “harmonized” (e.g., cross-referenced and properly structured/related such as in the case of parent-child relations) before it can be properly analyzed. You can certainly try to analyze the data before it is transformed, but we don't recommend it other than to highlight the level of source data sparseness, toxicity, etc. that has to be remedied.
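Here is a toy illustration of that post-aggregation transformation, assuming the staging data from the previous sketch plus two hypothetical cross-reference files: one mapping messy source supplier names to a canonical ID, and one holding parent-child relations. Real cleansing and enrichment involves far more (external content, fuzzy matching, human review), but the shape of the work is similar.

```python
import pandas as pd

def harmonize_suppliers(staging: pd.DataFrame,
                        alias_xref: pd.DataFrame,
                        parent_xref: pd.DataFrame) -> pd.DataFrame:
    """Cleanse, de-duplicate, and harmonize supplier records in aggregated spend data.

    alias_xref:  columns [raw_name, canonical_supplier_id] -- maps messy source names
    parent_xref: columns [supplier_id, parent_supplier_id] -- parent-child relations
    """
    df = staging.copy()

    # Cleanse: trim whitespace and case-fold the raw supplier identifier.
    df["supplier_id"] = df["supplier_id"].astype(str).str.strip().str.upper()

    # Harmonize: map raw source names to a canonical supplier ID via the cross-reference.
    df = df.merge(alias_xref, left_on="supplier_id", right_on="raw_name", how="left")
    df["canonical_supplier_id"] = df["canonical_supplier_id"].fillna(df["supplier_id"])

    # Roll child suppliers up to their parents where a parent-child relation exists.
    parents = parent_xref.rename(columns={"supplier_id": "child_supplier_id"})
    df = df.merge(parents, left_on="canonical_supplier_id",
                  right_on="child_supplier_id", how="left")
    df["rollup_supplier_id"] = df["parent_supplier_id"].fillna(df["canonical_supplier_id"])

    # De-duplicate identical transactions that arrived via more than one source feed.
    return df.drop_duplicates(subset=["source_system", "rollup_supplier_id",
                                      "invoice_date", "amount"])
```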

*Note, analytics and MDM (Master Data Management) are obviously terrific bedfellows. Analytic projects will quickly highlight your MDM ‘opportunities’, and many analytic applications can even do a fair amount of MDM natively, setting up a virtual system of reference for the master data and creating more complex data relationships without having to change the simpler/inadequate data models of the source systems.

For example, spend analysis vendor Zycus (now a provider of a full strategic spend management suite) created a sister company called Zynapse to use and extend many of the core technologies from its spend analysis solutions (i.e., taxonomy management, auto-classification) in its MDM product.

A broader example is Informatica – a vendor that went from an analytics-focused Business Intelligence heritage to data integration and MDM. Conversely, master-data-centric vendors in SIM, CLM, catalog management, etc. will find salvation not in low-level data synchronization and workflow, but in the analytics that provide much higher business impact, including areas such as next-generation search/discovery tools, which are essentially a form of data mining analytic application with a flexible, user-friendly front end.

Within this transformation layer sit many vendors. They span a spectrum from content players (e.g., D&B, Lexis-Nexis, Cortera, Bureau van Dijk, CVM Solutions, Panjiva, and many others) to analytics vendors (or larger vendors with analytic modules/capabilities), such as the spend analysis vendors that provide the auto-classification capabilities (whether rules-based or statistically pattern-matching) needed to properly categorize line-item transaction data into a category/commodity taxonomy. Such vendors include Zycus, Ariba, Emptoris (IBM), Bravo Solutions, SAP, Spend Radar (SciQuest), Rosslyn Analytics, and others.
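As a very rough sketch of what rules-based auto-classification does under the hood (commercial tools layer statistical pattern-matching, taxonomies with thousands of nodes, and human review workflow on top of this), the core idea is to map line-item description text to a node in the category/commodity taxonomy. The rules and codes below are invented for illustration:

```python
import re

# Hypothetical keyword rules mapping line-item description text to taxonomy codes.
CLASSIFICATION_RULES = [
    (r"\b(laptop|notebook|desktop|monitor)\b", "IT.HARDWARE"),
    (r"\b(license|subscription|saas)\b",       "IT.SOFTWARE"),
    (r"\b(freight|shipping|carrier)\b",        "LOGISTICS.FREIGHT"),
    (r"\b(temp|staffing|contractor)\b",        "SERVICES.LABOR"),
]

def classify_line(description: str) -> str:
    """Return the first matching taxonomy code, or UNCLASSIFIED for manual review."""
    text = description.lower()
    for pattern, category_code in CLASSIFICATION_RULES:
        if re.search(pattern, text):
            return category_code
    return "UNCLASSIFIED"

# Example: classify_line("Dell notebook for field sales") -> "IT.HARDWARE"
```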

SAP is interesting in that it is looking to tap the collective power of its procurement installed base and find opportunities in supplier risk mitigation and other areas via its Supplier InfoNet solution, which aggregates supplier performance data across multiple buyers and marries it up with external content and with SAP’s data enrichment and predictive performance algorithms to provide insight not available within any single company. This was the vision of OpenRatings (now part of D&B) over a decade ago, and it’s no surprise that some of its top product people are now in this area at SAP.

Part 3: Data Analysis… beyond the BASS system

The third part of analytics is essentially the moment of truth - where the various users determine whether the application is providing the insights needed for reporting, discovery, simulation, etc. The idea is to provide mass democratization of the analytics to all the potential users who can have an impact and to give them the horsepower they need to truly uncover the opportunities latent within the data. “Big Ass SpreadSheets” (the BASS system) and simple databases (e.g., MS Access) are certainly democratized, but not as powerful as purpose-built analytic applications with strong OLAP capabilities. In spend analysis, the best example of such an application is BIQ, which is now part of Opera Solutions.
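To make the contrast with the BASS system concrete, here is a toy ‘slice and dice’ in pandas over the harmonized spend data sketched earlier (the column names are the hypothetical ones used above). Purpose-built OLAP applications do this interactively, at scale, and across far more dimensions than a spreadsheet or this snippet can handle.

```python
import pandas as pd

def spend_cube_views(spend: pd.DataFrame) -> None:
    """Print a couple of quick 'slice and dice' views over harmonized spend data."""
    spend = spend.assign(year=pd.to_datetime(spend["invoice_date"]).dt.year)

    # Dice: total spend by category by year.
    by_category = spend.pivot_table(index="category_code", columns="year",
                                    values="amount", aggfunc="sum", fill_value=0)
    print(by_category)

    # Slice: top suppliers within one category for the most recent year.
    latest = spend["year"].max()
    one_slice = spend[(spend["category_code"] == "IT.SOFTWARE") & (spend["year"] == latest)]
    print(one_slice.groupby("rollup_supplier_id")["amount"].sum().nlargest(10))
```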

And even ERP vendors are getting into the game. Oracle is a great example with its acquisition and increasing integration of Endeca, which provides the multi-dimensional taxonomy management, search/discovery, and ‘drill around’ capabilities that are key at this highest-level portion of the analytics technology stack.

Putting it all together – apply sourcing principles to create your optimal market basket for supply analytics

The whole point of defining the supply analytics area in the three buckets above is to provide flexibility in how you mix and match solution approaches and individual vendors. Don’t create one giant ‘market basket’ of vendors and choose a ‘winner take all’ approach. Put all the applicable vendors in the hopper and then see which combination best optimizes the total end-user priorities (Procurement, IT, and other stakeholders) for time-to-benefit, strategic priority, adherence to standards, cost, etc.
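One simple way to see which combination best optimizes those priorities is a weighted scoring matrix across exactly those criteria. The weights, candidates, and 1-5 scores below are placeholders; the point is the mechanism, not the numbers, and a real selection will obviously involve more nuance than a single score.

```python
# Hypothetical weighted scoring of candidate approaches/vendors for the analytics stack.
CRITERIA_WEIGHTS = {"time_to_benefit": 0.3, "strategic_fit": 0.3,
                    "adherence_to_standards": 0.2, "cost": 0.2}

CANDIDATES = {
    "Incumbent ERP/BI warehouse": {"time_to_benefit": 2, "strategic_fit": 3,
                                   "adherence_to_standards": 5, "cost": 4},
    "Best-of-breed cloud stack":  {"time_to_benefit": 5, "strategic_fit": 4,
                                   "adherence_to_standards": 3, "cost": 3},
}

def weighted_score(scores: dict[str, int]) -> float:
    """Combine 1-5 criterion scores into one weighted total."""
    return sum(CRITERIA_WEIGHTS[criterion] * score for criterion, score in scores.items())

for name, scores in CANDIDATES.items():
    print(f"{name}: {weighted_score(scores):.2f}")
```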

For companies with many back office systems and minimal IT standardization that are early in their procurement journey, cloud-based vendors running the entire stack at a low price point make a lot of sense. Others with big ERP and BI infrastructures might want to have one of those big vendors run the data warehouse, but then layer on the next two levels with a best-of-breed provider. Also influencing the mix are other best-of-breed systems already in place, the complexity and importance of the strategic analytics needed, the budget/time-to-value tradeoff, and other factors. For a discussion of your particular needs, please contact the authors directly.

Pulling back a bit, analytics are an interesting proposition. They are the most strategic of applications, and require both strong technical aptitude (e.g., MDM, integration) and leadership fortitude because they have no explicit hard ROI (i.e., it’s all ‘option value’). However, they are a perfect place to start because they are the simplest form of integration and are a ‘composite application’ of sorts. They support the principle of ‘loosely couple your applications, but tightly integrate the data’. In other words, you can get basic analytics up and running fairly quickly, and then use that to highlight the sins of your data (and your processes and KPIs) that can then self-fund subsequent improvements and broader analytics.

So, analytics are obviously key components of a broader procurement architecture discussion, and as mentioned before, are highly dependent on strong MDM capability. In part 4 of this series, we will investigate the MDM aspect of the procurement architecture and different ways to improve MDM capabilities in order to justify the investment.
