Beyond Supplier Information Management: The Emerging Analytics Challenge

Over the past few months, I've spent an inordinate amount of time interviewing a combination of practitioners, consultants, software experts and even other professionals (lawyers, CPAs, etc.) for a number of research projects focused on the macro topic of how we relate to, manage and act on supplier information. The primary focus of much of my probing has been the gathering and management of external rather than internal information. This type of data can be both unstructured and structured, can come directly from suppliers or from third parties, and is often both qualitative and quantitative in nature. In other words, if gathered successfully, it can create a perfect storm leading to data overload and analysis paralysis. In fact, I've been in a couple of meetings recently where I tossed out a scenario that in a few years' time, we might see the majority of the world's top companies gathering significant CSR information on their supply chains, but with only a limited ability to analyze it and act on it. Such a milestone is not far-fetched.

The fundamental problem as I see it is that our ability to aggregate, share, analyze and visualize information is not keeping up with our ability to gather and store it. Witness, for example, the booming interest in supplier information and supply risk providers of late. I get non-stop questions from companies and consultants about who they should look at and consider in this emerging area (providers include Aravo, CVM Solutions, AECSoft, D&B, Ariba, SupplierForce, Hiperos and Xcitec). There are lots of reasonable -- if not good -- choices to consider in this arena. But do any of these providers truly excel at data aggregation, analysis and visualization? The short answer is no -- except, perhaps, some of D&B's dashboards. But even these are limited to specific areas. Moreover, they barely reach the level at which we'll need to look at information in the future, which will demand continuously considering new data sets and incorporating them into existing analyses.
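To make that last point concrete, here's a minimal sketch -- in Python, with entirely hypothetical feed names and toy data -- of what an aggregation layer built for extensibility might look like, where a new data set (say, CSR audit scores) can be plugged in without rewriting the analysis itself:

```python
# A minimal sketch of the extensibility problem described above: an
# analysis layer that accepts new data feeds without being rewritten.
# The protocol-based design and all feed names are assumptions for
# illustration, not a description of any vendor's product.

from typing import Protocol

class SupplierFeed(Protocol):
    """Any data set that can yield (supplier_id, metric_name, value) rows."""
    def rows(self) -> list[tuple[str, str, float]]: ...

class FinancialScores:
    """Hypothetical third-party financial risk feed."""
    def rows(self):
        return [("SUP-001", "financial_risk", 0.2)]

class CsrAudits:
    """A feed added later -- note that no core aggregation code changes."""
    def rows(self):
        return [("SUP-001", "csr_score", 0.9)]

def aggregate(feeds: list[SupplierFeed]) -> dict[str, dict[str, float]]:
    """Merge all registered feeds into one per-supplier profile."""
    profiles: dict[str, dict[str, float]] = {}
    for feed in feeds:
        for supplier_id, metric, value in feed.rows():
            profiles.setdefault(supplier_id, {})[metric] = value
    return profiles

print(aggregate([FinancialScores(), CsrAudits()]))
# {'SUP-001': {'financial_risk': 0.2, 'csr_score': 0.9}}
```

The point of the interface-style design is that the aggregation code never needs to know which feeds exist -- the kind of flexibility the paragraph above argues tomorrow's tools will need.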

As procurement becomes more of a knowledge management function going forward -- the keeper of a range of internal and external information including commodity market analyses, supply risk, etc. -- we'll need to recognize that aggregating internal information (e.g., systems data, qualitative insights from scorecards, aggregate spend analysis information) with a variety of external and third-party data feeds on as close to a real-time basis as possible will remain a challenge. From a business standpoint, those who do this right will be able to greatly improve how they interact with suppliers and save the business money (not to mention reduce risk). Need an example of how real-time access to information is critical? Consider the ability to steer front-line users toward the best total cost decisions. Here I'm not just talking about a requisitioning standpoint (including such areas as warranties) but also make vs. buy decisions, who should own different risk elements in a contract, etc.
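As a rough illustration of that total-cost steering, here's a minimal Python sketch -- the supplier IDs, prices, risk scores and the risk-premium formula are all invented for the example -- that joins internal spend records with an external risk feed to rank options on risk-adjusted total cost rather than unit price alone:

```python
# A minimal sketch, assuming hypothetical data shapes: internal spend
# records joined with an external risk-score feed by supplier ID, to
# produce a risk-adjusted total-cost view a requisitioning front end
# could surface. The 15% maximum risk premium is a placeholder; a real
# model would calibrate it against disruption and switching costs.

from dataclasses import dataclass

@dataclass
class SpendRecord:          # internal systems data (e.g., ERP, spend cubes)
    supplier_id: str
    category: str
    unit_price: float
    annual_volume: int

@dataclass
class RiskScore:            # external/third-party feed
    supplier_id: str
    score: float            # 0.0 (low risk) .. 1.0 (high risk)

def risk_adjusted_cost(spend: SpendRecord, risk: RiskScore,
                       risk_premium: float = 0.15) -> float:
    """Inflate base annual spend by a premium proportional to supplier risk."""
    base = spend.unit_price * spend.annual_volume
    return base * (1.0 + risk_premium * risk.score)

internal = [
    SpendRecord("SUP-001", "fasteners", 1.10, 50_000),
    SpendRecord("SUP-002", "fasteners", 1.02, 50_000),
]
external = {r.supplier_id: r for r in [
    RiskScore("SUP-001", 0.10),
    RiskScore("SUP-002", 0.85),   # cheaper unit price, much riskier
]}

# Rank options by risk-adjusted total cost, not sticker price.
for rec in sorted(internal,
                  key=lambda r: risk_adjusted_cost(r, external[r.supplier_id])):
    print(rec.supplier_id,
          round(risk_adjusted_cost(rec, external[rec.supplier_id]), 2))
```

In this toy data, the supplier with the lower unit price ends up costlier once its risk score is priced in -- precisely the kind of signal a front-line user would want delivered in real time.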

Heady and useful stuff, you say, but making this possible will require a range of new and emerging analytics and visualization capabilities that either sit on top of or are embedded in our decision support and transaction platforms. These might come from traditional providers heavily invested in BI (and trying to adapt these platforms to what we'll need in the future). Oracle and SAP fall into this camp. We might also turn to customer/supplier data integration and presentation specialists such as Purisma (D&B), Initiate Systems and Silver Creek Systems, which can work with giant datasets on a real-time, query-driven basis. And without question, we'll need access to a powerful set of analytical tools (e.g., BIQ) built for power users, as well as other tools aimed at regular business users. Front-line visualization solutions (e.g., Endeca) that flexibly aggregate and deliver structured and unstructured content to business users, allowing for the discovery of new insights and interrelationships between datasets, will also be critical.
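To show what "aggregating structured and unstructured content" can mean at its simplest, here's a deliberately crude Python sketch -- the supplier names, news snippets and keyword list are all made up, and real platforms use far more sophisticated entity resolution and text analytics -- that ties unstructured news mentions back to structured supplier records and flags the risk-relevant ones:

```python
# A toy sketch of combining structured supplier records with unstructured
# text (e.g., news snippets) so a business user sees risk-relevant
# mentions next to supplier data. Keyword matching is a crude stand-in
# for the discovery capabilities real tools provide.

from collections import defaultdict

suppliers = {"SUP-001": "Acme Fasteners", "SUP-002": "Global Bolt Co"}

news_feed = [  # unstructured third-party content (illustrative)
    "Global Bolt Co announces plant closure amid cash-flow concerns.",
    "Acme Fasteners wins quality award for aerospace components.",
]

RISK_TERMS = {"closure", "bankruptcy", "recall", "lawsuit", "cash-flow"}

# Link each snippet to any supplier it names, flagging risk terms.
mentions = defaultdict(list)
for snippet in news_feed:
    lowered = snippet.lower()
    for sup_id, name in suppliers.items():
        if name.lower() in lowered:
            flagged = any(term in lowered for term in RISK_TERMS)
            mentions[sup_id].append((snippet, flagged))

for sup_id, items in mentions.items():
    for snippet, flagged in items:
        tag = "RISK" if flagged else "info"
        print(f"[{tag}] {suppliers[sup_id]}: {snippet}")
```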

My guess is that most companies won't come to the conclusion that they have a top-layer data analysis problem until they take stock of all the information they're beginning to collect and aggregate. Maybe this will take years. But when it does happen, we'll see a field day of investments designed to make sense of it all. In the meantime, perhaps we'll see existing providers in the sector organically develop and/or plug in new analytics and visualization capabilities on top of what they're already offering.

- Jason Busch
