In Part 2 of this spend analysis series, we looked at analyzing total cash disbursements to suppliers through the lens of basic A/P data. In doing so, we touched on the issue of data quality, and in particular, the data coming from the supplier master file (or vendor master, if you use that terminology). Obviously, if the supplier master data is bad (i.e., dirty, sparse, duplicated, non-standardized, etc.), the spend data will show it. But highlighting bad data isn’t about improving data hygiene as an end in itself; it’s about fixing the data problem to surface value creation opportunities that you couldn’t see before. The most frequent example is supplier master duplication, where multiple supplier records exist for the same supplier.
When you find duplicate supplier master records, you can obviously begin to see where there is additional volume leverage to be gained within strategic sourcing. This is a key capability for justifying the ROI of the investment needed to reach this level of capability. However, spend analysis is not just about feeding the strategic sourcing process! Duplicate supplier master records for the same supplier can point to a whole slew of root causes that should be addressed, including a lack of clarity/controls in the supplier master setup process (e.g., who can add/change/delete which fields in the supplier master file) or poor “supplier discovery” capabilities for finding existing suppliers within your own supplier network.
Up until now in this series, much of what we have talked about can be done on your own, albeit inefficiently. But in the areas of data de-duplication, cleansing, enrichment, auto-classification, and harmonization, the tools can really help. This is also where supplier content providers of many forms can be used (i.e., content firms, MDM providers, supplier/business networks, analytics vendors, supplier management application providers, procurement suite providers, etc.). Such firms can assist in this key de-duplication task using a combination of fuzzy logic (pattern matching), proprietary databases, and rules-based analyzers.
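To make the fuzzy logic and rules-based approach concrete, here is a minimal sketch of what such a de-duplication pass might look like. It is not any vendor's actual algorithm; the supplier names, the legal-suffix list, and the 0.85 similarity threshold are all illustrative assumptions, and it uses Python's standard-library `difflib` for the pattern matching.

```python
# Sketch: rules-based normalization + fuzzy matching to flag likely
# duplicate supplier master records. Names, suffix list, and threshold
# are illustrative assumptions, not a production matching engine.
import re
from difflib import SequenceMatcher

# Rules-based step: common legal suffixes to strip before comparing.
LEGAL_SUFFIXES = {"inc", "incorporated", "llc", "ltd", "corp", "co", "company"}

def normalize(name: str) -> str:
    """Lowercase, strip punctuation, and drop common legal suffixes."""
    tokens = re.sub(r"[^a-z0-9 ]", " ", name.lower()).split()
    return " ".join(t for t in tokens if t not in LEGAL_SUFFIXES)

def likely_duplicates(records, threshold=0.85):
    """Return (name_a, name_b, score) pairs whose normalized forms are similar."""
    pairs = []
    for i, a in enumerate(records):
        for b in records[i + 1:]:
            score = SequenceMatcher(None, normalize(a), normalize(b)).ratio()
            if score >= threshold:
                pairs.append((a, b, round(score, 2)))
    return pairs

# Hypothetical supplier master extract: three records for one supplier,
# differing only in casing, legal suffix, and a typo.
suppliers = [
    "Acme Industrial Supply, Inc.",
    "ACME Industrial Supply LLC",
    "Acme Indstrial Supply",
    "Beta Logistics Corp",
]
for a, b, score in likely_duplicates(suppliers):
    print(f"{a!r} ~ {b!r} (similarity {score})")
```

Real providers layer proprietary reference databases and richer matching (addresses, tax IDs, DUNS numbers) on top of this kind of name matching, which is why the pairwise string comparison above is only the starting point.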