Data as the ‘Alpha and the Omega’ of Artificial Intelligence: Basware’s Eric Wilson Makes the Case
09/26/2017
Spend Matters’ digital pages have been no strangers to coverage of big data and artificial intelligence as they pertain to procurement over the past couple of years. And while the exact degree of interplay between the two may be up for debate, what’s not is that data and AI remain the hot topics of the moment.
Eric Wilson, vice president of P2P at Basware, will be taking up that very topic at a conference this week, and we caught up with him for his point of view on two of the biggest buzz-phrases in digital technology.
Spend Matters: What was the way into procurement for you?
Eric Wilson: It’s good to look back. I’ve actually been in the procure-to-pay space now for more than 12 years and came into the procurement world by being hired into the Verian organization, a procure-to-pay player based out of Charlotte. I learned the ropes there and continued to learn from lots and lots of procurement and full P2P customers. And then of course with the acquisition by Basware [in April 2016], I joined the world of Basware.
SM: Were you always interested in getting into the field or is that something you fell into, rather than planned out?
EW: Definitely always planned to be in a position to provide high-value enterprise systems to customers, and I definitely saw a gap in the procurement world in particular. Organizations like Verian were helping to fill that gap. It was an intentional move, because we saw an unfulfilled need in the world of e-procurement.
SM: So let’s get to what you plan on talking about at Procurious’ Big Ideas Summit this week. Why is data “the alpha and the omega” of artificial intelligence, as the promo materials put it?
EW: When you’re looking at overall technology trends within procurement systems, ERP systems or any enterprise systems these days, you have to get your head out of the weeds of individual pieces of functionality and realize that the way of “tomorrow” is to take advantage of technology advances like AI, machine learning and, even beyond that, prescriptive and predictive analytics and peer benchmarking. But all of those technologies have data as a basic prerequisite, and not just some data but gobs and gobs of data, because these machines can’t offer predictive insights if they don’t have tons of data to consume. They can’t actually prescribe actions unless they have a full set of data on which to base those prescriptions. That’s why we say the first thing you’d better be thinking about when considering enterprise systems, whether P2P or back-end ERP, is: “Can it and does it capture all of the necessary transactional data, and is it architected in a way that lets you get value not only from your own transactional data but also from publicly available data and the transactional data of many, many other enterprises?” That’s where AI and similar technologies are truly going to provide a lot of value.
SM: So capturing data of all kinds is clearly essential. But where is all this data going to come from? Some would argue that you don’t find much on POs and invoices…
EW: I would push back on the point that there’s not a lot of valuable data on the purchasing side. For example, when you’re able to actually capture 100% of that spend data, whether it’s coming from the upfront purchase orders or from all the invoices flowing through the system, and when the system can really see all of that spend (not just indirect spend and the indirect purchase orders coming out of the P2P system, but your direct materials, facilities spend and all kinds of niche vertical-market spend running through there), then, yes, you do have a treasure trove of information. [In turn], the machine can say, for example: based on your historical purchasing patterns around this particular commodity, and looking also at publicly available data around commodity pricing and seasonality, I can tell you, “Hey, this is the time to actually purchase that commodity, as opposed to what you were planning to do otherwise.” That data is out there and available if the system is centrally architected to capture all of it.
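Wilson’s commodity example boils down to a simple seasonality heuristic. As a rough illustration only (this is not Basware’s actual method; all prices, field names and thresholds below are invented), a system holding full price history could flag the historically cheapest buying window like this:

```python
from statistics import mean

# Hypothetical monthly unit-price history for one commodity, blending an
# organization's own captured PO/invoice prices with public market data.
# Keys are month numbers (1-12); values are observed unit prices.
price_history = {
    1: [102.0, 104.5], 2: [101.0, 99.8], 3: [97.5, 96.0],
    4: [95.0, 94.2],   5: [93.8, 95.1], 6: [98.0, 99.5],
    7: [103.0, 105.2], 8: [106.5, 104.9], 9: [101.3, 100.0],
    10: [98.7, 97.9], 11: [96.5, 95.8], 12: [99.0, 100.4],
}

def cheapest_month(history: dict[int, list[float]]) -> int:
    """Return the month whose average historical price is lowest."""
    return min(history, key=lambda m: mean(history[m]))

def should_buy_now(current_month: int, history: dict[int, list[float]],
                   tolerance: float = 0.02) -> bool:
    """Recommend buying now if this month's average price sits within
    `tolerance` (e.g. 2%) of the historical seasonal low."""
    low = mean(history[cheapest_month(history)])
    return mean(history[current_month]) <= low * (1 + tolerance)

if __name__ == "__main__":
    print(f"Historically cheapest month: {cheapest_month(price_history)}")
    print(f"Buy in month 5? {should_buy_now(5, price_history)}")  # True
    print(f"Buy in month 8? {should_buy_now(8, price_history)}")  # False
```

A production system would of course use far richer models, but the sketch shows why volume of captured data matters: with only a handful of price points per month, the “seasonal low” signal is noise.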
SM: In your view, what is the crucial thing or set of things necessary to have in place in a given system to ensure that type of capture?
EW: It really comes down to three main criteria.
Number one: can you get all of your suppliers connected to the system? I don’t just mean the big guys of the world, the EDI players, the Graingers and Office Depots that can connect to your system, but can you get the smaller, long tail of suppliers connected too? Can you get the folks that are sending in paper? If not, you can’t capture that data electronically in terms of the purchase orders going to the suppliers and the invoices coming back from them.
Secondly, will all the end users actually use the system? And not because procurement is putting the proverbial gun to their head to do so, but because it happens to be the easiest way to get their job done? You have to have all of your end users actually using the system. Seems like a no-brainer, but sometimes procurement systems seem to forget that.
The third point, which is where most P2P systems actually fall down, is that you have to be able to run all of your spend through on the invoicing side: not just the invoices that match up against the indirect POs mentioned earlier, but 100% of your invoices flowing through the system. Once you’ve done that and confirmed that you’ve got all three of those components in place, then you have the opportunity to take advantage of these technology advances.
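Wilson’s three criteria also lend themselves to straightforward measurement. As a hedged sketch (the record layout and flags are hypothetical, not drawn from any particular P2P product), one could score captured spend against each criterion like this:

```python
# Hypothetical transaction records; in practice these would come from
# the P2P system's captured purchase orders and invoices.
transactions = [
    {"supplier": "Grainger",         "electronic": True,  "via_system": True,
     "invoice_captured": True,  "amount": 12_000},
    {"supplier": "Local Print Shop", "electronic": False, "via_system": True,
     "invoice_captured": False, "amount": 800},
    {"supplier": "Office Depot",     "electronic": True,  "via_system": False,
     "invoice_captured": True,  "amount": 3_500},
]

def capture_scores(txns: list[dict]) -> dict[str, float]:
    """Share of total spend meeting each of the three criteria:
    supplier connectivity, end-user adoption and invoice capture."""
    total = sum(t["amount"] for t in txns)
    def share(flag: str) -> float:
        return sum(t["amount"] for t in txns if t[flag]) / total
    return {
        "suppliers_connected": share("electronic"),
        "users_on_system":     share("via_system"),
        "invoices_captured":   share("invoice_captured"),
    }

if __name__ == "__main__":
    for criterion, pct in capture_scores(transactions).items():
        print(f"{criterion}: {pct:.0%}")
```

Anything well short of 100% on any of the three scores marks spend that, per Wilson, the machines will never see.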
SM: With that in mind, assuming an organization can effectively ingest all of the necessary data through these methods, what would you suggest is the best way to go about managing that data?
EW: If you have a provider that is closely connected to the technology trends and to where the market is going, hopefully the management of that data doesn’t become overly difficult for you as a buying organization. The provider should be doing that, and it should be something you’re evaluating when looking at these solutions. That data cannot sit in a bunch of disparate places. It’s got to be in one central hub that you can easily tap into, either with the analytical tools that are part of that system or with your own third-party analytical tools like a Tableau or a Qlik. But that’s definitely a great question, because capturing all the data is only one part of it; the second part is to actually make use of it.
But how to make use of all the data fast enough to derive anything truly insightful? And if more machines replace human workers, how should organizations quantify the risks involved? To be continued in Part 2 of this interview.