Future of Procurement – Computer Says No …

Image: Wrens operating the 'Colossus' computer, 1943. Colossus, built at Bletchley Park in Buckinghamshire, was the world's first electronic programmable computer. Bletchley Park was the British forces' intelligence centre during WWII, where cryptographers deciphered top-secret military communiques between Hitler and his armed forces. The communiques were encrypted in the Lorenz code, which the Germans considered unbreakable, but the codebreakers at Bletchley cracked it with the help of Colossus, and so aided the Allies' victory. Credit: Bletchley Park Trust/Science & Society Picture Library.

The TV sketch show Little Britain has dated badly in some respects since its heyday around 15 years ago. David Walliams has gone on to become a National Treasure and a top-selling children’s author, while Matt Lucas is an actor who has been a Doctor Who star in his own right.

But in one sketch, the team were ahead of their time. The “computer says no” sketches showed an uncaring bureaucrat, in an employment office or bank perhaps, refusing a perfectly reasonable request from a customer by simply saying “computer says no”. The jobsworth was not interested in why the computer said no, or in the logical arguments of the affected party. The humour, painful though it was, came from the increasing frustration of that individual set against the bored, detached approach of the computer user.

As we move into a world of increasing AI and machine learning, not least in our world of procurement and supply chain, might we find ourselves getting into that sort of situation increasingly often? We wrote recently about how we can see AI helping procurement through the whole sourcing process. We also hear from solution providers about how they are building risk factors into their systems – so, for instance, the system might suggest to the buyer that a certain supplier is high risk and perhaps should not be used for a particular contract.

But the danger is clear. You can imagine the scenario.

Supplier: I just wanted to know why you didn’t renew our contract last month.

Buyer: Computer says no.

Supplier: But we have supplied you for five years. We know our prices are very competitive, and we give you great service.

Buyer: Computer says no.

Supplier: Well, why does it say no?

Buyer: Computer says you are high risk.

Supplier: Well, what does it mean, high risk? We’re doing fine, money in the bank …

Buyer: Computer says high risk. Computer says don’t buy from you.

Now of course, no procurement professional would ever behave like this… But we need to guard against that sort of situation as technology becomes (in theory, anyway) smarter and moves into what is effectively decision-making. That requires two approaches from providers and buyers.

Firstly, the development and implementation of the algorithms, rules and learning routines that will power this AI need to be informed by procurement professionals and the users of these systems, not just by the software designers.

In other words, it should not be up to a bunch of coders in Mumbai or Manchester to decide which risk parameters or metrics should dictate a “computer says no” outcome to be presented to the buyer. That also means more experienced procurement professionals need to jump over to the solution provider side so they can help inform the developers how this should work.

Secondly, we must have transparency. Both the buyer and the suppliers need to be able to see why the system is suggesting the decisions or outcomes. If the computer says no, then the computer should also be prepared to tell all those impacted exactly why it has said no.
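To make the transparency point concrete, here is a minimal sketch of what a risk check that explains itself might look like. The risk factors, weights and threshold are illustrative assumptions, not any real vendor's scoring model; the point is simply that the reasons travel with the verdict.

```python
def assess_supplier(supplier: dict, threshold: float = 0.5) -> dict:
    """Score a supplier's risk and, crucially, return the reasons behind the score."""
    # Hypothetical factor weights -- in practice these should be set with
    # input from procurement professionals, not just the developers.
    factors = {
        "late_deliveries_pct": 0.4,    # share of deliveries late last year (0.0-1.0)
        "financial_distress": 0.4,     # 0.0 (healthy) to 1.0 (distressed)
        "single_site_dependency": 0.2, # 1.0 if all supply comes from one site
    }
    score = 0.0
    reasons = []
    for name, weight in factors.items():
        value = supplier.get(name, 0.0)
        contribution = weight * value
        score += contribution
        if contribution > 0:
            reasons.append(f"{name} = {value} contributed {contribution:.2f}")
    decision = "high risk" if score >= threshold else "acceptable"
    # The explanation is part of the output -- no bare "computer says no".
    return {"decision": decision, "score": round(score, 2), "reasons": reasons}

verdict = assess_supplier({"late_deliveries_pct": 0.8, "financial_distress": 0.5})
print(verdict["decision"])  # which way the computer leans...
print(verdict["reasons"])   # ...and, crucially, why
```

A real system would use far richer models than a weighted sum, but whatever the model, the principle holds: the output should pair the decision with the factors that drove it, so both buyer and supplier can interrogate it.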

That seems fundamental, by the way, well beyond the world of procurement, and must be a safeguard for citizens generally. Otherwise we get into dystopian science-fiction territory – “the computer says I have to arrest you / close your bank account / take your kids into care”. Why? “Because the computer says so”.

So while the future of procurement includes some exciting possibilities around AI and machine learning, we do need to be careful. We want the computer to make suggestions, and tell us why, not just to say no!


First Voice

  1. Dkfm. Heinz Pechek:

    Dear Peter, you are completely right: none of us knows what AI will influence, or in which ways, across all our behaviour and decisions in the economy, and in society and politics as well. AI could be a great assistant for mankind but also – I believe – a great danger for society and mankind. AI has no idea of ethical behaviour or responsibility. Can we discuss this some more, maybe during one of our conferences for CPOs?
    Yours, Heinz
