Peter Marshall of Commerce Decisions – Do We Understand Our Tendering Processes? (Part 2)

Yesterday we featured the first part of our interview with Peter Marshall of software firm Commerce Decisions. Today we look at his views on tender evaluation methodologies.

So Peter, what is the problem here?

“Basically, we are using techniques we don't understand, and that can produce undesirable results. For instance, we have seen tenders where the evaluation methodology in fact means that the contracting authority is willing to pay twice as much - or more - for an “excellent” solution compared to an “acceptable” solution.”

“Now that may be right, but if you ask most budget holders, they might say “I will pay 20% more - but not twice as much”. Yet that is the implication of the system they are using. This confusion arises because users often don't understand the implications and effects of the evaluation schemes. And the related but separate problem is that these methods often don't even rate the bids consistently and reliably – we can end up with results that are almost arbitrary”.

Can you give an example of that? I’ve certainly seen different scoring processes that can lead to different supplier choices.

“Yes, this is one I talked about at the eWorld session. Let's say we run a competition between two suppliers – call them Green and Red. We evaluate the tenders on cost, weighted at 40%, and technical factors, weighted at 60%, as the two top-level criteria. Here are the results.

“The cost scores are calculated from the ratio of the cheapest bid to each bid – a very common methodology: cost score = cost weighting × (cheapest bid ÷ this bid). So in this case, Red just wins as a lower quality but significantly cheaper bid”.

Bidder   Technical (/60)   Cost (£m)   Cost score (/40)   Total score
Green    54                8           25                 79
Red      41                5           40                 81
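
To make the arithmetic concrete, here is a minimal sketch in Python of the relative cost-scoring mechanism Peter describes, assuming the 40-point cost weight and 60-point technical scale from the table; the function and variable names are illustrative, not from any Commerce Decisions product:

```python
# A minimal sketch of the relative scoring mechanism described above:
# cost score = cost weight x (cheapest bid / this bid), added to the raw
# technical score out of 60. All names here are illustrative.

COST_WEIGHT = 40  # cost weighted at 40 points, technical at 60

def score_bids(bids):
    """bids maps bidder name -> (technical score out of 60, price in GBP millions)."""
    cheapest = min(price for _, price in bids.values())
    scores = {}
    for name, (technical, price) in bids.items():
        cost_score = COST_WEIGHT * cheapest / price
        scores[name] = technical + cost_score
    return scores

print(score_bids({"Green": (54, 8), "Red": (41, 5)}))
# {'Green': 79.0, 'Red': 81.0}  -> Red wins on price despite lower quality
```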

OK, that all looks fine...

“But now, we discover that another tender which we thought had arrived late was in fact submitted on time. We need to score that too - the Blue bid. So Blue scored 32 on Technical and came in at £4 million. Applying the same formula, this is how the scores now look – the other bidders' scores for cost change because they are calculated against the lower Blue bid”.

Bidder   Technical (/60)   Cost (£m)   Cost score (/40)   Total score
Green    54                8           20                 74
Red      41                5           32                 73
Blue     32                4           40                 72
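
Reusing the score_bids sketch above, adding Blue's bid rescales every cost score and reverses the Green/Red ranking, even though Blue finishes last:

```python
# The same score_bids function, now including the late-arriving Blue bid.
# Blue's lower price rescales every cost score and flips the winner.
print(score_bids({"Green": (54, 8), "Red": (41, 5), "Blue": (32, 4)}))
# {'Green': 74.0, 'Red': 73.0, 'Blue': 72.0}  -> Green now wins; Blue is last
```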

So Blue does not win – but merely introducing that third bid flips the decision from Red to Green!

“Exactly! So how can we say this is achieving best value for money, or is a fair process, if the ranking is as arbitrary as this? The marking of one bid depends on the price of another – even if that bid is not a strong one overall.”

I have also constructed examples a bit like this. But why is it that these dodgy methodologies are still used so often?

“The good news is that suppliers understand this even less well than contracting authorities, so we don't see many challenges to buyer decisions. What tends to happen, I believe, is that suppliers pitch their bids around what they believe the budgets to be. But because they can't know how important cost is going to be in the marking scheme, they can't truly “optimise” their bids in terms of real VFM. What we get, if we are lucky, are the best bids that are affordable in the suppliers' eyes. These evaluation processes are so opaque that we don't get optimal bids; suppliers don't understand what we mean by VFM – so they can't offer it”.

So what’s the answer?

“We need to be more transparent on scoring, and be clear about how we value cost against the other evaluation factors. That will help suppliers offer real value, and take away the illogicality of their score being dependent on other bidders”.
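
As an illustration of the kind of bidder-independent scoring Peter is arguing for – a generic sketch, not Commerce Decisions' RVFM method – each price could be scored against a published reference such as the budget, so one bid's mark never depends on another's price. The £8 million budget figure and the linear scale below are assumed purely for illustration:

```python
# Illustrative only - not Commerce Decisions' RVFM method. Scoring price
# against a published reference (here, an assumed GBP 8m budget) makes each
# bid's mark independent of its rivals' prices.
COST_WEIGHT = 40
BUDGET = 8.0  # assumed published maximum price, in GBP millions

def independent_score(technical, price):
    # Full cost marks at zero price, zero marks at (or above) the budget.
    cost_score = COST_WEIGHT * max(0.0, (BUDGET - price) / BUDGET)
    return technical + cost_score

for name, (tech, price) in {"Green": (54, 8), "Red": (41, 5), "Blue": (32, 4)}.items():
    print(name, round(independent_score(tech, price)))
# Green 54, Red 56, Blue 52
```

Under a scheme like this, adding or withdrawing a bid never changes the other bidders' scores, so the Green/Red reversal in the example above could not occur.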

“At Commerce Decisions we work with our clients to implement different ways of combining the technical scores and prices to reliably rank the tenders based on their perceived value for money. We call our method “Relative Value For Money” or RVFM”.

Many thanks to Peter Marshall – and if you want to get into these areas in more detail, you can contact him via info@cd.qinetiq.com.

Voices (2)

  1. bitter and twisted:

    What empirical evidence is there that tendering actually works?

Why not just call in the stakeholders' favourite and beat them up with the cheapest serious alternative?

    Remember – I never joke.

    The stakeholders get their way. The beancounters are placated.

‘Cos let's face it, Procurement is actually all about The Specification, and Big Buys are fundamentally about Avoiding Crap.

(btw ‘b+t’ is my ‘fumbling on the mobile’ handle)

  2. Nick @ Market Dojo:

    I hope to see the back of the erroneous (cheapest bid / other bid) mechanism. It creates non-linear results and anomalies like the example given. Completely agree that no one using the mechanism seems to understand what issues they are getting themselves into.

We’ve written up the flaws of this mechanism and some of the other popular types that exist:

    http://blog.marketdojo.com/2014/02/tender-evaluation-and-linearity.html
