More on evaluating tenders (part 4) – why a common method for scoring price is flawed

In part 3 of our series on tender evaluation, we asked: what are the flaws in the price-scoring methodology that many organisations are now using (giving the cheapest bid 100 points and scoring the rest based on their percentage variation from it)?
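To make the mechanics concrete, here is a minimal sketch of that rule in Python. The formula is our reading of the method as described above; the function and supplier names are purely illustrative.

```python
def score_relative_to_lowest(price, lowest):
    """Cheapest bid scores 100; others lose one point per
    percentage point above the cheapest, floored at zero."""
    percent_above = (price - lowest) / lowest * 100
    return max(0.0, 100.0 - percent_above)

# The example used in point 1 below: Z bids £100, A £180, B £198
bids = {"Z": 100, "A": 180, "B": 198}
lowest = min(bids.values())
for supplier, price in bids.items():
    print(supplier, score_relative_to_lowest(price, lowest))
# Z 100.0, A 20.0, B 2.0
```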

Here are a few...

1. Possible reliance on a “rogue” bid

The cheapest bid might be unfeasibly low, and/or come from a bid that is otherwise well below the required quality standards. Say Supplier Z bids £100. The next cheapest bid is £180 from Supplier A, while Supplier B is at £198.

So Z scores 100 points. A scores 20 points (being 80% higher), and B scores just 2 points.

But if Z were taken out of the picture, which it arguably should be if its bid is basically rubbish, then A would score 100 and B would score 90 (being 10% more expensive than A).

So including Z has done two things. It has made the effective weighting on price much lower, because the A and B scores are low compared to the likely scores in the non-price area. And it has distorted the differential between A and B, making it an 18-point difference rather than the 10 points it would be if they were the only two being considered.

2. Potential for zero scores

All bids that are more than twice the lowest price score zero: a bid 100% above the cheapest has lost all 100 points, and the score cannot go negative. That cannot be logical under the “most economically advantageous” regulatory definition. In our example above, a bid of £300 scores the same as a bid of £200, i.e. zero. That does not reflect economic advantage properly.

While that may be rare in practice, I have seen it happen. And arguably it might persuade a judge, if it came to a challenge, that the whole methodology was flawed.

3. Inconsistency compared to non-price scores

My price score depends on other bidders. That is not the case with non-price criteria, where marks are given against a notional but external scale, so my score is independent of other bidders. Making the price score dependent on others is inconsistent and illogical: my price must be “worth” a certain score (or utility) to the buyer, whatever others might propose.

So, if we don't like this method, what do we recommend? Well, one option is this. Work out the lowest price you think is remotely possible – basically a price that, if someone bid lower, you wouldn’t believe them and would look for an explanation (unfeasibly low bidding). That scores 100. Then consider the highest price you could possibly afford – that scores zero. (I think there is a logic to saying anything above that price is then disqualified as unaffordable, but I’d want to take legal advice on that idea, as I must admit I haven’t tried it in real-life tenders!)

So, in the case above, you might decide that £50 is the lowest feasible price and £250 the highest.

When the bids come in, we place each bid on that scale, scoring it in proportion to how far it sits below the £250 ceiling. So Supplier Z scores 75 marks, Supplier A scores 35 marks and Supplier B scores 26.
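In code, this is simple linear interpolation between the two anchor prices. Again a minimal sketch, assuming straight-line scoring between the £50 and £250 figures chosen above and clamping anything outside them:

```python
def score_on_fixed_scale(price, floor=50, ceiling=250):
    """Score linearly between a pre-set floor (100 points) and
    ceiling (0 points), independently of the other bids."""
    score = (ceiling - price) / (ceiling - floor) * 100
    return max(0.0, min(100.0, score))  # clamp bids outside the scale

for supplier, price in {"Z": 100, "A": 180, "B": 198}.items():
    print(supplier, score_on_fixed_scale(price))
# Z 75.0, A 35.0, B 26.0
```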

That seems more reasonable, and it avoids the logical flaws we highlighted above. It requires some work in advance – but it also has the benefit of giving suppliers a clear view of our price expectations.

Anyway, I hope we’ve made you think about this issue – there is lots more that could be said, and I suspect we’ll come back to it in the future. And we’ll have our final (for now) part in this series tomorrow.

Voices (3)

  1. life:

    We use 100% for the lowest bid and 0% for the highest, with the others indexed between the two. It doesn’t answer all the weaknesses above (every method will have some) and is only used where there’s a larger response (so no “binary” results from two runners).

    2 and 3 are very relevant but I’m not sure conclusions on 1 necessarily stack up. Thanks for very interesting series of pieces…
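For comparison, here is a minimal sketch of the indexing method described in the comment above, assuming straight linear interpolation between the lowest and highest bids received (the tie-breaking rule for identical bids is our own assumption):

```python
def score_indexed(price, all_prices):
    """Lowest bid scores 100, highest scores 0, with the
    rest indexed linearly between the two."""
    lo, hi = min(all_prices), max(all_prices)
    if hi == lo:
        return 100.0  # all bids identical
    return (hi - price) / (hi - lo) * 100

prices = [100, 180, 198]
print([round(score_indexed(p, prices), 1) for p in prices])
# [100.0, 18.4, 0.0]
```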

  2. Rob:

    There are better relative scoring systems currently in use. There is also a ‘two-stage, averaged, relative scoring system’ which is very effective.

    Often, it is best to identify whether the solution you are sourcing is, for example, fairly commoditised from a mature market or is novel/innovative from a mature/emerging/new market. This also needs to be carefully overlaid with an assessment of some of the key differentiators being procured, which includes, again for example, ‘intellectual capital’ versus ‘process’. Think about this: executive headhunting (which is used to identify and attract the right critical employees, first time) versus a recruitment service (which you might use to efficiently, transactionally access a relevant but fluctuating database of potential employees, for an agreed fee).

  3. Plan Bee:

    The method you describe is also mathematically unsound, especially if the other factors are not evaluated in the same way.

    Other options are to use the budget or the previous price paid as the baseline: 20% below the baseline gives you a score of 120.
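And a minimal sketch of that baseline idea, assuming the score moves one point per percentage point below (or above) the baseline; whether and where you cap the score is a design choice the comment leaves open:

```python
def score_against_baseline(price, baseline):
    """Baseline (budget or previous price paid) scores 100;
    each 1% below it adds a point, so 20% below scores 120."""
    return 100.0 + (baseline - price) / baseline * 100

print(score_against_baseline(80, 100))   # 120.0
print(score_against_baseline(110, 100))  # 90.0 - above baseline loses points
```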
