Is Government Making the Wrong Buying Decisions?


We are pleased to publish this post from Alan Day, founder and chairman of State of Flux (a global procurement consultancy and SRM software provider), looking at what's behind the government's flawed buying methods.

The UK government puts a lot of effort into ensuring its buying process is fair and auditable. Guidelines are issued, legal processes must be followed, and mystery-shopping helplines exist for when the buying process is considered unfair. Yet despite the intent to use public money appropriately, and the effort put into conducting purchasing in a fair and structured way, there is a major and fundamental flaw.

We think all of the above is well and good; however, it is being undermined by the selection process, or, more specifically, by scoring and question weighting that is not conducted correctly, which can lead to the wrong buying decisions being made.

Back in 2005 we wrote the article 'It's worth weighting for! How purchasers are making incorrect decisions.' Recently we were made aware of some of the government's scoring and question-weighting practices, which prompted us to revisit that article and test whether what we said then still holds true. The article examines the main, and recurring, error made across the procurement profession: assigning weights to criteria based on their absolute importance to the purchaser.
The aim of any request for proposal (RFP) is to compare the relative suitability of suppliers against a set of weighted criteria and select the best supplier using an objective, systematic approach. But here is the problem: before the weighting process begins the procurement team should be asking themselves the question 'How big is the difference between suppliers, and how much do we care about that difference?' Decision analysis practitioners refer to this technique as swing weighting. This process takes into account the value added in each criterion between the best-performing and worst-performing supplier for that individual criterion. The weight is assigned according to the value added, rather than the absolute importance. The concept of value contribution of a given criterion is a critical factor in the world of multi-criteria decision making.
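
To make the idea concrete, here is a minimal sketch of swing weighting with purely hypothetical values; the criteria and scales are illustrative only, not a prescribed method.

    # Swing weighting, illustrated with made-up values: each criterion is
    # weighted by how much the swing from the worst bid to the best bid on
    # that criterion is worth to us, not by its abstract importance.

    # Suppose the technical submissions range from barely adequate to
    # excellent, while every price is within a few percent of the others.
    swing_value = {
        "technical_fit": 100,  # a big, decision-relevant gap between bids
        "service": 40,
        "price": 20,           # prices are nearly identical, so the swing is worth little
    }

    total = sum(swing_value.values())
    weights = {criterion: value / total for criterion, value in swing_value.items()}
    print(weights)
    # {'technical_fit': 0.625, 'service': 0.25, 'price': 0.125}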

Getting the weighting wrong

To illustrate the issue, consider the following example:

Let's say you are running an RFP and, to drive the selection process, two key areas are being considered: technical ability to deliver against your specification, and price. Technical ability is important to you, so you've decided to weight it at 60% of the total score; price, of course, is always important (I'll come back to this later), so it takes the remaining 40%.

For the price element you decide to award the bid offering the best price the full marks available, with the other bidders' prices allocated scores relative to their position against the best price. To make your life easier, any bid more than two standard deviations from the best price gets a score of zero.

The challenge with this example is that the price element, although 'only' 40%, actually plays a much bigger part in the decision-making process. Because the remaining 60% has to cover all of the technical criteria, each sub-element ends up with quite a small percentage. Think about it: within those technical criteria you need to cover items such as quality, service, technical fit, team and account management, CSR, compliance with standards and so on. Because the sub-elements are small, the relative difference between bidders on each one is tiny, so a company with a high score on technical fit and quality may still only score a few percent above an organisation with a low score.

Compare this with price: because it is set at 40%, any price differences cause a large difference in the percentage score, making price the key criterion around which the decision is made. Ironically, these price differences can be relatively small, but under the weightings they become large percentages and, when added to the other criteria, become the dominant factor around which the decision is made (or swings, hence 'swing weighting').
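
To put numbers on this, here is a minimal worked example with entirely hypothetical bids, using one reasonable reading of the price-scoring method described above (best price gets full marks, anything more than two standard deviations above it gets zero, the rest pro-rated in between):

    import statistics

    # Technical ability: 60%, scored 0-5 by evaluators (think of it as six
    # sub-criteria of 10% each); price: 40%.
    bids = {
        # supplier: (average technical score out of 5, price)
        "A": (4.5, 520_000),  # strongest technical bid, 4% dearer than the cheapest
        "B": (3.5, 500_000),  # weakest technical bid, cheapest price
        "C": (4.0, 510_000),
        "D": (4.0, 505_000),
    }

    prices = [price for _, price in bids.values()]
    best_price = min(prices)
    cutoff = best_price + 2 * statistics.pstdev(prices)  # zero-score threshold

    def score(avg_technical, price):
        technical = (avg_technical / 5) * 60                             # out of 60
        if price > cutoff:
            price_score = 0.0
        else:
            price_score = 40 * (cutoff - price) / (cutoff - best_price)  # out of 40
        return technical, price_score

    for name, (tech, price) in bids.items():
        t, p = score(tech, price)
        print(f"{name}: technical {t:.1f} + price {p:.1f} = {t + p:.1f}")

    # A: technical 54.0 + price 0.0 = 54.0   <- strongest bid, zeroed on price
    # B: technical 42.0 + price 40.0 = 82.0  <- weakest bid wins comfortably
    # C: technical 48.0 + price 13.0 = 61.0
    # D: technical 48.0 + price 26.5 = 74.5

Supplier A is only 4% more expensive than supplier B yet finishes last, while the weakest technical bid wins comfortably; that is the dominance effect described above.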

Compounding the problem

Getting the weighting right is only part of the challenge of objectively making the best decision. We have become aware of a government practice that allocates a bidder a ‘zero’ score for price if their pricing is more than two standard deviations from the mean (average) pricing of all the bidders. This causes the following problems:

  • The average can easily be significantly skewed by a bidder offering a very low price.
  • If the difference in pricing between suppliers is small, good suppliers may be excluded on very small amounts of money.
  • It puts even more bias on price.

Above all, it's lazy: it doesn't take a 'should cost' figure into consideration, meaning that as buyers we haven't taken the time to work out what the product or service should cost and to build our scoring model around that. Unless a supplier has failed to submit pricing with their bid, no one should receive a zero score for price.
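
To show how arbitrary this rule can be, here is a rough sketch with made-up prices, treating 'more than two standard deviations from the mean' as symmetric around the mean (one reading of the rule). A bid a few percent dearer than a tight cluster is zeroed, yet the same bid survives when an unrelated low-ball bid enters the mix, because the threshold depends on who else happens to bid rather than on what the service should cost.

    import statistics

    def zeroed(prices, threshold_sds=2):
        """Return the bids that would score zero under the rule."""
        mean = statistics.mean(prices)
        sd = statistics.pstdev(prices)
        return [p for p in prices if abs(p - mean) > threshold_sds * sd]

    # Five bids tightly clustered, plus one about 6% dearer than the cluster.
    cluster = [498_000, 499_000, 500_000, 501_000, 502_000, 530_000]
    print(zeroed(cluster))               # [530000] - zeroed for being 28k dearer than the next bid

    # Add an unrelated low-ball bid: it inflates the standard deviation,
    # so the 530k bid now survives and the low-ball bid is the one zeroed.
    print(zeroed(cluster + [450_000]))   # [450000]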

We've long been urging procurement to look at how we run RFPs, or even to challenge the need for using them at all (see our article 'Is the RFP the wrong tool for the job?'), but we do think that if an RFP is going to be used, then at the very least it must be fair, unbiased and part of an auditable process that is focused on making the right decision for the business.

From what we have seen, we believe government purchasing teams need to be trained on how to weight and score RFPs correctly. If this practice doesn't change, then the answer is pretty simple: to win work with government, forget about factors like technical ability, quality and service levels and just make sure you keep the price low. Nothing else is going to matter.

Voices (5)

  1. RJ:

    Excellent summary of the problem and, whilst exacerbated in public procurement by the need to determine and publish criteria in advance, the error is just as prevalent in private sector procurement.

    I have long been convinced of the fallacy created by scoring each individual question of an RFP with weighted values. Aside from really poor submissions this has a general tendency to conflate scoring so that the “winning” bid might score 74.7%, while numbers 2 and 3 score 74.3% and 73.8% despite being obviously poorer submissions.

    Far better, as b+t points out, to work out your qualification criteria first, score those on a pass/fail, identify the key differentiating factors in a bid at a reasonably high level and score on a does not meet/meets/exceeds requirements basis and keep the cost equation completely separate. This then prompts a discussion around issues like “is an extra 2% on quality worth the extra £150k cost?”

    …and don’t even get me started on how erroneous the scoring can work out (or how it simply tends to be manipulated) when you try to apply it to less tangible categories like professional services or marketing!

  2. bitter and twisted:

    Instead of assigning a score to a price, you should assign a price to the non-price components. And if you can't assign a £ value, what's the point?

    I mean: you're tendering for the canteen – surely, Spumco Catering's commitment to dolphin-friendly tuna is either a) worth an extra x pence per person per day, or b) priceless and should be part of the qualification criteria, or c) worthless and shouldn't be there at all.

  3. Dan:

    “before the weighting process begins the procurement team should be asking themselves the question ‘How big is the difference between suppliers, and how much do we care about that difference?’ ”

    Except in public procurement the criteria and weightings have to be decided right at the start before you even invite expressions of interest, not when you’ve received the bids.

    I have doubts about your maths as well, although I'll quite happily admit that it's not my strongest subject and that I could just be being a bit thick!

    I accept that having lots of smaller sub-criteria dilutes the scores and means that the score difference between suppliers can be very small for each sub-criterion. However, surely those small differences add up to a large difference over all the sub-criteria? It only changes where you would score the one criterion weighted at 60% differently from the six sub-criteria of 10% each.

    As far as price goes, the standard differential model means that a small difference in price leads to a small difference in score. That only changes where you use a method such as scoring the highest price zero, and allocating the remaining prices on that scale in relation to the highest and lowest prices – a methodology that has been discredited and should never be used.
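
    (For illustration only, a quick sketch of the contrast with made-up prices and a 40% price weight; the formulae are just one common way of expressing each method:)

        # (a) standard differential model: score = weight * lowest / price
        # (b) discredited scale: lowest price = full marks, highest = zero,
        #     everyone else pro-rated between the two.
        prices = {"X": 500_000, "Y": 505_000, "Z": 510_000}
        weight = 40
        lowest, highest = min(prices.values()), max(prices.values())

        for name, p in prices.items():
            differential = weight * lowest / p
            scaled = weight * (highest - p) / (highest - lowest)
            print(f"{name}: (a) {differential:.1f}  (b) {scaled:.1f}")

        # X: (a) 40.0  (b) 40.0
        # Y: (a) 39.6  (b) 20.0
        # Z: (a) 39.2  (b) 0.0
        # A 2% price gap costs 0.8 points under (a) but all 40 points under (b).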

    Like I said, I could be misunderstanding this totally, and would love to see a more robust example to show your working. I know Peter has his doubts about the current methods used for evaluating tenders!

    1. Alan:

      Dan, thanks for the comments.

      I agree with the comment about when the weighting should be assigned; in fact, in all good procurement practice the weighting should be decided up front. The article was about how the weighting was scored and the maths behind it.

      I hope you don’t mind me saying, but a lot of the points you are making further highlight the problem. The small weighting and scoring of sub-components means that what may ‘swing’ the decision gets drowned out by lots of other small scores, or gets incorrectly overshadowed by a large weighted item such as price, which is what has happened in the example I gave.

      There is a need for education on how to weight and score RFPs to avoid poor practice and, ultimately, poor decision-making. I’m surprised one of the large government outsourcing organisations hasn’t challenged this yet, but maybe that will provide the impetus for government to change.

  4. bitter and twisted:

    How about an internal auction to find out the real value of technical excellence?
