Tenuous Link to World Cup Procurement Post – How to Score Tenders

OK, we apologise - we have re-scheduled a couple of this week’s articles so we can feature something about the World Cup. Well, if we can’t do that this week, then when can we?

Today, we’ll look at some issues around supplier selection - scoring the bids and tenders that we receive from potential suppliers.

England made it through the World Cup quarter-final the other day, as you will know unless you have been asleep all week. Jordan Pickford, the young England and ex-Sunderland goalkeeper (who went to school two miles away from my junior school) was outstanding, making three great saves, at least one of which was truly brilliant. He also passed the ball well, starting England’s attacks and, as far as I could see, playing a perfect game.

So with all the websites now giving scores to players, how did he do? Surely, this was a case of a clear-cut 10 out of 10? Really, how could he have been better? Only perhaps if he had run out of his goal, taken the ball past six Swedish players and crashed a swerving 30-yard shot into the top corner!

Realistically, this should have been a 10 out of 10, a 100% performance. No goalkeeper could have done better. But no, he was marked as a 9 or even an 8 by various websites. The BBC audience vote gave him a grudging average of 8.41.

This demonstrates two issues, which luckily for us have a read-across to procurement. Firstly, we hate giving anything we are assessing the maximum mark, and the larger the scale, the less likely we are to award it. So we might just about give a hotel 5 out of 5 on TripAdvisor, but I suspect we would never give anything 100 out of 100.

Secondly, we find it very difficult to distinguish between more than about 5 or 6 different levels or grades when we are marking most things subjectively. Maybe, just maybe, wine guru Robert Parker can tell the difference between a 93 out of 100 claret and a 94 out of 100 - but frankly I doubt it. Certainly, if you aren’t a super-deep expert in your subject, you are unlikely to be able to explain why the wine was an 8 rather than a 9, or indeed why Lingard was a 7 out of 10 and Dele Alli an 8.

So let’s bring this back to scoring tenders, or the questions within them to be precise. We’re assuming you are breaking down the evaluation into a number of evaluation criteria and questions relating to those. And if you are not doing that, then you need to look at some fundamental issues around your process, frankly.

Given the difficulty of differentiating, and the abhorrence people have when it comes to 100% marks, we strongly recommend that you use a scoring system with no more than 6 levels. Do that, and it is not too difficult for your markers to see an answer as a 5 out of 5 - a lot easier than 10 out of 10 or 100 out of 100, for sure. And if you use a scale like this, you can explain pretty clearly to internal stakeholders and the bidders how you made your scoring decision. You can make a pretty clear differentiation between a score of 3 and one of 2 or 4. Maybe like this.

Score - Description
5 - An excellent answer, indicating a response to this question that fully meets our needs and requirements, with no weaknesses or issues.
4 - A good answer, indicating a response to this question that generally meets our needs and requirements, with only very minor weaknesses or issues.
3 - A satisfactory answer, indicating a response to this question that meets our basic needs and requirements but which demonstrates tangible weaknesses or requires some minor compromises from our organisation.
2 - A poor answer, indicating a response to this question that fails to meet some of our basic needs and requirements, and which demonstrates significant weaknesses or requires major compromises from our organisation.
1 - A very poor answer, indicating a response to this question that fails to meet our basic needs and requirements, or requires an unacceptable compromise.
0 - No answer, or a totally irrelevant response.

You can tweak this of course - some buyers include an element of “added value” as a requirement for a 5 out of 5 - but the point is that the top mark should be achievable, not an unattainable ideal. And I have found in practice that six marking points are quite enough to differentiate between bids. But let’s face it, the football player ratings wouldn’t be as much fun if they were out of 5 rather than 10, even if it is a sub-optimal process!
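
To make the mechanics concrete, here is a minimal sketch of how a weighted evaluation on this 0-5 scale might work. To be clear, the criteria, weights and marks below are invented for illustration - every tender will have its own:

```python
# Illustrative sketch: criteria, weights and marks are invented examples.
# Each question is marked 0-5 per the table above, then weighted.

CRITERIA = {
    # question: weight (weights should sum to 1.0)
    "technical_approach": 0.40,
    "delivery_plan": 0.25,
    "quality_assurance": 0.20,
    "social_value": 0.15,
}

def weighted_score(marks: dict[str, int]) -> float:
    """Convert per-question 0-5 marks into a weighted percentage."""
    for question, mark in marks.items():
        if not 0 <= mark <= 5:
            raise ValueError(f"{question}: mark {mark} is outside the 0-5 scale")
    # Normalise each mark against the maximum (5), then apply its weight.
    return 100 * sum(weight * marks[q] / 5 for q, weight in CRITERIA.items())

bid = {
    "technical_approach": 4,
    "delivery_plan": 5,
    "quality_assurance": 3,
    "social_value": 2,
}
print(f"Weighted score: {weighted_score(bid):.1f}%")  # -> 75.0%
```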

Voices (3)

  1. bitter and twisted:

    I see a problem in your scale. Scores of 0/1/2 are unsatisfactory and the tender should be disqualified, not just scored low.

    1. Dan:

      What if the question is only worth a tiny amount of the overall score? Say 5%, and they scored 4/5 for the other 95%?

    2. Peter Smith:

      You can build that into the scoring, but 0/1/2 scores do not ALWAYS lead to disqualification - it depends on the question. “We reserve the right to eliminate any supplier who scores 0 or 1 on questions 1, 2, 4 and 6” is the sort of comment I might include. But not every question or evaluation criterion is absolutely mission critical. They might answer poorly on social value, for instance, in a public sector contract, but if every other part of their bid is excellent you would think “we can work with them to make sure they do more / better in that area once we are working together”.
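
For readers who want to see how that sort of conditional knockout might look in practice, here is a small, purely illustrative extension of the sketch above - which questions count as “critical”, and the threshold, are assumptions for the example:

```python
# Illustrative only: which questions are "critical" is a decision per tender.
KNOCKOUT_QUESTIONS = {"technical_approach", "delivery_plan"}
KNOCKOUT_THRESHOLD = 2  # a mark of 0 or 1 on a critical question eliminates

def is_eliminated(marks: dict[str, int]) -> bool:
    """True if any designated critical question scores below the threshold."""
    return any(marks[q] < KNOCKOUT_THRESHOLD for q in KNOCKOUT_QUESTIONS)

bid = {
    "technical_approach": 1,  # critical question marked 0 or 1: bid fails
    "delivery_plan": 5,
    "quality_assurance": 5,
    "social_value": 1,  # non-critical, so a low mark alone would not eliminate
}
print(is_eliminated(bid))  # -> True
```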
