A strange 100 metre race
Imagine you are a passionate runner and manage to cross the finish line first in a race. This is not just any race; you have been looking forward to it for a while, trained and worked for it. Just as you start celebrating your victory, you learn that one of the other runners was doped and has been disqualified. Why would that affect me, you might think – but you are promptly informed that you are now second: someone else has won the race.
Wait… what? To be fair, this sounds absurd. However, something similar can occur under a certain evaluation methodology for Procurement tenders, one that is particularly popular with public-sector organisations.
The ‘point system’ evaluation methodology
To illustrate this, let us use the imaginary example of a buyer (whom we will call Bryan), whose task it is to choose a supplier that provides the entire vehicle fleet for his organisation over the next four years.
He therefore diligently lists all relevant awarding criteria, such as the quoted vehicle price, the technical specifications, maintenance cost, sustainability, and so on. To ensure that every criterion is reflected appropriately in the awarding decision, he weights the criteria accordingly:
In this example, the quality scores are a result of an evaluation in which each supplier can get a maximum total score of 60 points.
Moreover, to trade off cost and quality considerations, he also assigns a score to each supplier's cost, calculated as follows:

Cost score = 40 x lowest total cost among all bids / supplier's total cost

This means that the supplier with the lowest total cost receives the maximum score of 40, whereas every other supplier receives a lower score depending on how their total cost relates to that of the cheapest supplier. For example, a bidder quoting twice the cost of the cheapest supplier would receive a score of 20.
Initial ranking
In our example, after evaluating the submitted quality and cost figures, the resulting ranking is as follows:
As we can see, C offers the lowest cost of £225m, thus receiving a full cost score of 40. Supplier A’s score is 40 x 225/300 = 30, and Supplier B’s score is 40 x 225/600 = 15.
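For the technically inclined, this scoring rule can be expressed in a few lines of Python. The bid amounts are the ones from our example; the code itself is merely an illustrative sketch:

```python
# Point-system cost score: 40 * (lowest bid / supplier's bid)
bids = {"A": 300, "B": 600, "C": 225}  # total cost per supplier, in £m

def cost_scores(bids):
    cheapest = min(bids.values())  # £225m, offered by Supplier C
    return {s: 40 * cheapest / cost for s, cost in bids.items()}

print(cost_scores(bids))  # {'A': 30.0, 'B': 15.0, 'C': 40.0}
```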
Based on this comprehensive evaluation matrix, Supplier B provides the best offer. Therefore, his task seemingly done, Bryan calls B's key account manager and congratulates them on being the new vehicle fleet supplier, promising to draft and send over the contract within the next few days*.
Updates on an ‘irrelevant bidder’
However, the next day Bryan receives a call from the programme manager: the engineers have found major security issues in the vehicles Supplier C offered, which means that Supplier C must not be awarded any contract for now. Bryan smiles and reassures the programme manager that, based on the evaluation, he has already chosen Supplier B as the upcoming supplier. The recent findings about C's vehicles are therefore irrelevant.
Nonetheless, to keep the records clean, after the call Bryan excludes C from the evaluation matrix. As C is the cheapest supplier, Bryan realises that C’s exclusion actually impacts the cost scores of A and B:
With Supplier A now being the cheapest, they receive the full cost score of 40. Accordingly, B's cost score is 40 x 300/600 = 20. Suddenly, Supplier A ranks above B, overturning the earlier outcome: it turns out Bryan should have congratulated not B, but A. Bryan feels uneasy, knowing this will now cause a problem.
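To make the reversal concrete, here is a small Python sketch. The quality scores below are our own assumed values – the example above does not state them – chosen only so that B wins the full field, exactly as in the story:

```python
# Point-system totals, with and without Supplier C.
quality = {"A": 40, "B": 57, "C": 28}     # out of 60 -- assumed values
bids    = {"A": 300, "B": 600, "C": 225}  # total cost in £m -- from the example

def ranking(suppliers):
    cheapest = min(bids[s] for s in suppliers)
    totals = {s: quality[s] + 40 * cheapest / bids[s] for s in suppliers}
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

print(ranking(["A", "B", "C"]))  # [('B', 72.0), ('A', 70.0), ('C', 68.0)] -> B wins
print(ranking(["A", "B"]))       # [('A', 80.0), ('B', 77.0)] -> now A wins
```

Excluding C changes the benchmark cost, re-scales A's and B's cost scores, and flips the ranking – even though nothing about A's or B's offers has changed.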
Analysis
Let us take a step back to understand what went wrong in this awarding process. While summarising all the criteria in one decision figure is necessary to find the best supplier, the issue is that Bryan used scores that depend on the suppliers' performance relative to one another. The absolute difference between A's and B's scores changed (from 30-15=15 to 40-20=20) when the seemingly ‘irrelevant supplier’ was excluded. In decision theory, this is known as a violation of the independence of irrelevant alternatives.
While our example was fictitious, the flaw it exposes is a property inherent to the chosen methodology. Such cases are therefore not unrealistic – they can and do happen** – and should be avoided.
A better way: Bonus/Penalty
Is there an alternative way? We strongly believe so. Our go-to evaluation methodology for Procurement tenders is the so-called “Bonus/Penalty” methodology.
The key idea of Bonus/Penalty is to allow for a robust comparison of all suppliers by assigning a “price tag” to every non-commercial awarding criterion. The sum of those price tags, together with the commercial business case, then provides a figure that represents the total value of ownership (TVO) of each supplier. This figure allows you to compare all suppliers on a single price, which is why, fittingly, we call it the “Comparison Price”.
Each Bonus or Penalty figure is derived from (i) the importance of an awarding criterion and (ii) a supplier's performance on that criterion. While creating such figures may not sound like an easy task, Bryan had to overcome a similar difficulty in his scoring matrix above: there, too, he had to determine what percentage each criterion should contribute to the awarding decision, and he had to evaluate each supplier.
Accordingly, using the Bonus/Penalty evaluation methodology, the evaluation results in our example could look like this:
Note that each Bonus/Penalty figure reflects a “willingness-to-pay” for the criterion in question. To illustrate this, note that Supplier B receives a total Penalty of £320m (£120m for Criterion 1 and £200m for Criterion 2), whereas C receives a total Penalty of £720m (£220m + £500m). This means that Supplier C would need to be more than £400m cheaper than B for Bryan's organisation to prefer C, given B's stronger non-commercial performance.
As we can see, the Comparison Price summarises the cost and the monetary figures for all other criteria. The supplier with the lowest Comparison Price wins, as they provide the overall best offer. Note that the Comparison Price serves only to compare the offers; the cost actually paid to Supplier B is still £600m.
In this methodology, each supplier is evaluated “on their own merit”, i.e. independently of the others. This means that disqualifying an irrelevant bidder such as C in our example has no impact on the scores and ranking of the remaining bidders.
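The following Python sketch illustrates both points. B's and C's penalty totals (£320m and £720m) come from our example; Supplier A's total Penalty of £650m is an assumed value, chosen only so that B ends up with the lowest Comparison Price, as described above:

```python
# Comparison Price = total cost + total Penalty (minus any Bonus).
cost    = {"A": 300, "B": 600, "C": 225}  # £m -- from the example
penalty = {"A": 650, "B": 320, "C": 720}  # £m -- A's value is assumed

def comparison_prices(suppliers):
    return {s: cost[s] + penalty[s] for s in suppliers}

print(comparison_prices(["A", "B", "C"]))
# {'A': 950, 'B': 920, 'C': 945} -> B has the lowest Comparison Price

# Excluding C leaves A's and B's figures untouched: each Comparison
# Price depends only on that supplier's own cost and evaluation.
print(comparison_prices(["A", "B"]))
# {'A': 950, 'B': 920} -> B still wins

# The penalty gap also confirms the willingness-to-pay reading:
# 720 - 320 = 400, so C would need to be more than £400m cheaper
# than B before C's Comparison Price dropped below B's.
```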
We advocate the Bonus/Penalty methodology for many other reasons as well, but discussing them all would go beyond the scope of this article.
Conclusion
As simple as the ‘point system’ sounds at first glance – if you want to avoid strange ‘100 metre races’ like the one in our illustration at the start, you had better use an alternative methodology!
*We have described the story in this way to illustrate our point as simply as possible for the casual reader. If this happened in a public tender, a more realistic scenario would be that Bryan publishes the tender result, Supplier A challenges it, and then all hell breaks loose…
**For example, it is quite common for authorities to include clauses excluding from the evaluation any supplier who hands in an “abnormally low bid”. What constitutes “abnormally low” can be debatable, so such scenarios have a fair chance of ending up in court.