The NZTA’s Procurement Manual, together with various guides available from MBIE, is now firmly established as the benchmark for local government procurement. Many councils have opted to use the NZTA’s procedures even in areas unrelated to NZTA-funded transport projects (for which the procedures are mandatory).
A series of workshops for qualified evaluators has highlighted the need for transport evaluators to be involved in putting together the RFx document, as well as ensuring the evaluation procedures are followed carefully. The workshops also provided a welcome forum for experienced evaluators to discuss and share ways that tendering can be improved.
Confused responses take more time to evaluate
Tender evaluators are frustrated by responses that make it hard to differentiate between bidders.
Too often, tender responses are badly structured, questions are not answered, or information is not covered in the right sequence. Responses are often cut and pasted from previous bids, so the information received is not tailored to the requirements (and, if not carefully checked, it may carry forward irrelevant material or mistakes).
This confusion makes it extremely difficult for the Tender Evaluation Team (TET) to find the information they need to score the bid responses.
As a result, scores for competing bids may vary widely, and a lot of time is spent reconciling the different opinions of TET members to reach final agreed scores. Although the scoring framework within the NZTA Procurement Manual helps, deciding what constitutes a ‘good’ response can be quite subjective.
Mistakes in price schedules are also an issue. While these are the responsibility of the bidders, and the process to ‘confirm or withdraw’ is clearly defined, they still add to the workload of the evaluators. Ultimately, nobody benefits from these errors.
These issues – along with feedback from contractors that they wanted a simpler process for responding to tenders – prompted one council to commission Clever Buying to develop a new approach to RFTs. The approach is best suited to smaller, relatively straightforward projects, and it has had spectacular success in cutting down the time and hassle of tender evaluation. Here’s how it works.
Response Templates provide a consistent document layout, making it easy for evaluators to compare responses: all the answers sit directly below the questions, and they are all presented in a similar format.
Tenderers also find them far less confusing, with less opportunity for the evaluation criteria and the questions to become misaligned. Clear instructions on what information is required result in faster, fairer and more consistent scoring of the responses.
“The differences between the old evaluation process and using the new tools were extraordinary” reported a transport evaluator. “Having the responses in similar format and sequence meant my job to compare and score the responses was halved. I no longer had to search for the information I needed, and the responses were a better match to the questions asked.”
‘Anchored’ reference benchmarks for scoring
Another valuable tool for cutting evaluation time is a series of ‘anchored scales’ for scoring. These tightly define what type of response earns the marks within a particular range for each of the Non-Price Attribute categories.
The result? A clear, agreed guide for all TET members on the range of scores they should allocate for each type of response. Armed with this guide, evaluators can score the responses consistently, and far more quickly. If the scales are developed before the responses are received (i.e. before it is known who will tender), they also provide significant protection against actual or perceived conflicts of interest: because they are fact-based, it is difficult to introduce personal bias for or against a tenderer.
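To make the idea concrete, here is a minimal sketch of how an anchored scale might be captured in software. The category name, score bands and anchor descriptions are purely illustrative – real scales are agreed by the TET before responses are received, and the NZTA framework defines the actual attribute categories.

```python
# Illustrative anchored scale: each Non-Price Attribute category maps to
# score bands, each band 'anchored' to a factual description of the kind
# of response that earns it. All names and bands below are hypothetical.
ANCHORED_SCALES = {
    "Relevant Experience": [
        ((80, 100), "Directly comparable projects at similar scale, with evidenced outcomes"),
        ((60, 79),  "Similar projects, outcomes only partially evidenced"),
        ((40, 59),  "Related experience only, limited supporting evidence"),
        ((0, 39),   "Little or no relevant experience demonstrated"),
    ],
}

def band_for_score(category: str, score: int) -> str:
    """Return the anchor description whose band contains the given score."""
    for (low, high), description in ANCHORED_SCALES[category]:
        if low <= score <= high:
            return description
    raise ValueError(f"Score {score} is outside the defined bands for {category}")
```

Because each band is tied to a factual description rather than an evaluator’s impression, two TET members scoring the same response should land in the same band, which is what collapses the reconciliation time.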
“For the first time, we had little to talk about when we brought our scores together” explains one experienced evaluator. “Naturally, we don’t always agree on everything, but in general, our differences are easily resolved by the scales. The whole process now takes a fraction of the time it used to, with far fewer arguments about the relative merits of the responses.”
Price schedules made easy
A third recommendation is the use of locked pricing schedules. Like the questionnaire templates, these allow bidders to enter their rates against scheduled items only in designated cells, with space for clarification where needed; the scheduled items and formulas themselves cannot be altered.
The totals are self-calculating, and – to eliminate confusion – the contract specifications for each item are embedded as comments in the schedule, so respondents are reminded of the specifications as they enter their rate for each item.
That way, the schedules are presented uniformly in Excel and calculation errors are eliminated. Comparison between bidders is also much easier, because the completed spreadsheets can be unlocked and aggregated to show clearly where the differences in pricing lie.
These schedules were far easier for bidders to use, and resulted in fewer mistakes.
What are the savings?
This approach brought major savings: estimated tender evaluation time fell to one-third of its previous level. Although some time was spent designing the right questions, that effort should arguably be part of any robust procurement process.
For most procurement specialists, cutting even 30% off the evaluation time using this approach would be well worth the relatively small investment needed to set it up.
To find out more about the tools described in this article, contact email@example.com or phone us on 0800 225 005.