
How to Score Higher on UK FM Tenders

A practical guide from TenderStandard UK · 10 minute read

Most UK facilities management bids sit at 3 out of 5. Not bad. Not good enough to win.

The meaningful gap is not between 3 and 5. It is between 3 and 4. A 3 response is well-intentioned and generic. A 4 response is the same content, tightened so every claim has a named owner, a number, or a case study behind it. That single-point jump is where most SMEs win or lose the contract.

How UK FM quality scores actually translate

Scoring schemes vary across buyers, but the descriptors below reflect the common pattern across UK public sector FM tenders.

Score: what it actually means

5/5: Specific, evidenced, contract-tailored, low risk. The evaluator can point to a named role, a named outcome and a named case study for every major claim.
4/5: Strong but missing one element, usually evidence (no case study) or tailoring (reads slightly generic). A one-paragraph addition often closes the gap.
3/5: Generic, or only partially answers the question. Nothing wrong, nothing memorable. Most bids sit here.
2/5: Weak understanding of the requirement, or compliant-looking prose with no substance behind it.
1/5: Non-compliant, or fails to engage with the question as asked.
0/5: Not attempted or unintelligible.

The seven downgrades evaluators apply most

1. The response does not directly answer the question

A response that talks generally about FM delivery, but does not answer the specific question asked, drops to "partially meets" regardless of how well the prose reads. Evaluators mark against the question, not the essay.

Before: "Our approach to facilities management is built on strong leadership, robust processes and a customer-first ethos."

After: "You asked how we will mobilise three hospital sites over 12 weeks. Our approach is built around three phases: groundwork, baseline-building and stabilisation. Each phase has a named owner and measurable exit criteria."

2. Nothing in the answer is specific to this contract

If the response could be submitted for a different contract unchanged, it is a templated answer. Evaluators recognise this and score accordingly.

3. No named roles, no measurable outcomes

"A dedicated team" is a phrase used by people who have not done the work. Named roles (Contract Manager, Compliance Lead, Mobilisation Director) signal experience. Measurable outcomes signal seriousness.

4. Claims without evidence

"Industry-leading," "best-in-class" and "proven track record" all sound confident. None can be verified without supporting evidence. Unverified superlatives do not score; experienced evaluators filter them out.

5. Commitments that hedge

"We will aim to," "we strive to" and "where possible" all read as commitments the writer is unwilling to make. Evaluators want a real commitment or a clear explanation of the constraint.

6. Nothing about risk

A response that reads like nothing could go wrong is a response written by someone who has not run the contract. Two or three realistic risks, with specific mitigations, signals operational experience and raises the score.

7. No contract-specific tailoring of method statements

Even when the methodology is sound, if the method statement is not tied to the specific sites, scopes or user groups in the ITT, it reads as generic.

The 3 to 5 Framework: seven moves

Seven moves, applied in order, lift a competent response to a strong one. Most responses benefit from all seven; most teams currently apply only three or four.

  1. Answer the question in the first three sentences. The evaluator already knows you do FM. They want to know whether you understand this question.
  2. Mirror the client's wording. If the ITT says "planned preventive maintenance," do not write "scheduled servicing." It is unglamorous, and it works.
  3. Name the client and the scope explicitly. The client's name should appear more than once, early. The scope (sites, GIA, service types) should be referenced in the opening paragraph.
  4. Name every role. Every commitment, every "we will," has a named role behind it.
  5. Attach a number or a frequency to every action. "Daily client stand-up during the first 90 days." "Monthly compliance report." "PPM compliance target 98%." Numbers signal delivery thinking, not writing.
  6. Include one measurable outcome from a comparable contract. One credible reference to past delivery, with a number and an optional referee, is worth more than three paragraphs of self-description.
  7. Remove every unverifiable claim. Search the draft for "leading," "best-in-class," "unrivalled," "proven." Cut all unless tied to specific evidence.
Applied together, the moves compound: take one competent response, apply all seven end-to-end, and compare before and after. Most teams find the response gets shorter and stronger. Short and specific almost always beats long and generic in UK FM scoring.

A quick pre-submission check

Before the red-team review, run this one-hour pass. It rarely fails to shift a 3 into a 4:

  • Is the tender question quoted at the top of the response?
  • Is the client named at least twice?
  • Is every "we will" backed by a number, frequency or named outcome?
  • Is every major commitment owned by a named role?
  • Is at least one case study referenced with a measurable result?
  • Has every "industry-leading," "world-class," "best-in-class" been cut?
  • Has the response been mapped against the quoted scoring criteria?
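The mechanical items on this list can be automated before the manual read-through. A minimal sketch in Python, assuming the draft is available as plain text; the phrase lists and the `check_draft` name are illustrative, not part of any standard tool, and the script only flags candidates for a human to judge:

```python
import re

# Phrases the checklist targets: unverifiable superlatives and hedged commitments.
SUPERLATIVES = ["industry-leading", "world-class", "best-in-class",
                "unrivalled", "proven track record"]
HEDGES = ["we will aim to", "we strive to", "where possible"]

def check_draft(text, client_name):
    """Run the mechanical pre-submission checks on a draft response.

    Returns a dict of findings. Human judgement still covers the rest:
    named roles, case studies, and mapping to the scoring criteria.
    """
    lower = text.lower()
    findings = {
        "client_mentions": lower.count(client_name.lower()),
        "superlatives": [p for p in SUPERLATIVES if p in lower],
        "hedges": [p for p in HEDGES if p in lower],
        # "We will" sentences with no number nearby deserve a manual look.
        "unquantified_we_will": [
            s.strip() for s in re.split(r"(?<=[.!?])\s+", text)
            if "we will" in s.lower() and not re.search(r"\d", s)
        ],
    }
    findings["client_named_twice"] = findings["client_mentions"] >= 2
    return findings

draft = ("NHS Trust sites will be mobilised in 12 weeks. "
         "We will aim to deliver industry-leading service. "
         "NHS Trust stakeholders receive a monthly compliance report.")
print(check_draft(draft, "NHS Trust"))
```

Running it on the example draft flags "industry-leading," "we will aim to," and one unquantified "we will" sentence, while confirming the client is named twice. The goal is not to replace the red team, only to clear the obvious failures before the one-hour pass starts.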

Every framework in this guide, as a working toolkit

The UK FM Tender Toolkit packages the 3 to 5 Framework, the 1-hour improver checklist and a worked mobilisation example into a single editable bundle for UK FM SMEs.

Get the toolkit, £129