RFx Scoring.
Consistent, fast, defensible.
Leah ingests every supplier response, scores against your weighted criteria, normalizes across evaluators, surfaces compliance gaps, and drafts a defensible recommendation memo with full audit trail.
Scoring supplier responses still runs on spreadsheets and goodwill.
Scoring spreadsheets rebuilt for every event
Every RFx kicks off the same ritual. A new comparison workbook, hand-built from the last one, with formulas that break the moment a supplier answers in a different format. Institutional memory walks out the door with each sourcing analyst.
Missing terms surface late
Suppliers leave critical clauses blank, attach side letters, or quietly redline standard terms. The team catches it during legal review weeks later, after shortlists are already shared with stakeholders.
Evaluator bias drifts
One evaluator scores generously, another harshly. Anchoring effects pull scores toward the first response read. Without normalization, the highest-scored supplier often reflects who reviewed them, not who is strongest.
Comparison tables inconsistent
Suppliers respond in their own templates, units, and structures. Building an apples-to-apples comparison eats days of analyst time, and inconsistencies still slip through into stakeholder reviews.
Negotiation leverage missed
Pricing outliers, capability gaps, and term concessions across the response set are the raw material for the next round of negotiation. Buried in spreadsheets, that leverage never makes it into the negotiation playbook.
Audit trail fragmented
Scores live in one workbook, comments in email, weighting decisions in a presentation deck, the final memo in a Word document. When procurement defends an award decision, the trail has to be reconstructed by hand.
Every response, normalized into one comparable structure
Leah ingests supplier responses in whatever format they arrive. Filled templates, freeform PDFs, spreadsheets, side letters, attachments. She extracts every answer, maps it to the original RFx question, normalizes units and currencies, and structures the full response set as comparable data the moment the deadline closes.
“We had suppliers responding in seven different templates. Leah turned it into one comparable dataset before our analyst even opened the responses.”
Sourcing Analyst, Industrial Manufacturer
Five steps from response deadline to defensible award
Leah operates on top of your existing sourcing platform. No rip and replace. Value from the first scored event.
Receive Responses
Suppliers submit responses through your sourcing platform, email, or vendor portal. Leah ingests them in any format and acknowledges receipt against the participation list.
Normalize
Every answer is mapped back to the originating RFx question. Pricing, units, and lead times are converted to a common basis. Side letters and attachments are linked to the relevant clauses.
Apply Scoring
Your weighted rubric is applied consistently across responses. Multiple evaluators score in parallel and Leah normalizes for evaluator drift before producing the composite.
Identify Gaps
Mandatory terms, certifications, and compliance requirements are checked against every response. Missing, non-conforming, and exception items are flagged for resolution.
Generate Recommendation
A defensible recommendation memo is drafted from the scored data, with negotiation leverage extracted and a complete audit trail behind every conclusion.
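The Normalize step above comes down to putting every quote on a common basis before comparison. A minimal sketch of that idea, with exchange rates, field names, and supplier data invented for illustration (this is not Leah's API):

```python
from dataclasses import dataclass

# Illustrative exchange rates -- an assumption for this sketch, not live data.
FX_TO_USD = {"USD": 1.0, "EUR": 1.08, "GBP": 1.27}

@dataclass
class Quote:
    supplier: str
    price: float      # price as quoted by the supplier
    currency: str     # quoted currency
    pack_size: int    # units per pack, as quoted

def normalize_unit_price(q: Quote) -> float:
    """Convert a quote to USD price per single unit."""
    usd_price = q.price * FX_TO_USD[q.currency]
    return usd_price / q.pack_size

# Two suppliers quoting in different currencies and pack sizes
# become directly comparable on a per-unit USD basis.
quotes = [
    Quote("Supplier A", 120.0, "EUR", 100),  # EUR 120 per 100-pack
    Quote("Supplier B", 1.10, "USD", 1),     # USD 1.10 per unit
]
comparable = {q.supplier: round(normalize_unit_price(q), 4) for q in quotes}
```

The same pattern extends to lead times, payment terms, and any other field quoted on a supplier-specific basis.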
Got Questions? Get Answers.
How is Leah different from our sourcing platform?
Sourcing platforms collect responses and host the scoring spreadsheet. They do not read the responses, normalize across formats, surface compliance gaps, or draft the recommendation memo. Leah operates as the intelligence layer on top of your sourcing platform. She reads every response, applies your rubric consistently, normalizes evaluator drift, flags compliance gaps with the source passage, and produces the recommendation memo with full audit trail.
Do we need to replace our existing sourcing platform?
No. Leah operates on top of your existing sourcing platform, ERP, and contract repository. Responses continue to be received where they are received today. Leah ingests them, runs scoring and gap analysis, and writes recommendations and audit events back into your system of record. There is no rip-and-replace.
What happens when suppliers respond in different formats?
Leah parses responses regardless of format. PDFs, Word documents, supplier templates, freeform attachments, and side letters are all extracted and mapped back to the originating RFx question. Where confidence is below threshold, the question is flagged for analyst review before scoring. Most format variance is handled without human intervention.
How does Leah handle multiple evaluators?
Each evaluator scores in their own pane against the rubric. Leah tracks scoring patterns over time and normalizes for harshness, leniency, and anchoring drift before producing the composite score. Where evaluators disagree significantly on a given question, Leah flags it for calibration before the scoring window closes, so disagreements are resolved on evidence rather than averaged away.
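One standard way to correct for harsh and lenient reviewers is to re-scale each evaluator's raw scores to z-scores before averaging. The sketch below shows that technique under invented data; it illustrates the general approach, not Leah's actual model:

```python
from statistics import mean, pstdev

def zscore(scores: dict) -> dict:
    """Re-scale one evaluator's scores to zero mean and unit spread."""
    mu, sigma = mean(scores.values()), pstdev(scores.values())
    if sigma == 0:
        return {k: 0.0 for k in scores}  # evaluator gave identical scores
    return {k: (v - mu) / sigma for k, v in scores.items()}

def composite(raw_by_evaluator: dict) -> dict:
    """{evaluator: {supplier: raw score}} -> {supplier: composite score}."""
    normalized = [zscore(scores) for scores in raw_by_evaluator.values()]
    suppliers = normalized[0].keys()
    return {s: mean(z[s] for z in normalized) for s in suppliers}

# A harsh and a lenient evaluator disagree on absolute scores
# but agree on the ranking; normalization recovers that ranking.
raw = {
    "harsh":   {"A": 3, "B": 2, "C": 1},
    "lenient": {"A": 9, "B": 8, "C": 7},
}
ranked = composite(raw)
```

After normalization, both evaluators contribute equally to the composite even though their raw averages differ by six points.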
Can we use our own scoring rubric?
Yes. Your weighted rubrics are configured per category, per event, or per business unit. Leah applies the rubric you define. There is no forced model. As your scoring approach evolves, the rubric is updated centrally rather than rebuilt in every workbook.
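A centrally configured weighted rubric can be as simple as a set of criterion weights that sum to one, applied uniformly to every response. The criteria, weights, and scores below are invented for illustration:

```python
# Hypothetical rubric: weights per criterion, summing to 1.0.
RUBRIC = {"price": 0.4, "quality": 0.3, "lead_time": 0.2, "compliance": 0.1}

def weighted_score(criterion_scores: dict) -> float:
    """Apply the rubric weights to one supplier's per-criterion scores (0-10)."""
    assert abs(sum(RUBRIC.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(RUBRIC[c] * criterion_scores[c] for c in RUBRIC)

supplier = {"price": 7, "quality": 9, "lead_time": 6, "compliance": 10}
score = weighted_score(supplier)  # 0.4*7 + 0.3*9 + 0.2*6 + 0.1*10
```

Because the rubric lives in one place, changing a weight changes every future composite, which is the point of updating it centrally rather than rebuilding it in every workbook.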
How is our data protected?
Supplier responses, scoring data, and recommendation memos are encrypted in transit and at rest. Customer data does not train Leah's underlying models. Leah is aligned with SOC 2 Type II, GDPR, CCPA, and ISO 27001. Private instance deployment is available for customers with strict data isolation requirements. Audit events are immutable and timestamped.