Evaluation Phase
Introduction
The Evaluation Phase is where Value Engineering shifts from divergence to convergence. After generating a wide array of alternatives in the Creative Phase, the team must now screen, score, and rank ideas to identify those worth developing further. This phase is about discipline: applying transparent criteria, balancing technical and economic considerations, and ensuring that selections align with the project’s essential functions and constraints.
Without structured evaluation, teams risk bias, politics, or subjective preferences driving decisions. Done well, evaluation builds trust, secures stakeholder buy-in, and ensures that the best ideas advance to development.
Purpose of the Evaluation Phase
- Screen and rank ideas: Narrow the pool to a shortlist of viable alternatives.
- Make criteria explicit: Avoid subjective or shifting interpretations.
- Align with functions and constraints: Ensure selections serve the basic function and comply with requirements.
- Build stakeholder confidence: Transparent scoring fosters trust and consensus.
This phase answers: “Which ideas are worth developing, and why?”
Designing Evaluation Criteria
Effective evaluation depends on clear, agreed-upon criteria. Common categories include:
- Feasibility: Technology readiness, supply chain maturity, required skills.
- Performance: Ability to meet or exceed functional targets; margin to constraints.
- Economics: CapEx, OpEx, Net Present Value (NPV), payback period; sensitivity to key variables.
- Risk: Technical, schedule, regulatory; feasibility of mitigation.
- Integration: Compatibility with existing systems, modularity, maintainability.
Weights should be assigned to reflect project priorities (e.g., Performance 30%, Economics 30%, Feasibility 20%, Risk 10%, Integration 10%).
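To make the weighted scoring concrete, the sketch below computes weighted totals for two hypothetical alternatives using the example weights above. The alternative names and 1–5 scores are placeholders for illustration, not recommendations.

```python
# Minimal weighted decision matrix: criteria weights sum to 1.0,
# and each alternative is scored 1-5 against every criterion.
weights = {
    "performance": 0.30,
    "economics": 0.30,
    "feasibility": 0.20,
    "risk": 0.10,        # scored so that lower risk earns a higher score
    "integration": 0.10,
}

# Illustrative scores only (higher is better).
alternatives = {
    "Microchannel exchanger": {"performance": 5, "economics": 4, "feasibility": 3, "risk": 2, "integration": 4},
    "Modular plate exchanger": {"performance": 4, "economics": 4, "feasibility": 4, "risk": 4, "integration": 4},
}

def weighted_score(scores: dict, weights: dict) -> float:
    """Return the weighted total for one alternative."""
    return sum(weights[c] * scores[c] for c in weights)

# Rank alternatives from highest to lowest weighted score.
ranking = sorted(alternatives.items(),
                 key=lambda kv: weighted_score(kv[1], weights),
                 reverse=True)

for name, scores in ranking:
    print(f"{name}: {weighted_score(scores, weights):.2f}")
```

Keeping the weights in one place makes it easy to re-run the ranking when stakeholders revise priorities, which is exactly what the sensitivity step in the checklist below relies on.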
Process Steps (Checklist)
- Pre-screen ideas: Remove concepts that fail hard constraints or non-negotiable requirements.
- Define weights: Agree on criteria weights with stakeholders before scoring begins.
- Score objectively: Use 1–5 scales with defined anchors (e.g., TRL levels, NPV ranges).
- Run sensitivity tests: Vary weights and assumptions to test ranking robustness (see the sketch after this list).
- Shortlist alternatives: Select top candidates and include 1–2 “dark horse” ideas to preserve innovation.
- Document rationale: Record why ideas advanced or were dropped; link decisions to functional needs.
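One simple way to run the sensitivity step is to shift a small amount of weight between each pair of criteria and check whether the top-ranked alternative changes. The sketch below assumes the `weights` and `alternatives` dictionaries from the matrix sketch above; the ±0.05 shift is an arbitrary choice, not a standard.

```python
import itertools

def rank(alternatives: dict, weights: dict) -> list:
    """Return alternative names ordered by weighted score, best first."""
    return sorted(alternatives,
                  key=lambda a: sum(weights[c] * alternatives[a][c] for c in weights),
                  reverse=True)

def sensitivity_check(alternatives: dict, base_weights: dict, delta: float = 0.05):
    """Shift weight `delta` from one criterion to another for every ordered pair
    and count how often the top-ranked alternative changes."""
    base_top = rank(alternatives, base_weights)[0]
    changes, trials = 0, 0
    for gain, lose in itertools.permutations(base_weights, 2):
        if base_weights[lose] < delta:
            continue  # skip shifts that would drive a weight negative
        perturbed = dict(base_weights)
        perturbed[gain] += delta
        perturbed[lose] -= delta
        trials += 1
        if rank(alternatives, perturbed)[0] != base_top:
            changes += 1
    return base_top, changes, trials

# Example usage with the matrix defined earlier:
# top, changes, trials = sensitivity_check(alternatives, weights)
# print(f"Top pick '{top}' changed in {changes} of {trials} weight perturbations.")
```

A ranking that survives most perturbations is the kind of stability the sensitivity summary deliverable is meant to document; frequent flips signal that the shortlist decision hinges on contested weights.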
Tools and Templates
- Weighted decision matrix
- Lifecycle cost calculator (NPV/payback, energy/maintenance profiles; see the sketch after this list)
- Risk register (mitigation plans, residual risk)
- Assumption tracker (evidence links, sensitivity flags)
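As a rough illustration of what a lifecycle cost calculator computes, the sketch below implements a basic NPV and simple (undiscounted) payback calculation. The CapEx, annual savings, horizon, and discount rate are invented figures for demonstration only.

```python
def npv(rate: float, cash_flows: list) -> float:
    """Net present value, where cash_flows[0] is the year-0 amount
    (typically negative CapEx) and later entries are annual net savings."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def simple_payback(capex: float, annual_saving: float) -> float:
    """Years to recover the initial investment, ignoring discounting."""
    return capex / annual_saving if annual_saving > 0 else float("inf")

# Illustrative numbers: 120k CapEx, 30k/year net savings over 8 years, 8% discount rate.
cash_flows = [-120_000] + [30_000] * 8
print(f"NPV: {npv(0.08, cash_flows):,.0f}")
print(f"Payback: {simple_payback(120_000, 30_000):.1f} years")
```

Feeding the same cash-flow assumptions into both the economics score and the assumption tracker keeps the "single-number bias" and "hidden assumptions" pitfalls below visible during review.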
Deliverables and Acceptance Criteria
- Ranked shortlist: Top 3–5 alternatives with scores and narratives.
- Sensitivity summary: Demonstrates stability of rankings.
- Risk and economics overview: Key numbers and mitigation plans.
Acceptance criteria:
- Criteria and weights agreed and documented.
- Transparent scoring with evidence.
- Stakeholder concurrence on shortlist.
Common Pitfalls
- Weight drift: Changing criteria mid-process to favor a pet idea.
- Single-number bias: Overreliance on NPV without performance context.
- Hidden assumptions: Unstated dependencies skew outcomes.
- No sensitivity analysis: Rankings collapse when assumptions shift.
Example (Mini Case)
A team evaluating heat recovery options scored microchannel exchangers high on performance but flagged supply risk. Sensitivity analysis showed rankings remained stable under energy price fluctuations. Inclusion of a “dark horse” concept—a modular plate exchanger—offered a lower-risk path while preserving 85% of savings, strengthening stakeholder buy-in.
Transition to the Next Phase
With a ranked shortlist, the team proceeds to the Development Phase to build technical and commercial depth, turning alternatives into implementable solutions.
CTA: Download the evaluation matrix and lifecycle cost calculator.
