The Promise and the Paradox of Wire-Free Physiology
The landscape of interventional cardiology has been fundamentally reshaped by the shift from anatomical to functional assessment of coronary artery disease. Fractional flow reserve (FFR) has long stood as the gold standard, supported by robust data from the FAME trials demonstrating that physiology-guided revascularization improves patient outcomes compared to angiography alone. However, the adoption of FFR in routine practice remains hampered by the costs of pressure wires, the need for hyperemic agents like adenosine, and the inherent risks of crossing stenoses with a physical probe.
Quantitative Flow Ratio (QFR) emerged as a compelling solution to these barriers. By utilizing three-dimensional vessel reconstruction and fluid dynamics equations derived from standard angiographic images, QFR offers a computational estimation of FFR without the need for wires or drugs. Initial studies, particularly the FAVOR III China trial, showed that QFR-guided strategies were superior to angiography-guided ones. However, the more recent FAVOR III Europe trial introduced a significant clinical paradox: a QFR-guided strategy failed to demonstrate non-inferiority compared with traditional FFR guidance. This discrepancy led to a critical question within the scientific community: was the technology itself flawed, or was the implementation during the procedure the weak link? The REPEAT-QFR study was designed to address this pivotal uncertainty.
The REPEAT-QFR Study Design and Objectives
To understand the variability observed in the FAVOR III Europe trial, researchers conducted the REPEAT-QFR study, a comprehensive quality assessment and repeatability analysis. The study focused on 1,008 patients who had been randomized to the QFR arm of the original trial. The primary objective was twofold: first, to compare the QFR values calculated by clinicians during the procedure (in-procedure QFR) with those calculated retrospectively by highly trained experts in a centralized core laboratory (core laboratory QFR); and second, to evaluate the quality of the in-procedure analyses by grading adherence to the standard operating procedure (SOP).
The methodology involved two blinded observers in the core laboratory who re-analyzed 1,233 vessels. These observers were unaware of the initial in-procedure results. Quality was assessed on a 5-point scale, ranging from 1 (very poor) to 5 (very good), based on specific technical criteria such as the selection of appropriate angiographic views, the accuracy of vessel contouring, and the correct identification of the distal landmark for pressure estimation.
Key Findings: A Disconnect Between Bedside and Core Lab
The results of the REPEAT-QFR study highlight a significant gap between controlled laboratory analysis and real-world clinical application. Of the 1,233 vessels analyzed in-procedure, 96.6% (1,191 vessels) were successfully re-analyzed by the core laboratory, indicating that the vast majority of images were technically sufficient for the software to function. However, the numerical agreement between the two sets of measurements was only modest.
The median in-procedure QFR was 0.81, while the median core laboratory QFR was 0.84. While a mean difference of 0.02 might seem negligible at first glance, the 95% limits of agreement were surprisingly wide, ranging from -0.26 to 0.29. This implies that for any given vessel, the in-procedure calculation could deviate from the expert core lab calculation by nearly 0.30 units in either direction, a substantial spread in a field where the diagnostic cut-off for intervention sits at 0.80. The Spearman's rank correlation coefficient was 0.58, and the overall diagnostic agreement (whether the lesion was classified as ischemic or non-ischemic) was only 72%.
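The agreement statistics reported here follow the standard Bland-Altman approach: the bias is the mean of the paired differences, and the 95% limits of agreement are the bias ± 1.96 standard deviations of those differences. A minimal sketch of the calculation, using synthetic paired values rather than the trial data (the arrays, sample size, and noise parameters below are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic paired QFR measurements -- illustrative only, NOT trial data
core_lab = rng.normal(0.84, 0.10, 500).clip(0.3, 1.0)
in_proc = (core_lab + rng.normal(-0.02, 0.12, 500)).clip(0.3, 1.0)

# Bland-Altman analysis: bias and 95% limits of agreement
diff = in_proc - core_lab
bias = diff.mean()
sd = diff.std(ddof=1)
loa_low, loa_high = bias - 1.96 * sd, bias + 1.96 * sd

# Diagnostic (classification) agreement at the 0.80 ischemia cut-off
agree = np.mean((in_proc <= 0.80) == (core_lab <= 0.80))

print(f"bias={bias:.3f}, LoA=[{loa_low:.3f}, {loa_high:.3f}], agreement={agree:.0%}")
```

The key point the sketch makes concrete is that a small bias is compatible with wide limits of agreement: the bias reflects only the average offset, while the limits capture per-vessel scatter, which is what drives misclassification around the 0.80 threshold.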
Evaluating Quality and Adherence
A critical component of the study was the quality scoring of the in-procedure analyses. The researchers found that 19% of the analyses demonstrated ‘very good’ adherence to the SOP, 45% were ‘good’, and 28% were ‘acceptable’. However, 8% were rated as ‘poor’ or ‘very poor’. While the majority of clinicians followed the protocols reasonably well, the cumulative effect of minor deviations—such as suboptimal frame rate selection or imprecise contouring—contributed to the observed variability.
The study identified several independent predictors of increased variability between in-procedure and core lab results. These included:
1. Suboptimal Angiographic Quality
Images with poor contrast opacification or significant vessel overlap made accurate 3D reconstruction difficult for the software, leading to divergent results.
2. In-Procedure Analysis Quality
Lower adherence to the SOP directly correlated with higher measurement error. This underscores the ‘human factor’ in computational physiology; the software is only as accurate as the data and parameters provided by the operator.
3. Disease Complexity (High SYNTAX Score)
Patients with more complex coronary anatomy, as measured by a high SYNTAX score, presented greater challenges for QFR. Diffuse disease and tortuous vessels complicate the mathematical modeling of flow.
4. Clinical Factors
The presence of diabetes was also associated with higher variability, potentially due to the different microvascular profiles and vessel remodeling patterns found in diabetic patients, which may influence the assumptions used in QFR algorithms.
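The logic behind such a predictor analysis can be illustrated with a simple subgroup comparison: simulate a binary patient factor that widens the in-procedure versus core lab disagreement, then compare the absolute differences between subgroups. Everything below (the prevalence, effect size, and noise levels) is an assumed toy setup, not the study's actual multivariable model:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 400

# Toy setup: a binary factor (e.g. diabetes) assumed to widen the
# in-procedure vs core lab disagreement; effect sizes are invented
factor = rng.random(n) < 0.3
noise_sd = np.where(factor, 0.14, 0.08)      # assumed per-group scatter
delta = rng.normal(0.0, noise_sd)            # in-procedure minus core lab

# Compare mean absolute disagreement between subgroups
abs_with = np.abs(delta[factor]).mean()
abs_without = np.abs(delta[~factor]).mean()
print(f"mean |dQFR|: with factor={abs_with:.3f}, without={abs_without:.3f}")
```

In the actual study, identifying *independent* predictors would require a multivariable regression adjusting for the other factors (angiographic quality, SOP adherence, SYNTAX score) simultaneously; the subgroup contrast above only shows the univariate intuition.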
Expert Commentary: Implications for Clinical Practice
The findings of REPEAT-QFR provide a sobering look at the challenges of transitioning sophisticated computational tools from the lab to the catheterization suite. The ‘modest’ agreement observed suggests that the disappointing results of the FAVOR III Europe trial may not be due to a failure of the QFR technology itself, but rather to the inherent difficulty of performing precise physiological modeling during a live procedure.
Clinical experts suggest that these results should serve as a call for standardized training and more rigorous quality control. Unlike FFR, which provides a direct physical measurement, QFR is a derivation that relies on the operator’s ability to ‘curate’ the input data. If the input is flawed, the output will inevitably be unreliable. There is a growing consensus that the next generation of QFR software should incorporate more automated features, such as AI-driven vessel contouring and real-time quality feedback, to minimize the impact of human error.
Conclusion: Refining the Path Forward
In conclusion, the REPEAT-QFR study demonstrates that while QFR is a feasible and guideline-recommended tool, its accuracy in routine clinical practice is highly dependent on the quality of image acquisition and the technical proficiency of the operator. The significant variability found between in-procedure and core lab analyses highlights the need for a more disciplined approach to wire-free physiological assessment. For QFR to truly challenge FFR as a primary diagnostic tool, the interventional community must focus on narrowing the gap between ‘ideal’ analysis and ‘real-world’ application through better education, stricter adherence to protocols, and technological refinement.
Funding and ClinicalTrials.gov
The FAVOR III Europe trial and the REPEAT-QFR substudy were supported by grants from the Danish Heart Foundation, the Aarhus University Research Foundation, and various institutional research funds. The trial is registered at ClinicalTrials.gov with the identifier NCT03729739.
References
1. Kristensen SK, Holm MB, Maillard L, et al. Repeatability and quality assessment of QFR in the FAVOR III Europe trial: the REPEAT-QFR study. EuroIntervention. 2026;22(1):e53-e65. doi:10.4244/EIJ-D-25-00668.
2. Xu B, Tu S, Song L, et al. Angiographic quantitative flow ratio-guided coronary intervention (FAVOR III China): a multicentre, randomised, sham-controlled trial. Lancet. 2021;398(10317):2149-2159.
3. Fearon WF, Zimmermann FM, De Bruyne B, et al. Fractional Flow Reserve-Guided PCI as Compared with Coronary-Artery Bypass Surgery. N Engl J Med. 2022;386(2):128-137.

