Addressing Uncertainty Within EU HTA Regulation

Written by Emanuele Arcà & Sonja Kroep on Wednesday 3rd April 2024

A recurring topic in all EU health technology assessment (HTA) discussions is the need to understand and address uncertainty. Stakeholders want to know which disease areas and/or health innovations are most affected by uncertainty and, most importantly, when and how to address that uncertainty using real-world evidence (RWE), particularly in the context of treatment comparisons.

Guidelines on Direct and Indirect Comparisons 

On March 8, 2024, the Member State Coordination Group on Health Technology Assessment officially adopted both the Practical Guideline for Quantitative Evidence Synthesis: Direct and Indirect Comparisons and the Methodological Guideline for Quantitative Evidence Synthesis: Direct and Indirect Comparisons. While the 2024 guidelines closely resemble the EUnetHTA 2022 guidelines for direct and indirect comparisons, they have now been endorsed as the official standard by the entire coordination group, solidifying their status as the authoritative reference for quantitative evidence synthesis within the EU HTA process.

The practical guideline provides detailed instructions for handling evidence syntheses in Joint Clinical Assessment (JCA) reports, specifically focusing on direct and indirect treatment comparisons submitted by health technology developers (HTDs). It outlines requirements for reporting these syntheses in JCA reports but does not dictate when or whether a particular synthesis method should be used, as this depends on the PICO (population, intervention, comparator, outcomes) questions and the available evidence. The guideline aims to help assessors identify and address bias and uncertainty, acknowledging that decisions may vary among Member States (MSs). It emphasizes the importance of HTDs providing adequate evidence and supporting information so that assessors can evaluate treatment effectiveness, bias, and uncertainty. Even when uncertainties remain, evidence synthesis may still be necessary, and the guideline aims to help assessors mitigate the resulting issues. These are some of the major recommendations listed in the guideline:

  • Randomized controlled trials (RCTs) are preferred for evidence synthesis if well designed and at low risk of bias.
  • Exchangeability is fundamental for all evidence synthesis methods, requiring similarity and homogeneity of the trial data.
  • Subgroup analyses or (network) meta-regression may be used to address heterogeneity but might not eliminate it entirely.
  • Options for evidence synthesis vary from fixed-effects to random-effects models, with frequentist or Bayesian approaches offering diverse analytical perspectives.
  • Utilizing individual patient data enhances statistical precision, while frequentist methods such as inverse-variance weighting and Mantel-Haenszel aid in direct comparisons.
  • The Knapp-Hartung method is recommended for random-effects meta-analyses when a sufficient number of studies is available (see the sketch below).
  • While indirect comparisons pose greater uncertainties, anchored comparisons respecting randomization remain preferable when direct comparisons are unavailable.
  • Addressing heterogeneity through subgroup analyses or meta-regression is vital, yet sensitivity analyses are crucial, especially in non-randomized data scenarios.

In summary, the choice of methodology depends on various factors and should align with the available data and underlying assumptions. Careful consideration of those assumptions, together with expert statistical input, is advised for unbiased inference.
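
To make these recommendations concrete, below is a minimal Python sketch of inverse-variance fixed-effect pooling and a DerSimonian-Laird random-effects model with the Knapp-Hartung adjustment referenced in the list above. The function name and all input values are hypothetical; this illustrates the general technique rather than any software an assessor or HTD would be expected to use.

```python
import numpy as np
from scipy import stats

def meta_analysis(effects, std_errors):
    """Inverse-variance fixed-effect pooling plus a random-effects model
    (DerSimonian-Laird tau^2) with the Hartung-Knapp adjustment."""
    y = np.asarray(effects, dtype=float)   # e.g., log hazard ratios
    se = np.asarray(std_errors, dtype=float)
    k = len(y)

    # Fixed-effect: weight each study by the inverse of its variance
    w_fe = 1.0 / se**2
    theta_fe = np.sum(w_fe * y) / np.sum(w_fe)
    se_fe = np.sqrt(1.0 / np.sum(w_fe))

    # DerSimonian-Laird estimate of between-study variance tau^2
    q = np.sum(w_fe * (y - theta_fe) ** 2)
    c = np.sum(w_fe) - np.sum(w_fe**2) / np.sum(w_fe)
    tau2 = max(0.0, (q - (k - 1)) / c)

    # Random-effects pooling; Hartung-Knapp variance with a t-based 95% CI
    w_re = 1.0 / (se**2 + tau2)
    theta_re = np.sum(w_re * y) / np.sum(w_re)
    var_hk = np.sum(w_re * (y - theta_re) ** 2) / ((k - 1) * np.sum(w_re))
    ci = theta_re + np.array([-1, 1]) * stats.t.ppf(0.975, k - 1) * np.sqrt(var_hk)

    return {"fixed": (theta_fe, se_fe),
            "random": (theta_re, np.sqrt(var_hk)),
            "tau2": tau2, "hk_95ci": ci}

# Hypothetical log hazard ratios and standard errors from five trials
print(meta_analysis([-0.30, -0.12, -0.45, -0.05, -0.25],
                    [0.12, 0.15, 0.20, 0.10, 0.18]))
```

Note how the Knapp-Hartung interval uses a t-distribution with k - 1 degrees of freedom and a variance estimated from the observed between-study spread, which is why it behaves better than a normal approximation when few studies are available.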

Addressing Uncertainty

The published guidelines align with standard practice and with other guidelines published on the subject, and they are readily applicable when a robust evidence base is available (e.g., RCTs for comparator treatments). However, this will not be the case in all indications, particularly in the context of advanced therapy medicinal products (ATMPs) and rare diseases. Two potential issues with the current guidelines become apparent when addressing uncertainty in the evidence base. Firstly, when conducting an unanchored population-adjusted treatment comparison, the guidelines mandate the availability of individual patient-level data; this requirement often cannot be met. Secondly, especially in the context of ATMPs and rare diseases, it is common to encounter a lack of comparator data stemming from single-arm or observational studies. This is where RWE comes into play.
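
To illustrate what an unanchored population adjustment involves when only aggregate data are available for the comparator, here is a minimal sketch of matching-adjusted indirect comparison (MAIC) weight estimation via the standard method-of-moments approach. The covariates, sample size, and target means are simulated placeholders; a real analysis would demand careful covariate selection, outcome modeling, and sensitivity analyses.

```python
import numpy as np
from scipy.optimize import minimize

def maic_weights(ipd_covariates, aggregate_means):
    """Estimate MAIC weights so the weighted means of the IPD covariates
    match the aggregate baseline means reported for the comparator study."""
    X = np.asarray(ipd_covariates, dtype=float) - np.asarray(aggregate_means, dtype=float)

    # Minimizing sum(exp(X @ alpha)) enforces the balance condition:
    # its gradient is zero exactly when weighted means equal the targets.
    def objective(alpha):
        return np.sum(np.exp(X @ alpha))

    res = minimize(objective, x0=np.zeros(X.shape[1]), method="BFGS")
    weights = np.exp(X @ res.x)
    ess = weights.sum() ** 2 / np.sum(weights**2)  # effective sample size
    return weights, ess

# Simulated IPD: age and proportion male for 200 patients (hypothetical)
rng = np.random.default_rng(0)
ipd = np.column_stack([rng.normal(62, 8, 200), rng.binomial(1, 0.55, 200)])
w, ess = maic_weights(ipd, aggregate_means=[65.0, 0.60])
print(f"Effective sample size after weighting: {ess:.1f}")
```

The effective sample size printed at the end is a simple diagnostic of how much statistical information the reweighting sacrifices, which is itself a useful input when judging residual uncertainty.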

Using RWE becomes imperative when estimating the efficacy and safety of the intervention compared with other treatments or the standard of care, and population-adjusted methods can be applied to RWE. Yet the quality of the RWE and its careful use are paramount to guaranteeing fair treatment comparisons; this principle extends to the establishment of an RWE-based comparator arm. The complexity becomes even more apparent when we consider the application of multiple PICOs for JCA submissions. While the exact implementation remains unclear, the current guidelines imply that each PICO demands a separate evidence synthesis, and therefore a separate evidence base. How can we ensure consistent conclusions across PICOs and streamline a JCA report that integrates a wealth of evidence, network meta-analyses, and indirect treatment comparisons?

An essential component in this process is the introduction of an uncertainty acceptability metric, allowing stakeholders and the review group to assess evidential robustness at a glance.

For quantitative evidence synthesis, evaluating the sources of evidence provides an indication of robustness. However, when no RCT evidence is available, addressing uncertainty becomes more challenging. Introducing quality metrics for the inclusion of RWE in quantitative evidence synthesis, and assessing a treatment's impact on the indication, can be highly valuable. For example, ATMPs may face challenges in establishing a strong evidence base; nonetheless, the risk of not approving a treatment because of scarce evidence, particularly when it is compared with more established oncology treatments, could outweigh concerns about the uncertainty of that evidence base.

The evidence uncertainty metric should therefore take into account the type of indirect comparison, the evidence base, the RWE, the indication, and the prospective additional value of the intervention.
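
As a purely illustrative sketch of how the inputs to such a metric might be structured, consider the snippet below. Every dimension, scale, and weight is a hypothetical placeholder rather than a validated instrument; it merely encodes the trade-off described above, in which high unmet need can offset a weaker evidence base.

```python
from dataclasses import dataclass

# All dimensions, scales, and weights are hypothetical placeholders:
# no official scoring system exists yet, and the metric described in
# the text is still to be developed and validated.
@dataclass
class EvidenceProfile:
    comparison_type: float  # 0 = head-to-head RCT ... 1 = unanchored indirect
    evidence_base: float    # 0 = multiple RCTs ... 1 = single-arm study only
    rwe_quality: float      # 0 = high-quality registry ... 1 = ad hoc chart review
    unmet_need: float       # 0 = many alternatives ... 1 = no treatment available

def uncertainty_score(p: EvidenceProfile) -> float:
    """Toy weighted score: higher values flag less acceptable uncertainty.
    High unmet need (e.g., an ATMP in a rare disease) dampens the penalty
    for a weaker evidence base, mirroring the trade-off described above."""
    raw = 0.35 * p.comparison_type + 0.35 * p.evidence_base + 0.30 * p.rwe_quality
    return raw * (1.0 - 0.5 * p.unmet_need)

# An ATMP with a single-arm trial, good registry RWE, and high unmet need
print(uncertainty_score(EvidenceProfile(1.0, 1.0, 0.2, 0.9)))
```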

In light of the elements discussed, OPEN Health is embarking on a comprehensive exploration of uncertainties within the EU HTA landscape. This involves a meticulous mapping exercise to identify and understand factors contributing to uncertainty in health technology assessments. The objective is to develop and validate an uncertainty metric to inform evidence generation around multiple PICOs. OPEN Health actively engages with key stakeholders to glean insights and perceptions regarding existing uncertainties, ensuring a well-rounded understanding that informs strategic decision-making.

Working in partnership with our clients, we embrace our different perspectives and strengths to deliver fresh thinking and solutions that make a difference.

Together we can unlock possibilities.

For information about OPEN Health’s services and how we could support you, please get in touch.