Author: Roberto Saldaña | Director of Innovation at EUPATI Spain

A month ago, I noted that the methodology to capture the patient voice in HTA exists, that the technology exists, and that what is missing is for more stakeholders to join in generating this evidence. Since then, at EUPATI Spain, we have taken a further step. I would like to share what we have learned.

The problem that remains unresolved
The EU HTA Regulation requires integrating the patient perspective and assessing the social impact of health technologies. However, agencies themselves acknowledge that they lack methodologies to do this rigorously. Qualitative evidence is often perceived as anecdotal. A recent review of dossiers submitted to NICE and CADTH showed that only a small fraction included documented qualitative analysis methods.

The result is a gap: the regulation calls for patient input, but no one has established how to give it enough formal structure for an evaluator to use it in decision-making.

Two worlds that do not speak to each other
In searching for solutions, we found that two communities are working in parallel without intersecting.

On one hand, the causal inference community applies Judea Pearl’s methods (causal graphs, counterfactual questions, the do-operator) to quantitative data to evaluate the comparative effectiveness of treatments. A recent article in Lancet Regional Health Europe advocates integrating these methods into European HTA. Another by Kühne et al. in GMS explores synergies between causal inference and health decision science.

On the other hand, the patient-based evidence community has long argued that qualitative research should carry more weight in HTA. Organizations such as EPF and EURORDIS have promoted frameworks to integrate the patient perspective, and the EMA’s Reflection Paper on patient experience data recognizes its value.

But one world works with electronic records and statistical models. The other works with interviews and thematic analysis. They do not speak to each other.

What we have tested: a bridge between the two
In IMPACTA HTA, we have applied Pearl’s causal inference framework to qualitative data from semi-structured interviews with real patients, in an ongoing study on a rare disease in Spain.

The idea is simple, although we have not found published precedents for this combination: interviews tell you what happens to patients. A causal graph formalizes why it happens and allows you to ask what would change if interventions were made. These are two tools for two different questions, applied to the same phenomenon. What had not been attempted—at least in the literature we reviewed—is using both simultaneously, feeding the second with data from the first.

In practice, this means building a causal model where each relationship (each arrow in the graph) is supported by recurring patterns in interviews—not isolated testimonies, but findings that appear consistently throughout the study. And where each absent arrow is an equally important statement: we assume that relationship does not exist. The result is a formal model that contains the patient voice within it, not alongside it.
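As a minimal sketch of that discipline (variable names and interview IDs are hypothetical, not from the study), one can represent the graph so that every arrow carries its supporting interviews, and arrows with thin support are flagged for scrutiny:

```python
from dataclasses import dataclass, field

@dataclass
class CausalGraph:
    nodes: set = field(default_factory=set)
    # (cause, effect) -> list of interview IDs where the pattern recurred
    edges: dict = field(default_factory=dict)

    def add_edge(self, cause, effect, support):
        """Register a cause->effect arrow with its supporting interviews."""
        self.nodes |= {cause, effect}
        self.edges[(cause, effect)] = support

    def weakly_supported(self, min_interviews=3):
        """Arrows backed by fewer interviews than the chosen threshold."""
        return [e for e, s in self.edges.items() if len(s) < min_interviews]

g = CausalGraph()
g.add_edge("disease_progression", "stigma", ["P01", "P04", "P07", "P09"])
g.add_edge("stigma", "social_isolation", ["P01", "P02"])

print(g.weakly_supported())  # only the arrow with two supporting interviews
```

The threshold of three interviews is an arbitrary placeholder; the point is that each arrow's evidential basis is an inspectable attribute of the model, not a footnote.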

What it produces
The regulation calls for integrating social value. But social value has layers, and each requires a different treatment:

First, qualitative social value: what patients experience that clinical data do not capture—stigma, progressive isolation, work impact, reproductive decisions shaped by disease. Interviews document this rigorously, but conventional qualitative methods stop there.

Second, quantifiable social value: turning those findings into formally structured questions. “Patients describe stigma” is qualitative. “If you slow disease progression, stigma stops worsening” is a causal question—with direction, identified variables, and the possibility of estimating magnitude. The causal graph is the bridge between the two.
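A toy structural causal model can make this shift concrete. Everything here is an illustrative placeholder (the linear mechanism and its coefficients are assumptions, not study estimates); what matters is that the qualitative claim becomes a comparison between two interventional quantities:

```python
import random

# Toy SCM with two variables:
#   progression := exogenous noise (or set by intervention)
#   stigma      := assumed to grow with progression
def sample_stigma(do_progression=None, seed=0):
    rng = random.Random(seed)
    progression = rng.uniform(0, 1) if do_progression is None else do_progression
    stigma = 0.2 + 0.7 * progression  # hypothetical mechanism
    return stigma

# "If you slow disease progression, stigma stops worsening" becomes:
# compare stigma under do(progression = low) vs. the observed regime.
observed = sample_stigma()                    # no intervention
slowed = sample_stigma(do_progression=0.1)    # do(progression = 0.1)
print(observed, slowed)
```

The do-notation forces the question into a shape with a direction, identified variables, and an estimable magnitude, which is exactly what separates it from "patients describe stigma."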

Third, the patient perspective integrated into evaluation—not attached as an appendix. Each node in the model is linked to testimonies. Lived experience and causal structure become the same argument expressed in two languages. This is not a patient report appended at the end of an HTA dossier; it is a framework that connects lived experience with the causal question the evaluator needs to answer.

From the same model, we have derived counterfactual questions tailored to different stakeholders: the HTA evaluator, hospital manager, social security system, clinician, and regional decision-maker. Each question is formally derived from the graph and becomes quantifiable as the sample grows. All are anchored in verifiable healthcare system costs from official and published sources.

What we have learned
Three unexpected insights:

First: when patients speak, they are already engaging in counterfactual reasoning. When a patient says, “I’d be satisfied just stopping progression,” they are essentially formulating what in Pearl’s notation would be a question about treatment effect on progression. The experience does not need to be translated into the model—it already contains the model. It only needs to be formalized.

Second: the causal framework forces explicit declaration of limitations. Every assumption is visible. Every potential bias is identified within the graph itself. This is exactly what an HTA evaluator needs to trust evidence—not that it is perfect, but that it is transparent about what it is not.
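One way this transparency can be operationalized (again with hypothetical variable names) is to enumerate the graph's absent arrows mechanically: each one is a "no direct effect" claim that an evaluator can read and challenge.

```python
from itertools import permutations

# The arrows we assert (hypothetical study variables):
edges = {
    ("treatment", "progression"),
    ("progression", "stigma"),
    ("stigma", "isolation"),
}
nodes = {n for e in edges for n in e}

# Every missing arrow is itself an explicit, criticisable assumption:
assumed_absent = [
    f"no direct effect of {a} on {b}"
    for a, b in permutations(sorted(nodes), 2)
    if (a, b) not in edges
]
for claim in assumed_absent:
    print(claim)
```

With four variables and three asserted arrows, nine such assumptions surface automatically; none of them stays implicit.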

Third: a single model generates multiple questions for multiple decision-makers. It is not a one-off exercise; it is a generative tool that produces more value as more data are added.

What comes next
We are preparing a methodological publication and expanding the sample of the ongoing study. We have also incorporated into the analysis the framework by Mueller and Pearl (2025) on the definition of “harm” in personalized medicine, published in the American Journal of Epidemiology. It argues that the key question is not only whether a treatment works on average, but who benefits and who may be harmed—and that observational data contain valuable information to answer this.
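The "who benefits and who may be harmed" question has a well-known quantitative counterpart: the Tian–Pearl bounds on the probability of benefit (PNS), computable from two interventional quantities. The numbers below are illustrative only, not estimates from the study:

```python
def pns_bounds(p_y_do_x: float, p_y_do_not_x: float):
    """Tian-Pearl bounds on the probability of benefit:
    PNS = P(outcome occurs with treatment AND would not occur without it).
    Inputs are P(outcome | do(treat)) and P(outcome | do(no treat))."""
    lower = max(0.0, p_y_do_x - p_y_do_not_x)
    upper = min(p_y_do_x, 1.0 - p_y_do_not_x)
    return lower, upper

# Illustrative figures: 60% respond under treatment,
# 25% would have improved without it.
low, high = pns_bounds(0.60, 0.25)
print(low, high)  # between 35% and 60% of patients truly benefit
```

Mueller and Pearl's point is that combining observational with experimental data can narrow such bounds further, which is precisely the kind of information an HTA evaluator needs for a personalized assessment.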

The methodology exists. We have applied it to a first real case. And we believe it may be the bridge that the new regulation needs between what patients experience and what evaluators decide.

We will continue sharing what we learn along the way.

References
1. Regulation (EU) 2021/2282 on health technology assessment.
2. European Medicines Agency. Reflection paper on patient experience data (EMA/CHMP/PRAC/148869/2025).
3. Pearl, J. (2009). Causality: Models, Reasoning, and Inference. 2nd ed. Cambridge University Press.
4. Mueller, S. & Pearl, J. (2025). The meaning of “harm” in personalized medicine. Am J Epidemiol, 194(6), 1749–1751.
5. Lancet Regional Health Europe (April 2025). Strengthening HTA by integrating causal inference and target trial emulation.
6. Kühne, F. et al. (2022). Causal evidence in health decision making. GMS, 20.
7. EUPATI Spain Conference, European Commission headquarters, Madrid, January 2026.