Author: Andrea de la Fuente
Artificial intelligence (AI) is significantly transforming multiple sectors of the economy and public administration. In the healthcare domain, its potential impact ranges from improved diagnosis and clinical monitoring to the optimization of administrative processes and the advancement of biomedical research (1,2). This expansion entails a proportional responsibility: the need to generate robust clinical evidence, ensure safety and equity, and establish regulatory frameworks that guarantee compliance with acceptable health and ethical standards (3,4). The balance between innovation and control thus emerges as both a strategic opportunity and a relevant challenge for managers, evaluators, and health policy decision-makers (2,3).
Tangible benefits: areas in which AI can improve outcomes and efficiency
AI offers numerous opportunities for healthcare systems, with the potential to translate into both better health outcomes and a more efficient allocation of healthcare expenditure, particularly when adoption is accompanied by economic evaluation and real-world impact studies (1,2,5). Key areas of contribution include (1,2,5):
- Improved diagnostic accuracy and reduced clinical errors through imaging models and digital pathology.
- Increased operational efficiency through the automation of administrative tasks and prioritization of care resources.
- Strengthened surveillance and early response capacity through the analysis of heterogeneous data streams.
- Democratization of access to specialized clinical care through the deployment of digital tools in settings with a shortage of specialists.
Critical barriers: evidence, equity, and ethics
The integration of AI into the healthcare sector poses challenges of several distinct kinds:
- Evidence and validation: a substantial proportion of the literature is based on preclinical or simulation studies that do not equate to clinical validation. It is essential to distinguish between computational performance and real clinical benefit, using appropriate evaluation designs (3,4).
- Infrastructure and capabilities: effective adoption requires interoperable systems, robust data governance, and professionals trained in digital health—factors that remain limiting in many administrations and healthcare centers (2,5).
- Bias and equity risks: models trained on non-representative data may reproduce or amplify existing inequalities, negatively impacting vulnerable populations (2,6).
- Cybersecurity and privacy: large-scale data processing increases the risk of re-identification and cyberattacks, requiring strict technical and contractual controls (2,6).
- Operational ethics: ethical principles applicable to AI in health (accountability, autonomy, non-maleficence, equity, privacy, transparency, and trust) require practical translation into organizational processes and decisions (6). This involves documenting datasets and training processes, ensuring human oversight in critical decisions, and systematically assessing the impacts of algorithmic interventions (4,6).
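The bias-and-equity audit described above can be made concrete. The following is a minimal sketch (all data, group labels, and the 10-point disparity threshold are hypothetical) of comparing a classifier's sensitivity across patient subgroups to surface potential inequities:

```python
# Hypothetical audit: compare a classifier's sensitivity across patient
# subgroups to flag potential bias (all data below is illustrative).

def sensitivity(records):
    """True-positive rate: share of actual positives the model flagged."""
    positives = [r for r in records if r["label"] == 1]
    if not positives:
        return None
    return sum(r["pred"] for r in positives) / len(positives)

def audit_by_group(records, group_key, max_gap=0.10):
    """Return per-group sensitivity and whether the spread exceeds max_gap."""
    groups = {}
    for r in records:
        groups.setdefault(r[group_key], []).append(r)
    rates = {g: sensitivity(rs) for g, rs in groups.items()}
    observed = [v for v in rates.values() if v is not None]
    flagged = max(observed) - min(observed) > max_gap
    return rates, flagged

# Toy cohort: label = ground truth, pred = model output (1 = positive)
cohort = [
    {"group": "A", "label": 1, "pred": 1},
    {"group": "A", "label": 1, "pred": 1},
    {"group": "A", "label": 0, "pred": 0},
    {"group": "B", "label": 1, "pred": 0},
    {"group": "B", "label": 1, "pred": 1},
    {"group": "B", "label": 0, "pred": 0},
]

rates, flagged = audit_by_group(cohort, "group")
print(rates, flagged)  # group A: 1.0, group B: 0.5 -> disparity flagged
```

In practice such audits would use clinically meaningful subgroups and confidence intervals, and feed into the documented human-oversight processes cited in the text (4,6).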
Regulatory framework in Europe: the AI Act
The European Union has adopted a risk-based regulatory approach through the AI Act, which establishes differentiated obligations depending on the system’s risk level and requires transparency, risk management, and oversight for applications classified as high-risk (2). This framework is complemented by the Medical Devices Regulation and the forthcoming European Health Data Space, which condition both the use of health data and the commercialization of AI-based clinical solutions (2).
From regulation to practice: AESIA guidelines and adaptation to the Spanish context
In Spain, the Spanish Agency for the Supervision of Artificial Intelligence (AESIA) is responsible for supervising, regulating, and ensuring the responsible use of artificial intelligence. This agency has published practical guidelines that provide operational tools to adapt the requirements of the AI Act and European regulatory obligations to national practice. These include risk management checklists, transparency criteria, technical documentation requirements, and recommendations for post-market surveillance, among others (7). Their application facilitates the incorporation of technology assessment criteria by hospitals and evaluation agencies, as well as the requirement for clinical evidence proportional to the risk level of the evaluated system (7).
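Checklists of this kind lend themselves to machine-readable form, so that hospitals and evaluation agencies can track compliance per system. The sketch below is purely illustrative: the item identifiers and wording are hypothetical examples of the kinds of requirements such guides address, not text taken from the AESIA documents.

```python
# Illustrative only: the checklist items below are hypothetical examples of
# the kinds of requirements regulatory guides address, not their actual text.

CHECKLIST = [
    {"id": "risk-01", "area": "risk management",
     "item": "Documented risk analysis covering foreseeable misuse"},
    {"id": "doc-01", "area": "technical documentation",
     "item": "Description of training data provenance and curation"},
    {"id": "pms-01", "area": "post-market surveillance",
     "item": "Procedure for logging and reviewing incidents in clinical use"},
]

def compliance_report(completed_ids):
    """Summarize which checklist items a given AI system has satisfied."""
    done = {c["id"] for c in CHECKLIST} & set(completed_ids)
    pending = [c["id"] for c in CHECKLIST if c["id"] not in done]
    return {"completed": sorted(done), "pending": pending}

report = compliance_report(["risk-01", "doc-01"])
print(report)  # pending: ['pms-01']
```

A structure like this makes it straightforward to require evidence proportional to risk level: higher-risk systems would simply carry more, and stricter, checklist items.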
AI in action in Spain: clinical cases and public systems
Spain has already developed multiple initiatives that illustrate the practical application of AI across different levels of care (4,8,9). These include:
- Risk prediction and stratification systems integrated into electronic health records to prioritize interventions in patients with chronic conditions (4).
- Image-based early detection tools (diabetic retinopathy, AI-assisted mammography screening) that complement specialist work (4).
- Monitoring solutions using wearables that enable remote follow-up and the generation of early alerts (4,5).
- Hospital pilot projects aimed at optimizing bed management and resource allocation during care demand peaks, combining demographic, epidemiological, and occupancy data for the planning of vaccination campaigns and care points (2,4).
- AI applications in hospital pharmacy focused on analyzing large volumes of data to support clinical decision-making, optimize pharmacotherapeutic processes, and improve patient safety, efficiency, and quality of care (8).
- Primary care initiatives exploring the use of language model–based assistants for consultation summarization and support in clinical report drafting, with the aim of reducing administrative burden (1,4).
- Applications in research and public health that use AI for population segmentation, tailored communication campaign design, and improved epidemiological surveillance through the analysis of unstructured data (2,4,5).
- Developments in clinical research and digital pathology, including models for the analysis of histological samples and the generation of synthetic data that preserve privacy and facilitate multicenter studies (1,4,6).
- Tools that enable natural language queries about human medicines to provide immediate answers based on official package leaflet information, improving accessibility, understanding, and transparency of pharmacological information for the public (9).
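The last item, natural-language lookup of leaflet information, can be illustrated generically. The cited source does not describe the actual architecture of the AEMPS tool; the sketch below simply shows the underlying idea of matching a free-text question to the most relevant leaflet section, using hypothetical, abridged leaflet text and naive word overlap in place of a language model:

```python
# Generic illustration (not the actual AEMPS implementation): answer a
# free-text question by returning the package-leaflet section whose text
# shares the most words with the question.
import re

LEAFLET = {  # hypothetical, abridged leaflet sections
    "dosage": "Adults: one 500 mg tablet every eight hours with water.",
    "side effects": "May cause nausea, headache or dizziness in some patients.",
    "storage": "Store below 25 degrees Celsius, away from direct sunlight.",
}

def tokens(text):
    """Lowercased word set, punctuation stripped."""
    return set(re.findall(r"\w+", text.lower()))

def best_section(question):
    """Pick the leaflet section with the greatest word overlap."""
    q = tokens(question)
    scores = {name: len(q & tokens(text)) for name, text in LEAFLET.items()}
    return max(scores, key=scores.get)

q = "what side effects like nausea can this cause"
print(best_section(q))  # -> "side effects"
```

A production tool would replace the word-overlap scoring with semantic retrieval and generation grounded in the official leaflet corpus, which is what makes the answers traceable to authorized information.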
Towards responsible healthcare AI: priorities for the coming years
The immediate horizon requires coordinated progress along three priority lines (2,3,7):
- Strengthening clinical evidence through prospective studies and trials that measure outcomes relevant to patients and healthcare systems.
- Developing institutional capacities (data infrastructure, training, and health technology assessment units) to integrate AI into procurement, evaluation, and decision-making processes.
- Harmonizing governance and auditing to ensure equity, transparency, and safety, combining regulation, independent audits, and citizen participation.
AI applied to healthcare offers substantial opportunities to improve outcomes and efficiency. However, its responsible adoption depends on the availability of robust evidence, appropriate data governance, professional training, and risk-adjusted regulatory frameworks. Spain already has practical tools to guide implementation in line with the AI Act, but their real impact will require effective coordination among clinicians, health economists, evaluation agencies, and public decision-makers, with the aim of transforming technological potential into real and equitable benefits for the population (2–4,7).
References
1. Icahn School of Medicine at Mount Sinai. (2023). Frontiers of Medical Research: Artificial Intelligence. Science/AAAS sponsored supplement.
2. Panteli, D., Adib, K., Buttigieg, S., Goiana-da-Silva, F., Ladewig, K., Azzopardi-Muscat, N., Figueras, J., Novillo-Ortiz, D., & McKee, M. (2025). Artificial intelligence in public health: promises, challenges, and an agenda for policy makers and public health institutions. The Lancet. Public health, 10(5), e428–e432. https://doi.org/10.1016/S2468-2667(25)00036-2
3. Kouzy, R., Hong, J. C., & Bitterman, D. S. (2025). One shot at trust: building credible evidence for medical artificial intelligence. The Lancet Digital Health, 7, 100883. https://doi.org/10.1016/j.landig.2025.100883
4. Puchades, R., Ramos-Ruperto, L., & Grupo de Trabajo de Medicina Digital de la SEMI. (2024). Inteligencia artificial en la práctica clínica: calidad y evidencia. Revista Clínica Española, 225, 23–27. https://doi.org/10.1016/j.rceng.2024.11.001
5. Castaño Castaño, S. (2025). La inteligencia artificial en Salud Pública: oportunidades, retos éticos y perspectivas futuras. Revista Española de Salud Pública, 99, e202503017.
6. Ning, Y., Teixayavong, S., Shang, Y., Savulescu, J., et al. (2024). Generative artificial intelligence and ethical considerations in health care: a scoping review and ethics checklist. The Lancet Digital Health, 6, e848–e856. https://doi.org/10.1016/S2589-7500(24)00143-2
7. Agencia Española de Supervisión de la Inteligencia Artificial (AESIA). Guías. Gobierno de España. https://aesia.digital.gob.es/es/guias (accessed 11 February 2025).
8. González-Pérez, Y., Montero Delgado, A., & Martínez Sesmero, J. M. (2024). Acercando la inteligencia artificial a los servicios de farmacia hospitalaria. Farmacia Hospitalaria, 48(Suppl. 1), S35–S44. https://doi.org/10.1016/j.farma.2024.02.007
9. Agencia Española de Medicamentos y Productos Sanitarios. (2025). La AEMPS lanza MeQA, una herramienta de IA pionera en la respuesta a preguntas sobre medicamentos de uso humano. AEMPS. https://www.aemps.gob.es/informa/la-aemps-lanza-meqa-una-herramienta-de-ia-pionera-en-la-respuesta-a-preguntas-sobre-medicamentos-de-uso-humano/ (accessed 11 February 2025).
