Clinical validation of artificial intelligence (AI) solutions for treatment and care


Topic identifier: HORIZON-HLTH-2021-DISEASE-04-04

Programme: Horizon Europe Framework Programme (HORIZON)
Call: Tackling diseases (2021)

Topic description

Expected Outcome:

This topic aims to support activities that enable or contribute to one or several expected impacts of Destination 3 “Tackling diseases and reducing disease burden”. To that end, proposals under this topic should aim to deliver results that are directed towards, tailored to and contribute to all of the following expected outcomes:

  • Health care professionals employ safer and evidence-based clinical decision support systems for affordable treatment, including home-based care.
  • Health care professionals better predict patients’ (long-term) response, including adverse side effects of a specific personalised treatment.
  • Patients and carers have access to disease-specific communication packages informing about a disease and the proposed treatment.
  • Clinical guidelines are enhanced thanks to novel, clinically validated and (cost-) effective AI solutions.
Scope:

Applying trustworthy AI[1] in healthcare contexts generates a multitude of benefits, including more effective disease management through optimised personalised treatments and assessment of health outcomes.

Based on existing (pre)clinical evidence, proposals should focus on implementing clinical studies to validate AI-based solutions by comparing their benefits against standard-of-care treatments in non-communicable diseases. Proposals should pay special attention to the usability, performance and safety of the AI-based solutions developed, and above all to their clinical evaluation and (cost-)effectiveness, in view of their inclusion in current clinical guidelines for personalised treatments under the current EU regulatory framework.

Proposals should address all of the following:

  • Supporting the clinical development, testing and validation of AI-assisted treatment and care options, thereby assisting in clinical decision-making;
  • Timely end-user inclusion (e.g. patients, caregivers and health care professionals) throughout the clinical development of the AI-based solutions and the clinical validation process, considering the potential of social innovation approaches to support inclusion and dialogue between patients, carers and health care professionals;
  • Enhancing accurate prognosis for and response to a specific personalised treatment, thereby providing a solid risk assessment (e.g. potential adverse events, side effects, expected treatment compliance and adherence over time compared to standard care);
  • Inclusion of sex and gender aspects, age, socio-economic, lifestyle and behavioural factors, and other social determinants of health as early as possible, including in the early stages/phases of development;
  • Assessing potential manual or automated biases, with a view to large-scale uptake;
  • Integration of an extensive information and communication package about AI-assisted treatment options, highlighting their relevance for patients and carers;
  • Measuring the (cost-)effectiveness of AI-assisted development of therapeutic strategies and its implementation in clinical practice.

Proposals should describe a pathway for establishing standard operating procedures for the integration of AI in health care (e.g. for supporting clinical decision-making in treatment and care). Proposals are encouraged to consider multidisciplinary approaches and allow for intersectoral representation. Proposals have to ensure that resulting data comply with the FAIR[2] principles and data generated by the AI-based solutions are in line with established international standards.

Integration of ethics and health humanities perspectives is essential to ensure an ethical approach to the development of robust, fair and trustworthy AI solutions in health care, taking into account underrepresented patient populations. In relation to the use and interpretation of data, special attention should be paid to systematic discrimination or bias (e.g. due to gender or ethnicity) when developing and using AI solutions. Proposals should also focus on the traceability, transparency and auditability of AI algorithms in health. The international perspective should be taken into account, preferably through international collaboration, to ensure the comprehensiveness, interoperability and transferability of the developed solutions.

Where relevant, applicants are highly encouraged to deliver a plan for the regulatory acceptability of their technologies and to interact at an early stage with the relevant regulatory bodies. SME participation is encouraged.

Cross-cutting Priorities:

EOSC and FAIR data
Social Innovation
International Cooperation
Socio-economic science and humanities

[1] High-Level Expert Group on Artificial Intelligence, set up by the European Commission, Ethics Guidelines for Trustworthy AI, document made public on 8 April 2019.

[2] FAIR data are data that meet the principles of findability, accessibility, interoperability and reusability.

Keywords

Social Innovation
Health policy and services
International Cooperation
Social sciences and humanities
EOSC and FAIR data
Artificial intelligence, intelligent systems, multi agent systems
Computer sciences, information science and bioinformatics

Tags

therapy and care, AI, clinical studies, decision support, non-communicable diseases
