International Society of Medical AI
Budget
Not declared
EP Access
0
accredited persons
Staff
5
1.2 FTE
EU Grants
None
Mission & Goals
We are a doctor-led non-profit promoting the safe, ethical, and clinically governed use of AI in healthcare. Our mission is to ensure that AI enhances patient care while preserving clinical responsibility, medical ethics, and human oversight. We support regulators and policymakers through expert consultation, develop standards and accreditation frameworks for AI readiness in hospitals, and offer structured education for clinicians on AI use. We advocate for evidence-based, transparent, and inclusive governance of AI systems, and serve as a clinical interface between healthcare professionals, developers, and EU institutions. Our work spans regulation, education, policy engagement, and certification, all aimed at protecting patients while fostering responsible innovation.
EU Legislative Interests
Our organisation focuses on recent legislative proposals, strategies, and regulatory frameworks adopted by the European Commission and co-legislators that shape the development, deployment, and oversight of artificial intelligence in healthcare. Our interest representation activities target the following EU initiatives:

- Proposal for a Regulation laying down harmonised rules on artificial intelligence – COM(2021) 206 final. We follow this Regulation as it sets out obligations for high-risk AI systems, including those used in clinical care. Our interest lies in how it addresses medical safety, post-market monitoring, and human oversight in AI-enabled diagnostics and decision support.
- Proposal for a Regulation on the European Health Data Space – COM(2022) 197 final. We engage with this proposal due to its relevance for access to and secondary use of electronic health data. Our focus includes safeguarding data use in AI training and ensuring clinician involvement in data governance frameworks.
- Regulation (EU) 2017/745 on medical devices and Regulation (EU) 2017/746 on in vitro diagnostic medical devices. These frameworks are central to our work on conformity assessment of AI-based clinical software and diagnostics. We monitor their application to software as a medical device, particularly under evolving guidance.
- Proposal for a Directive on adapting non-contractual civil liability rules to artificial intelligence – COM(2022) 496 final. We are interested in the impact of this Directive on liability in cases involving AI use in healthcare. We focus on how it defines causality, harm, and burden of proof in clinical contexts.
- Proposal for a Directive on liability for defective products – COM(2022) 495 final. This proposal affects safety and redress for patients harmed by AI-related failures. We follow its interaction with broader product safety and risk frameworks.
- Proposal for a Regulation establishing the Digital Europe Programme (2021–2027) – COM(2018) 434 final. We follow this Regulation for its role in funding AI development in healthcare and supporting digital skills acquisition among clinicians and institutions.
- Communication from the Commission on a European strategy for data – COM(2020) 66 final. We track this strategy to assess its long-term impact on the governance of health data, particularly with respect to transparency, access conditions, and the reuse of sensitive data for algorithm development.
- Communication from the Commission on building a European Health Union – COM(2020) 724 final. This Communication is relevant to our interest in AI-enabled health preparedness tools, cross-border data exchange, and the integration of digital infrastructure in public health.
- European Parliament resolution of 20 October 2020 on a civil liability regime for artificial intelligence (2020/2014(INL)). We monitor the European Parliament’s recommendations for a harmonised framework for civil liability in AI, focusing on proposals that affect clinical accountability and professional standards.
- European Parliament legislative resolution of 13 March 2024 on the proposal for a Regulation on artificial intelligence. We assess this resolution’s amendments, particularly those clarifying risk classifications for medical applications, mandatory impact assessments, and human control requirements.
- Commission Staff Working Document: Liability for Artificial Intelligence and other emerging digital technologies – SWD(2020) 64 final. This document informs our understanding of the Commission’s position on risk, causality, and foreseeability in the use of AI in clinical practice.
Our activities target the evolution, implementation, and oversight of these files, with the aim of promoting a regulatory environment in the EU that protects patients, supports clinicians, and ensures that healthcare AI systems remain safe, accountable, and ethically governed.
Communication Activities
Our organisation’s communication activities focus on the legislative files listed in this registration, particularly the Proposal for a Regulation laying down harmonised rules on artificial intelligence (COM(2021) 206 final), the Proposal for a Regulation on the European Health Data Space (COM(2022) 197 final), the AI-related liability directives (COM(2022) 495 final and COM(2022) 496 final), and Communications COM(2020) 66 final and COM(2020) 724 final. Our aim is to contribute to the understanding and implementation of these legislative acts from a clinical and ethical perspective, with a focus on patient safety and professional responsibility.

In connection with these files, we have prepared policy-facing materials intended to support structured dialogue with EU institutions. These include briefing notes analysing implications of the AI Act and EHDS for clinical oversight, liability, conformity assessment, and secondary data use. They are based on clinician experience, legal commentary, and alignment with EU regulatory frameworks. These materials are intended for sharing with relevant Commission services, Members of the European Parliament, and national experts.

We are preparing a communication series titled “AI in European Healthcare: Safe, Ethical, Accountable”. This will include a short explainer on deployer obligations under the AI Act, a briefing on EHDS–GDPR interactions in secondary use of health data, and a commentary on emerging liability rules. These will be published online and shared with stakeholders.

We are also organising a recurring thematic roundtable series beginning in late 2024. These events will address clinical implementation challenges arising from the AI Act, EHDS, and liability frameworks. Topics will include high-risk system classification, CE-marking processes, and ethical governance of health data flows. Summary notes of each roundtable will be compiled and shared with invited stakeholders.
Additional communication materials in preparation include:
- “Clinician-Led Oversight in the AI Act: Interpretation and Implementation”
- “EHDS and Secondary Data Use: A Clinical Risk Perspective”
- “AI Liability and Professional Responsibility in Cross-Border Care”

These materials will be disseminated with clear references to the relevant EU legislative texts and in a format appropriate for institutional use.

We have not conducted any media campaigns, financial sponsorships, surveys, or polling targeting EU staff. We have not held physical events on Parliament premises. Our communication activities are conducted independently, without reference to or use of institutional logos or branding, and are fully aligned with the standards of the Transparency Register. All activities described relate specifically to the legislative proposals listed and fall within the scope of structured interest representation directed toward EU institutions.
Interests Represented
Promotes their own interests or the collective interests of their members
Member Of
Our organisation is an independent, non-profit professional society established to represent clinical, ethical, and scientific perspectives on the development and regulation of artificial intelligence in healthcare. While not affiliated with or funded by any corporate group, we operate through a network of independent clinicians, legal scholars, and academic advisors who collaborate on the design of responsible AI frameworks within the European context.

We are not currently members of any trade federation, political group, or lobbying association. However, we maintain informal working-level contacts with relevant actors across academia, civil society, and standardisation communities. Where appropriate, we cooperate with national and European stakeholders engaged in public consultations or policy advisory processes. These include clinical and digital health working groups convened by independent think tanks, academic networks, and policy forums, including those monitoring implementation of the Artificial Intelligence Act, the European Health Data Space Regulation, and related liability frameworks.

We support the principle of inclusive, evidence-based dialogue and may from time to time associate with initiatives convened by non-commercial organisations that align with our remit. These engagements are technical or ethical in nature and do not involve financial sponsorship, institutional control, or campaign coordination. We do not currently contribute to any joint declarations or position alliances with external registered entities. Any future participation in such platforms will be disclosed in accordance with the Transparency Register rules.

We are not part of any umbrella organisation, platform, or parent body. Nor do we contribute to or sponsor any external lobbying operation, public affairs firm, or interest representation mechanism operated by other entities.
Our external contributions are limited to peer-reviewed publications, public lectures, and clinical education materials authored independently and offered freely in the public interest. Where applicable, we may cooperate in the future with standardisation bodies recognised under Regulation (EU) No 1025/2012 or participate in working groups coordinated by European research programmes or ethical advisory bodies. At the time of this registration, we do not hold membership status in any of these structures.
Organisation Members
At the time of registration, our organisation does not maintain a publicly accessible directory of members, as we are in the early phases of formal institutional development. Membership has been extended on an individual basis to a selected group of qualified professionals, including medical practitioners, academic researchers, clinical ethicists, and legal experts, all of whom have demonstrated a sustained interest in the ethical, safe, and professionally governed implementation of artificial intelligence in healthcare. These individuals participate in a voluntary and non-remunerated capacity and are not acting on behalf of any commercial entity or institution. All interest representation activities are conducted under the sole authority of the registered organisation and are fully independent of any external commercial or political influence.

At present, we do not represent any legal persons, industry groups, or commercial associations. Our structure does not include corporate or institutional membership classes. We also do not have national chapters, regional sections, or affiliated umbrella bodies operating under a federated model. All activities reported here are carried out under the direct oversight of the founding board and in accordance with the organisation’s constitutional framework.

Should our organisation expand to include formal membership categories, affiliate institutions, or regional divisions, a full and updated list will be made available via our website and disclosed here in line with Transparency Register guidelines. If and when such a membership list exists, we will provide a direct hyperlink to a publicly accessible webpage detailing the names, categories (individual/legal entity), and roles of all current members and associated structures. We are committed to full transparency regarding any future expansion of our membership base or formal association with other entities.
All relevant relationships, including any that may have implications for interest representation or coordinated policy activities, will be disclosed promptly.
Additional Information
The organisation is newly formed and has not closed a financial year. All activities to date have been carried out by volunteers. No EU grants have been received and no intermediaries have been used. The estimated cost reflects in-kind time, basic operational tools, and early-stage communications.
Commissioner Meetings
No recorded meetings with EU commissioners.