1 Introduction
The digitalization process today has become a pervasive part of all sectors of the economy and society, with the health sector being no exception. The number of software applications and devices developed to provide information, diagnose conditions, and manage diseases has increased substantially, demonstrating the importance of personalized care for patients.Footnote 1 Moreover, the development of emerging technologies, like artificial intelligence, is giving devices and applications even more potential for health monitoring and predictions based on the vast amounts of data being gathered from individuals through smartware and fitness apps.Footnote 2 The adoption of all these new products is not only being welcomed as a means to increase the efficiency and effectiveness of healthcare, but also as a way to tailor medical care to the distinctive molecular and genetic traits of individual patients.Footnote 3 At the individual level, these new devices are acting as tools that empower patients to decide on their own health.Footnote 4
However, at the sector level, many medical devices and applications are simply grouped and categorized under a single concept, which is not particularly reflective of the legal frameworks present in the EU. For example, in one study, the European Parliament broadly defines “medical AI” as a “type of AI which is focused on specific applications in medicine or healthcare”.Footnote 5 The study then goes on to refer specifically to AI solutions for clinical practice, biomedical research, public health, and health administration.Footnote 6 Such a definition therefore extends well beyond the qualification of medical devices. Hence, in this chapter, I will use the terminology set out in the Medical Device Regulation (MDR),Footnote 7 which defines “medical devices” as those devices and software applications that are intended to be used for one of the following objectives…
… diagnosis, prevention, monitoring, prediction, prognosis, treatment or alleviation of disease; diagnosis, monitoring, treatment, alleviation of, or compensation for, an injury or disability; investigation, replacement or modification of the anatomy or of a physiological or pathological process or state; and providing information by means of in vitro examination of specimens derived from the human body, including organ, blood and tissue donations, and which does not achieve its principal intended action by pharmacological, immunological or metabolic means, in or on the human body, but which may be assisted in its function by such means.
Additional indications are also listed in the ‘guidance documents’ of several regulations.Footnote 8 Although the list of purposes is extremely detailed and covers a wide range of cases, it still leaves space for devices and applications with a more general health objective—for instance, objectives that are not connected with a disease, injury, or disability but aimed simply at ensuring the user’s wellbeing. This last category will be the focus of this analysis.
There are many examples of health devices and applications outside the realm of medicine—for instance, nutrition tracking applications that help people monitor their food consumption, calculate nutrients, or count calories for a balanced diet. The most advanced applications contain embedded AI systems that can identify the components of food dishes via an image recognition system.Footnote 9 Other examples include apps for menstrual tracking, where women can monitor their bodily functions, their reactions to birth control, and their fertility. Again, some come with embedded AI-based chatbots that provide personalized replies about health issues, fitness programs, and nutrition plans. Notably, these apps have raised several concerns regarding ethical issues and data protection.Footnote 10
Another example at the boundary between health and medical devices is the case of social care or assistive robots.Footnote 11 These types of robots help users with a range of tasks, both physical and social.Footnote 12 For example, they can be used to conduct educational and recreational activities for elderly people, keeping them company, helping to ward off depression, or even monitoring for signs of Alzheimer’s before the disease is diagnosed.Footnote 13 The robots can have varying degrees of autonomy and may integrate several different technologies, from sensor systems to data-processing algorithms to cloud computing services and AI systems designed to react, chat, and respond to human emotions.
These three examples show the pervasiveness of health devices and applications outside medical settings. But their use in domestic settings, such as in households and on personal devices, requires attention in terms of the security vulnerabilities these devices can create—not just in terms of personal data breaches but also in terms of physical safety. According to Fosch-Villaronga and Mahler,Footnote 14 the type of impact that a cybersecurity incident with a social care robot may have depends on several factors: the origin of the incident, the target, the effect on the device, and the effect on the user. For instance, a malicious attack launched via a social care robot may target the device itself through a fault injection that partially damages the internal control system. Consequently, the robot may no longer react to requests or it could trigger unexpected behaviour. This could result in both physical and psychological harm to the user.
It is easy to transpose this analysis to the other examples mentioned above. For instance, a malicious attack could allow a third party to take control of the application, compromising private information. Although this type of attack may have limited physical impact, the incident may cause mental or emotional harm to the user.Footnote 15
As reported elsewhere,Footnote 16 the legal framework for cybersecurity in the EU is still piecemeal. Protection is only afforded through various interactions between different legal Acts that not only address cybersecurity but also numerous other facets of the digital world. For example, there is the Cyber Resilience Act,Footnote 17 which covers the cybersecurity requirements of connected products; the General Data Protection Regulation,Footnote 18 which covers data processing security; the recent Artificial Intelligence Act (AI Act),Footnote 19 which governs the security of embedded AI systems; and the European Health Data Space Regulation, which pertains to certain security aspects of wellness devices and applications.Footnote 20
The purpose of this chapter is to shed light on the interplay between these pieces of legislation so as to provide a holistic understanding of cybersecurity in the context of healthcare applications and devices. The ensuing analysis describes the challenges manufacturers face in complying with the EU’s regulatory provisions both in physiological situations (normal operation) and in pathological ones (when an incident occurs). The chapter ends with a discussion on the complexity of this regulatory landscape, which I argue should be simplified to better coordinate and systematise the obligations of all the stakeholders involved.
2 Applicable Legal Frameworks
2.1 The General Data Protection Regulation
The General Data Protection Regulation (GDPR) is the main framework covering the processing of health data. According to the definition provided in Article 4(15), the scope of this legislation is extremely wide.Footnote 21 This is confirmed by Recital 35, which clarifies that health data covers a subject’s past, current, and future health conditions.Footnote 22
The three examples provided in the previous section all fall under the scope of the GDPR. More specifically, the social care robot collects and processes data related to a person’s ability to react to queries. It also processes data to track improvements or deteriorations in its user’s abilities. Although not all the data collected qualify as health data, they are still related to the health status of the subject. Similarly, images of food collected by the nutrition tracking application are processed to reveal information about a person’s dietary habits. Period tracking applications collect information about sleep, mood, and sexual behavior. Consequently, such data allow the embedded system to infer health conditions and even create a profile of the user.
According to Article 32 of the GDPR, devices that collect personal data should comply with certain security requirements. These requirements include technical and organizational measures aimed at guaranteeing “a level of security appropriate to the risk”. This risk-based approach delegates the task of identifying which measures are appropriate to the data controller, who must consider a specific set of criteria when making their decisions. These include the current state of the art and best practices in data and communications security, such as industry standards and certifications, the costs of implementation, the nature of the data processing, and reasonably foreseeable threats to the rights and freedoms of individuals.Footnote 23 Given that health data qualify as a special category of data, their processing tends to carry a higher degree of risk and, consequently, typically requires a higher level of security.Footnote 24
Although the ultimate choice over which security measures to implement is left to the data controller, the legislation does provide some guidelines on the types of measures that could be implemented. These span both data-specific measures, such as pseudonymization and encryption, along with more general security measures, such as safeguarding confidentiality, protecting the integrity of the data, and ensuring that the processing systems are resilient. It is also worth noting that the European Data Protection Board has published some guidance documents.Footnote 25 However, the technology is evolving so rapidly that the value of these guidance documents is somewhat limited.Footnote 26
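To make these categories concrete, the following minimal sketch combines the two data-specific measures named in the GDPR, pseudonymization and encryption. It is illustrative only: the record content, key handling, and function names are assumptions rather than requirements derived from the Regulation, and a real deployment would rely on a dedicated key management system.

```python
import hashlib
import hmac

from cryptography.fernet import Fernet  # third-party: pip install cryptography

# Hypothetical secrets; in practice both belong in a key management system,
# stored separately from the data they protect.
PSEUDONYM_KEY = b"controller-held-secret"
ENCRYPTION_KEY = Fernet.generate_key()

def pseudonymize(user_id: str) -> str:
    """Replace a direct identifier with a keyed hash; only the holder of
    PSEUDONYM_KEY can re-link the data to the individual."""
    return hmac.new(PSEUDONYM_KEY, user_id.encode(), hashlib.sha256).hexdigest()

def encrypt_record(record: str) -> bytes:
    """Encrypt the health data itself, protecting confidentiality at rest."""
    return Fernet(ENCRYPTION_KEY).encrypt(record.encode())

# Example: a nutrition-tracking entry stored without a direct identifier.
subject = pseudonymize("user@example.com")
payload = encrypt_record("2025-01-10: 1850 kcal, low iron intake")
print(subject[:16], payload[:20])
```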
References to the state of the art imply the possibility of adopting standards developed at the international or European level, assuming that these reflect the most basic knowledge shared by the technical community and are therefore updated more often.Footnote 27 Note that, according to Micklitz,Footnote 28 the ‘state of the art’ refers to the lowest level of knowledge in a research field—i.e., it relates to findings that are generally accepted by the whole scientific community. This is the level considered in international standards. The state of science and technology sits at a higher level, which is then followed by pure scientific findings and scientific knowledge. In particular, scientific knowledge addresses research that has not yet become established wisdom and, therefore, might not have yet found its way into consumer-ready technological devices. Existing international standards include, for instance, OWASP’s Software Assurance Maturity Model,Footnote 29 the Building Security In Maturity Model,Footnote 30 and the NIST Privacy Framework.Footnote 31 These standards provide a detailed analysis of the different phases through which data security strategies can be adopted, endorsing a recurring process that evaluates the maturity level of the company adopting them. However, adopting any of these standards neither guarantees compliance with Article 32 nor excludes liability in case of damages.
First, Article 32(3) of the GDPR limits the means of demonstrating compliance to “an approved code of conduct […] or an approved certification mechanism”. On a literal reading, these two cases do not include international standards, leaving only the possibility of relying on certifications at the European level.Footnote 32 However, according to Article 32(1), international standards become relevant when looking at the state of the art in a specific field.Footnote 33 Consequently, a manufacturer that decides to adopt one of these standards will still have to justify their choice on the basis of the other criteria; namely, the cost of implementation, the nature of the data processing, reasonable foreseeability of a threat, and so on. Such evaluations are crucial if a data breach occurs.Footnote 34
The GDPR also provides a set of rules for what to do in the event of a data breach. First, the data controller needs to notify their relevant national data protection authority without undue delay and, when feasible, within 72 hours of the controller becoming aware of the breach.Footnote 35 That a data breach has occurred does not automatically imply that the controller has not complied with the provisions of the GDPR.Footnote 36 However, the data protection authority will evaluate whether any measures taken were appropriate and sufficient to prevent or mitigate the breach on a case-by-case basis. According to a recent study on the decisions made by the various data protection authorities,Footnote 37 there does not seem to be a common methodology; although, some important elements are highlighted. For example, data controllers should be conducting regular risk management exercises to test their security measures. Those responsible for data processing should also be using the most up-to-date tools and software. Lastly, the processing procedures should include authentication measures and logs should be kept to record all activities associated with the data.
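The 72-hour window and the logging practices highlighted by that study lend themselves to simple operationalization. The sketch below shows one hedged way a controller might track the notification deadline and keep an activity log; all field names are hypothetical.

```python
from datetime import datetime, timedelta, timezone

BREACH_NOTIFICATION_WINDOW = timedelta(hours=72)  # GDPR Art. 33(1)

def notification_deadline(awareness_time: datetime) -> datetime:
    """The 72-hour clock runs from the moment the controller becomes
    *aware* of the breach, not from the breach itself."""
    return awareness_time + BREACH_NOTIFICATION_WINDOW

activity_log: list[dict] = []  # hypothetical append-only processing log

def log_activity(subject_pseudonym: str, operation: str) -> None:
    """Record every operation on the data, as supervisory authorities expect."""
    activity_log.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "subject": subject_pseudonym,
        "operation": operation,
    })

aware = datetime(2025, 3, 1, 9, 0, tzinfo=timezone.utc)
print("Notify the supervisory authority by:", notification_deadline(aware))
```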
2.2 The Artificial Intelligence Act
The AI Act is the result of a long negotiation process aimed at establishing a legal framework for protecting the values, fundamental rights, and moral principles of EU residents when designing, developing, and deploying AI-based systems.Footnote 38 These principles are listed as including human agency and oversight, technical robustness and safety, privacy and data governance, transparency, diversity, non-discrimination and fairness, and social and environmental well-being.Footnote 39
The AI Act adopts a risk-based approach that distinguishes three types of AI systems: those bearing (i) an unacceptable risk; (ii) a high risk; and (iii) a low or minimal risk. The first category is prohibited, as it includes AI systems whose risks contravene EU values or violate fundamental rights. The bulk of the regulation is focused on high-risk AI systems, which are defined based on their intended purpose and modality of use—for instance, AI systems intended to be used as safety components within other products or stand-alone AI systems with implications for fundamental rights.
The AI Act explicitly acknowledges that AI-based systems can be embedded into medical devices.Footnote 40 As such, the legislation prescribes that any medical device classified as high risk under the MDR will have the same classification under the AI Act if it has an AI component. This implies that some devices classified as low-risk under the MDR or the In Vitro Device Regulation (IVDR) will fall under the limited-risk category of the AI Act.Footnote 41 However, these classifications become less clear when one looks at non-medical devices, such as our three health device examples. Healthcare devices may still fall under the scope of the AI Act when looking at the list of cases in Annex III of the regulation. For instance, social robots are generally equipped with sensors, cameras, and microphones that are not only able to identify types of movements and facial expressions but can also be used to infer human emotions.Footnote 42 Notably, emotion recognition is explicitly included in Annex III, point 1, (c).Footnote 43 Palmieri, however, offers a different interpretation, suggesting that social care robots might instead be subject to the Machinery Regulation, as such AI systems may qualify as a product or a safety component.Footnote 44
More doubtful is the application of the AI Act to the other two examples. Neither the nutrition tracking app nor the menstrual tracking app contains emotion recognition, and none of the other cases listed in Annex III apply. Thus, these examples would fall into the low-risk category defined in Article 50. Here, manufacturers of AI systems “intended to interact directly with natural persons” are required to inform the users interacting with such an AI-based system. However, this obligation may be circumvented if the interaction with the AI system is “obvious from the point of view of a natural person who is reasonably well-informed, observant and circumspect, taking into account the circumstances and the context of use”.Footnote 45 Apart from this disclosure, and the general obligation regarding AI literacy envisaged in Article 4, the Act applies no additional requirements to the design, development, or deployment of the AI system. Interestingly, this light-touch treatment applies to a wide range of non-medical devices that still provide a health function, a gap that matters not only from a cybersecurity perspective but more generally in terms of the overall objective of building a landscape populated with trustworthy and secure AI systems.Footnote 46
For those applications that fall into the scope of high-risk AI systems, the AI Act sets out a suite of legal requirements, including a risk management system (Article 9), data governance (Article 10), documentation and record-keeping (Articles 11 and 12), transparency and provision of information to users (Article 13), human oversight (Article 14), and robustness, accuracy, and cybersecurity (Article 15). This last provision addresses cybersecurity by distinguishing between different, yet intertwined, dimensions.
Article 15 requires that AI systems be resilient to any “errors, faults or inconsistencies that may occur within the system or the environment in which the system operates, in particular, due to their interaction with natural persons or other systems”. This clause mainly refers to the robustness of the AI system. Additionally, the system must be protected “against attempts by unauthorised third parties to alter their use, outputs or performance by exploiting system vulnerabilities”. This refers to cybersecurity. It is important to stress that the approach adopted in this provision does not clearly converge with the wider definition of cybersecurity, which involves defending against any threats designed to compromise a computer system or the security of the information it holds, without distinguishing between threat types and sources.Footnote 47 In fact, Article 2(1) of the Cybersecurity ActFootnote 48 defines cybersecurity as “the activities necessary to protect network and information systems, the users of such systems, and other persons affected by cyber threats”. As clearly described by Nolte et al.,Footnote 49 the distinction between robustness and cybersecurity is puzzling, as the former is not a distinct feature but rather a dimension of the latter. Moreover, the provision deepens the dichotomy between these two overlapping concepts. For one, the Act demands different thresholds for each concept: Article 15(4) requires that AI systems be “as resilient as possible”, while Article 15(5) requires that AI systems “be resilient” in terms of cybersecurity. Second, different measures need to be adopted for each: to ensure robustness, i.e., resilience, the manufacturer should adopt both organizational and technical measures, whereas, to ensure cybersecurity, only technical measures need to be adopted.Footnote 50
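The dual thresholds of Article 15 can be illustrated with a sketch. In the hypothetical inference service below, the robustness measure tolerates faulty inputs (Article 15(4)), while the cybersecurity measure rejects unauthenticated attempts to use the system (Article 15(5)). The API key, signature scheme, and function names are assumptions made purely for illustration.

```python
import hashlib
import hmac

API_KEY = b"deployer-issued-secret"  # hypothetical shared secret

def robust_predict(model, features: list[float]) -> float:
    """Robustness (Art. 15(4)): tolerate errors, faults, and inconsistent
    inputs instead of failing silently or producing arbitrary outputs."""
    if not features or any(f != f for f in features):  # reject empty input and NaNs
        raise ValueError("inconsistent input rejected")
    bounded = [min(max(f, -1e6), 1e6) for f in features]  # clip extreme values
    return model(bounded)

def secure_predict(model, features: list[float], signature: str) -> float:
    """Cybersecurity (Art. 15(5)): block unauthorized third parties from
    altering the system's use, outputs, or performance."""
    expected = hmac.new(API_KEY, repr(features).encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, signature):
        raise PermissionError("unauthenticated request rejected")
    return robust_predict(model, features)

# Usage with a stand-in model (the mean of the features):
model = lambda xs: sum(xs) / len(xs)
sig = hmac.new(API_KEY, repr([0.2, 0.4]).encode(), hashlib.sha256).hexdigest()
print(secure_predict(model, [0.2, 0.4], sig))
```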
This distinction has also been translated into a standardization request that the EU Commission presented to CEN/CENELEC, asking it to draft a European standard that would provide guidance to the manufacturers of AI systems on how to comply with the requirements of the Act.Footnote 51 Article 40 states that any AI system complying with the harmonized standards shall be presumed to conform to the requirements set out in Articles 9–15. Additionally, Article 41(2) provides for coordination between compliance with cybersecurity certification schemes adopted pursuant to the Cybersecurity Act and compliance with Article 15.Footnote 52
Once an AI system has passed its conformity assessment, it can be put on the market with the CE marking. Notably, according to Article 43 of the AI Act, the manufacturer of any AI system described under point 1 of Annex III can choose between a self-assessment (defined in Annex VI) and a conformity assessment conducted by a third party, a Notified Body (as described in Annex VII). But the obligations do not stop there. In the post-market surveillance phase, manufacturers still have obligations to ensure that they continue to comply with the requirements of the AI Act. For example, Article 72 prescribes that the manufacturer establish and document a post-market monitoring system that operates throughout the product’s lifetime. From a cybersecurity perspective, this monitoring system must check whether any vulnerabilities have emerged or are likely to emerge and, if so, whether those vulnerabilities will impact the device’s safety or performance. Accordingly, manufacturers have to create processes to provide users with mitigation measures, such as issuing patches or updates. Moreover, manufacturers are expected to adopt an incident response plan with clear protocols for addressing cybersecurity events and notifying stakeholders of potential mitigation measures when necessary.
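One hedged way to picture an Article 72 monitoring system is as an append-only log of vulnerabilities, their safety impact, and the mitigation communicated to users. The record structure below is purely illustrative and not drawn from the Act.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class VulnerabilityRecord:
    """One entry in a hypothetical post-market monitoring log (cf. Art. 72)."""
    identifier: str                # e.g. a CVE number
    affects_safety: bool           # does it impact safety or performance?
    discovered: datetime
    mitigation: str | None = None  # patch or update communicated to users

monitoring_log: list[VulnerabilityRecord] = []

def register_vulnerability(identifier: str, affects_safety: bool) -> VulnerabilityRecord:
    """Append a newly discovered vulnerability to the monitoring log."""
    record = VulnerabilityRecord(identifier, affects_safety,
                                 datetime.now(timezone.utc))
    monitoring_log.append(record)
    return record

entry = register_vulnerability("CVE-2025-0001", affects_safety=True)
entry.mitigation = "hotfix 1.4.2 pushed to all deployed devices"
```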
In case of a serious incident, the manufacturer must inform their national market surveillance authority within 15 days of becoming aware of the event (or from the moment that a causal link between the event and the AI system has reasonably been established). Article 3(49) defines a serious incident as…
… an incident or malfunctioning of an AI system that directly or indirectly leads to any of the following: (a) the death of a person, or serious harm to a person’s health; (b) a serious and irreversible disruption of the management or operation of critical infrastructure; (c) the infringement of obligations under Union law intended to protect fundamental rights; (d) serious harm to property or the environment.Footnote 53
Additionally, different timelines are provided in case of a person’s death (10 days) versus the case of a widespread infringement or a serious incident leading to irreversible disruption of critical infrastructure (2 days). Along with notifying the authorities, the manufacturer will also need to conduct investigations and eventually take corrective action to resolve any harm caused.
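These staggered deadlines can be read as a simple lookup from event type to reporting window, as the illustrative sketch below shows; the category labels are shorthand, not terms taken from the Act.

```python
from datetime import datetime, timedelta, timezone

# Reporting windows under the AI Act, keyed by shorthand labels.
REPORTING_WINDOWS = {
    "serious_incident": timedelta(days=15),
    "death_of_a_person": timedelta(days=10),
    "critical_infrastructure_disruption": timedelta(days=2),  # incl. widespread infringement
}

def reporting_deadline(event_type: str, awareness: datetime) -> datetime:
    """Deadline runs from awareness of the incident or of the causal link."""
    return awareness + REPORTING_WINDOWS[event_type]

aware = datetime(2025, 3, 1, tzinfo=timezone.utc)
print(reporting_deadline("death_of_a_person", aware))  # 2025-03-11
```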
2.3 The Cyber Resilience Act
Increased awareness at the European level over the security risks emerging from rising levels of interconnectedness between devices triggered the adoption of the Cyber Resilience Act (CRA). This issue was first identified in the EU’s Cybersecurity Strategy for the Digital Decade presented in 2020.Footnote 54 At that time, the European Commission presented a legislative intervention aimed at preventing and mitigating the impact of malicious attacks designed to exploit vulnerabilities in any product that uses a remote data connection to a device or network, whether that device operates within a closed, dedicated network or can be accessed via the Internet. As a matter of clarity, Article 3(2) of the CRA defines remote data processing as “data processing at a distance for which the software is designed and developed by the manufacturer […] and the absence of which would prevent the product with digital elements from performing one of its functions”.
Meanwhile, Article 1 of the CRA defines the scope of the regulation as ensuring the cybersecurity of ‘products with digital elements’ made available on the European market. Article 3(1) goes on to define such products as “any software or hardware product and its remote data processing solutions, including software or hardware components to be placed on the market separately”. It is important to stress that the scope of the regulation excludes products with digital elements that fall under the definition of medical devices.Footnote 55 This is an important distinction that allows the manufacturers of medical devices to focus solely on the safety and security requirements prescribed in the MDR. Still, our three examples fall under the scope of the CRA. The social care robot processes data and connects to the cloud, where it can be accessed by the manufacturer to verify that the device is operating correctly or by a medical expert to check the condition of the patient. As such, the robot is clearly a connected product. Similarly, the nutrition and period tracking apps are not only connected to the cloud in order to store, collect, and process data, but are also able to connect directly to other devices. For instance, the nutrition-tracking application might be connected to a smartwatch or even a smart fridge to identify the user’s dietary habits, while the period-tracking app can be linked to a partner’s smartphone to support shared decisions about pregnancy.
Further, these products must comply with both the pre-market and post-market requirements set out in the CRA. The Act addresses both the design and development phases, as well as the monitoring and updating activities carried out after the product is available on the market.
Manufacturers are required to design, develop, and manufacture any product with digital elements in such a way as to ensure an appropriate level of cybersecurity; this necessarily means that no known exploitable vulnerabilities should be present and that secure default configurations should be available. These obligations also extend across the supply chain, as the manufacturer is responsible for any third parties that participate in the design or development of any part of the products—whether they do so collaboratively or independently.Footnote 56 The basic requirements are set out in Annex II and include stipulations related to both security and the handling of vulnerabilities. In terms of security, the list consists of “security-by-default” measures, confidentiality protection, integrity, availability of the data and the networks, data minimization measures, resilience measures (particularly against DoS attacks), safeguards against network effects, records of internal activity, and data portability. Additionally, products should be protected from unauthorized access through appropriate control mechanisms, and data should be kept confidential through methods such as encryption.
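By way of illustration, the “security-by-default” items of Annex II can be imagined as a shipping configuration in which every protection is enabled out of the box. The sketch below is a hypothetical reading of those items, not an implementation mandated by the CRA.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DeviceConfig:
    """Hypothetical shipping configuration for a connected health device,
    mirroring the security items of CRA Annex II."""
    forced_password_reset: bool = True   # no known default credentials
    encryption_at_rest: bool = True      # confidentiality of stored data
    encryption_in_transit: bool = True   # confidentiality on the network
    telemetry_minimized: bool = True     # data minimization
    rate_limiting: bool = True           # resilience against DoS attacks
    audit_logging: bool = True           # records of internal activity

    def disabled_protections(self) -> list[str]:
        """List any protection switched off relative to the secure default."""
        return [name for name, enabled in vars(self).items() if not enabled]

print(DeviceConfig().disabled_protections())                  # [] -> secure by default
print(DeviceConfig(rate_limiting=False).disabled_protections())
```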
As emerges clearly from the list in Annex II, the requirements are not sufficiently detailed to identify all the technical and organizational measures that should be adopted at each stage of a product’s design and development. However, the Annex does set out a preliminary framework that can be further defined through risk management systems—especially those that evaluate the readiness of products against different types of cybersecurity threats.Footnote 57 Indeed, the technical documentation that manufacturers need to prepare before any product can be placed on the market must contain an assessment of the relevant cybersecurity risks and a description of the means by which the manufacturer intends to mitigate those risks. This documentation is not only passed on to the Notified Body as part of the third-party assessment procedure but must also be kept up to date by the manufacturer and provided to the market surveillance authority on request.
The role of standards is also crucial here, and the CRA directly acknowledges that fact. Article 27 indicates two possible approaches to standardizing the field. The first is for the EU Commission to ask European standard-setting organizations to draft harmonized standards. The second is for manufacturers to adopt common specifications drafted by the EU Commission. However, the second option is only a fallback in case the first is not taken up. The first option is more reliable and transparent, as the process envisaged for developing European standards is based on the principles of consensus, openness, transparency, national commitment, and technical coherence.Footnote 58
At the end of the conformity assessment procedure, the manufacturer should receive a “Declaration of Conformity” for its product. This is simply a document that confirms the product complies with the requirements set out in Annex II.Footnote 59 Depending on the class of risk of the product, the type of conformity assessment may vary, ranging from an internal control procedure to a full conformity assessment based on quality assurance. The examples considered by this analysis are excluded from the possibility of adopting an internal control procedure. According to Article 7(2) of the CRA, certain products are deemed to be “important”. These important products are defined as those with digital elements that…
… perform a function which carries a significant risk of adverse effects in terms of its intensity and ability to disrupt, control or cause damage to a large number of other products or the health and safety of a large number of individuals through direct manipulation, such as a central system function, including network management, configuration control, virtualization, processing of personal data.
Point (19) of Annex III goes on to specify the following as a Class I important product: “personal wearable products to be worn or placed on a human body that have a health monitoring (such as tracking) purpose and to which Regulation (EU) 2017/745 or (EU) No 2017/746 do not apply”. Therefore, wearable health devices are subject to the conformity assessment procedures referred to in Articles 32(2) and (3) of the CRA—that is, third-party conformity assessments conducted by a Notified Body. All these Articles, Annexes, and Points serve to guarantee that the compliance certification procedures are both transparent and conducted according to harmonized standards.
Moreover, the manufacturer’s obligations do not end after a Declaration of Conformity is issued. Once compliant, the product is subject to post-market surveillance controls by specifically designated authorities, which can take all appropriate corrective actions to bring it (back) into compliance, withdraw it from the market, or recall it within a reasonable period. Further, manufacturers are also obliged to notify their national Computer Security Incident Response Team (CSIRT) of any exploited vulnerabilities or incidents impacting the product’s security within 24 hours of becoming aware of such events. CSIRTs are national bodies created according to the NIS 2 DirectiveFootnote 60 and designated to ensure a high common level of cybersecurity across the Union. Such notifications should provide all essential information about the event along with any corrective and/or mitigating measures taken. This should allow the informed CSIRT to provide appropriate support to resolve the event while also disseminating relevant information about cybersecurity incidents to the various market surveillance authorities and other national CSIRTs—especially in the case of cross-border cybersecurity events. Users might also be informed but, generally, only where their collaboration is required to deploy corrective measures.
2.4 The European Health Data Space
The Regulation governing the European Health Data Space (EHDS) was first presented in May 2022 and adopted in January 2025. There are two main purposes to this Regulation. The first is to expand the use of electronic health data collected from individuals in order to deliver health care to them. The second is to assist and improve research, innovation, policymaking, patient safety, personalized medicine, official statistics, and regulatory activities by sharing health datasets. Both objectives are to be achieved by creating a ‘data space’ where health data are stored, managed, and operationalized to support research, overcome the challenges of limited data interoperability, and bridge the historically fragmented rules allowing access to data for scientific purposes. The Regulation also removes barriers for individuals who wish to access or exercise control over their health data.Footnote 61
The EHDS Regulation is relevant to all three of our examples as each meets the definition of a wellness application. Article 2(2)(ab) defines this as…
… any software, or any combination of hardware and software, intended by the manufacturer to be used by a natural person, for the processing of electronic health data, specifically for providing information on the health of natural persons, or the delivery of care for purposes other than the provision of healthcare.
This definition acknowledges the fact that wellness applications may collect several types of data that can be used to infer the health conditions of the user. It is important to highlight that wellness applications have been purposefully included within the scope of the EHDS so as to incorporate any data collected through interoperable exchange systems, i.e., electronic health record (EHR) systems. In fact, EU initiatives on EHR systems date back to 2008, when the shift from paper-based patient documentation to digital records began to be encouraged. Ultimately, the objective is to make each patient’s EHR accessible throughout Europe. However, this goal has not yet been achieved, as many of the information systems used at the national level are incompatible both in format and in the technical standards used.Footnote 62
Notably, the manufacturers of wellness applications are under no mandatory obligation to certify their products. The only exception is if the manufacturer wants to include their product within the infrastructure of electronic health data for primary use. Under these circumstances, the manufacturer will be required to comply with the requirements outlined in Article 30 and Annex II of the EHDS Regulation.Footnote 63 These requirements pertain to processing data in a way that ensures the data are safe and secure; that no unauthorized access is possible; that identification and authentication mechanisms (including role-based controls) are robust; that activities involving the data are logged in accessible records; that access to personal data is restricted; that digital signature mechanisms are in place; and that data retention periods are advised to relevant stakeholders. To receive a “Declaration of Conformity”, the manufacturer only needs to conduct a self-assessment. But, although this idea of self-assessment is designed to enhance the autonomy and responsibility of the manufacturer, it carries the risk of producing low-reliability evaluations.Footnote 64
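The authentication, role-based access, and logging requirements of Annex II can be sketched in a few lines. The role model and permission names below are hypothetical; they merely illustrate the kind of access control and record-keeping the Regulation envisages.

```python
from datetime import datetime, timezone

# Hypothetical role model for a wellness application joining the EHDS infrastructure.
ROLE_PERMISSIONS = {
    "patient": {"read_own_record"},
    "clinician": {"read_own_record", "read_assigned_records", "annotate"},
    "administrator": {"manage_accounts"},  # deliberately no clinical-data access
}

access_log: list[tuple[str, str, str, bool]] = []  # accessible record of activity

def authorize(user: str, role: str, operation: str) -> bool:
    """Role-based access check with logging, in the spirit of Annex II EHDS."""
    allowed = operation in ROLE_PERMISSIONS.get(role, set())
    access_log.append(
        (datetime.now(timezone.utc).isoformat(), user, operation, allowed)
    )
    return allowed

assert authorize("alice", "patient", "read_own_record")
assert not authorize("alice", "patient", "read_assigned_records")
```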
In the EHDS Regulation, monitoring and enforcement powers are allocated to the various market surveillance authorities.Footnote 65 These bodies are in charge of verifying compliance with the essential requirements after a wellness application is put on the market.Footnote 66 Yet a step in the system of control is missing here, as wellness applications seem to be excluded from any obligation to notify a serious incident. Serious incidents are defined as …
… any malfunction or deterioration in the characteristics or performance that directly or indirectly leads, might have led or might lead to the death of a natural person or serious damage to a natural person’s health; a serious disruption of the management and operation of critical infrastructure.
That said, Article 44 of the EHDS Regulation prescribes that any manufacturer who contributes to an EHR system must notify a serious incident no later than 15 days after becoming aware of it, or immediately after establishing a causal link between the application and the incident (or the reasonable likelihood of such a link). Conversely, Article 47 of the Regulation focuses on the interoperability perspective; if this narrow interpretation is adopted, the manufacturer of an offending wellness application has no obligation to conduct subsequent monitoring or to correct the harm where the product affects the health and safety of patients.Footnote 67
3 The Way Forward Between Overlapping Obligations
The above overview shows that the current legal frameworks applicable to health devices falling outside the definition of medical devices are still extremely piecemeal. They impose different requirements and notification obligations that overlap, if not duplicate one another, depending on the type of event affecting the security of the device or application.
As depicted in Table 1, the applicable cybersecurity requirements are scattered across the different regulations, leaving to manufacturers the task of identifying the more detailed standards that might ensure compliance. From the perspective of technological neutrality, this approach might be justified, given that legislative requirements can quickly become irrelevant in the face of rapid technological developments. However, it also paves the way for a process of standardization that relies strongly on the ability of the standard-setting bodies, whether international or European, to translate the principles and general definitions outlined in the legislation into operational requirements.Footnote 68
From another perspective, assessing compliance with the selected standards will fall under the purview of the authorities or Notified Bodies indicated in each legislative instrument. Except in the case of data protection, where no ex-ante assessment by third parties is needed, and in the cases where self-assessment suffices, Notified Bodies will have the task of verifying that any product placed on the market complies with all the various regulations in place.
Moreover, in a pathological situation, i.e., when an incident occurs that may affect the device’s functioning, manufacturers may have to clearly classify the event according to the different definitions of what constitutes an ‘incident’. Consequently, the relevant authorities may need to be notified as soon as the event is classified, as Table 2 shows.
This obligation to notify the authorities has a twofold objective. In the short term, the aim is to discover attacks and identify underlying vulnerabilities so as to allow a coordinated response to a cybersecurity incident. In the long term, the aim is to improve the level of data security and enhance cyber preparedness.Footnote 69 The underlying assumption, then, is that acquiring information related to a cybersecurity incident could help to reveal other (potential or ongoing) incidents involving other actors in the sector. Yet, if this is the case, duplicating the notification obligations to different authorities without also imposing information-sharing obligations reduces the likelihood of achieving such results.
This picture clearly shows the results of stratifying different sets of regulations. Although the objectives of each piece of legislation are legitimate, a lack of coordination has paved the way for overlaps and duplication. This critical appraisal has also been acknowledged by the EU Commission, which, in its latest work programme,Footnote 70 included among its objectives a proposal “to remove inefficient requirements for paper formats in product legislation and build synergies and consistency for data protection and cybersecurity rules”. It remains to be seen how this will be achieved, as coordination should consider several aspects. One crucial element will be the compliance assessments, which are delegated to the Notified Bodies. Here, it will be important to ensure that the expertise and competence of the Notified Bodies are up to the task. It is more than probable that such bodies will be providing services across different technologies and sectors; their expertise and knowledge will therefore be crucial to providing a reliable assessment of compliance with the legislation.Footnote 71
Moreover, the legislation places different authorities in charge of reporting tasks. Yet centralizing the notification procedures through one unified platform could simplify a manufacturer’s obligations in case of incidents. Such a platform could also help the different authorities communicate and coordinate their efforts depending on the type of event, assist with any subsequent monitoring required, and even support sanctioning activities. Note that the initial proposal for the CRA included a step toward centralization: the notification mechanism was envisaged as a central point of reference for collecting notifications across all EU member states, and the national CSIRT bodies only needed to be informed in a limited number of cases. However, this approach was later modified, and the role of ENISA became ancillary.Footnote 72 A less ambitious intervention would be a coordinated effort to create a common but flexible template for notifications, one that could adapt to different types of incidents while containing all the information each notified authority requires, as sketched below. This would allow manufacturers to avoid duplicating effort while ensuring all information is reported consistently.
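To illustrate what such a common template might contain, the sketch below gathers the fields shared across the notification regimes discussed above, with regime-specific details relegated to a free-form extension. It is a speculative illustration, not a template drawn from any of the legal texts, and all values shown are invented.

```python
from dataclasses import dataclass, field

@dataclass
class IncidentNotification:
    """A speculative cross-regime notification template: shared core fields
    are filled once; regime-specific details go into `extras`."""
    manufacturer: str
    product: str
    event_summary: str
    awareness_time: str                   # ISO 8601 timestamp
    corrective_measures: str
    regimes: list[str] = field(default_factory=list)   # e.g. ["GDPR", "CRA", "AI Act"]
    extras: dict[str, str] = field(default_factory=dict)

note = IncidentNotification(
    manufacturer="ExampleCare Ltd",       # all values are illustrative
    product="social care robot, firmware 3.1",
    event_summary="fault injection partially disabled the control system",
    awareness_time="2025-03-01T09:00:00Z",
    corrective_measures="firmware patch issued; affected users informed",
    regimes=["CRA", "AI Act"],
    extras={"CSIRT_reference": "pending"},
)
```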
In any case, the choices to achieve a simplified and more consistent cybersecurity framework must consider the impact on all the actors involved, from the manufacturers to the Notified Bodies to the market surveillance authorities and beyond.
References
Amelang K (2022) (Not) safe to use: insecurities in everyday data practices with period-tracking apps. In: Hepp A, Jarke J, Kramp L (eds) New perspectives in critical data studies: the ambivalences of data power. Springer International Publishing, Cham, pp 297–321. https://doi.org/10.1007/978-3-030-96180-0_13
Biasin E, Yaşar B, Kamenjašević E (2023) New cybersecurity requirements for medical devices in the EU: the forthcoming European Health Data Space, Data Act, and Artificial Intelligence Act. Law Technol Hum 5:43–58. https://doi.org/10.5204/lthj.3068
Burden H, Stenberg S, Flink K (2024) When AI meets machinery – the role of the notified body. Rise rapport 56
Burri M, Zihlmann Z (2023) The EU Cyber Resilience Act – an appraisal and contextualization. Zeitschrift für Europarecht (EuZ):B1–B45
Burton C (2020) Article 32 Security of processing. In: Kuner C, Bygrave LA, Docksey C, Drechsler L (eds) The EU General Data Protection Regulation (GDPR). Oxford University Press, New York, pp 630–639. https://doi.org/10.1093/oso/9780198826491.003.0068
Busch F, Kather JN, Johner C, Moser M, Truhn D, Adams LC, Bressem KK (2024) Navigating the European Union Artificial Intelligence Act for Healthcare. npj Digit Med 7:1–6. https://doi.org/10.1038/s41746-024-01213-6
Cantero Gamito M (2018) Europeanization through standardization: ICT and telecommunications. Yearb Eur Law 37:395–423. https://doi.org/10.1093/yel/yey018
Carovano G, Meinke A (2023) Improving Fairness and Cybersecurity in the Artificial Intelligence Act
Casarosa F (2024a) Cybersecurity of Internet of Things in the health sector: understanding the applicable legal framework. Comput Law Secur Rev 53:105982. https://doi.org/10.1016/j.clsr.2024.105982
Casarosa F (2024b) European Health Data Space – is the proposed certification system effective against cyber threats? Eur J Risk Regul:1–11. https://doi.org/10.1017/err.2024.25
Casarosa F (2024c) L’armonizzazione degli obblighi di notifica: il DDL Cybersicurezza verso la NIS 2. Rivista italiana di informatica e diritto 6:4–4. https://doi.org/10.32091/RIID0146
Chen T, Carter J, Mahmud M, Khuman AS (eds) (2022) Artificial intelligence in healthcare: recent applications and developments, brain informatics and health. Springer Nature Singapore, Singapore. https://doi.org/10.1007/978-981-19-5272-2
Chiara PG (2023) Il Cyber Resilience Act: la proposta di regolamento della Commissione europea relativa a misure orizzontali di cybersicurezza per prodotti con elementi digitali. Rivista italiana di informatica e diritto 5:11. https://doi.org/10.32091/RIID0108
Corrales Compagnucci M, Wilson ML, Fenwick M, Forgó N, Bärnighausen T (eds) (2022) AI in eHealth: human autonomy, data governance and privacy in healthcare. Cambridge University Press, Cambridge