- How AI enhances DLP through behavioral analytics, NLP, and predictive risk detection
- The role of DSPM in continuous PHI discovery, classification, and policy enforcement
- How to manage shadow AI and generative AI risks in healthcare
- How to mitigate adversarial AI attacks that target security controls
- How to align AI-driven security with HIPAA, HITECH, GDPR, and HITRUST frameworks
Best For:
- Healthcare Security Architects
- Privacy and Compliance Officers
- Data Protection Officers
- Cloud Security Engineers
- Risk and Governance Professionals
- Healthcare IT Leaders
Introduction
AI-powered security is entering a new chapter, defined not just by generating content but by taking independent, goal-driven actions. Amid ongoing talent shortages, organizations are increasingly leveraging AI to enhance the productivity of existing resources. At the same time, AI is being adopted as a critical tool to strengthen cybersecurity defenses, enabling faster threat detection, proactive risk management, and more efficient incident response. This publication focuses on the growing role of AI and its implications for Data Loss Prevention (DLP) and Data Security Posture Management (DSPM) within healthcare. By combining these AI-enabled solutions, healthcare organizations can strengthen their cybersecurity posture, reduce the risk of data breaches, and maintain patient confidence. AI enhances DLP through intelligent pattern recognition and behavioral analytics, improving visibility and control over data movement, and enables organizations to continuously evaluate, monitor, and improve their data security posture. In addition to data controls, healthcare organizations must consider application-layer risks. Vulnerabilities identified through Static Application Security Testing (SAST), Dynamic Application Security Testing (DAST), Software Composition Analysis (SCA), secrets scanning, and Development, Security, and Operations (DevSecOps) practices often represent the initial access vector for PHI exposure. Integrating application security telemetry with DLP and DSPM provides a unified approach to detecting and mitigating data exfiltration across the full application lifecycle.
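One way to picture this integration is to correlate application security findings with a DSPM data inventory so that vulnerabilities in services that touch PHI rise to the top of the remediation queue. The sketch below is illustrative only; the record shapes and field names (`service`, `contains_phi`, `severity`) are assumptions, not any scanner's or DSPM product's schema.

```python
# Hypothetical correlation of appsec findings with a DSPM asset inventory:
# a finding on a PHI-handling service outranks the same severity elsewhere.

def prioritize_findings(findings, dspm_inventory):
    """Rank appsec findings, boosting any service that stores or processes PHI."""
    phi_services = {a["service"] for a in dspm_inventory if a.get("contains_phi")}
    severity_rank = {"critical": 4, "high": 3, "medium": 2, "low": 1}

    def score(finding):
        # Sort key: PHI involvement first, then raw severity.
        return (finding["service"] in phi_services,
                severity_rank.get(finding["severity"], 0))

    return sorted(findings, key=score, reverse=True)

findings = [
    {"id": "SAST-1", "service": "billing-ui", "severity": "critical"},
    {"id": "SCA-7", "service": "ehr-api", "severity": "high"},
]
inventory = [{"service": "ehr-api", "contains_phi": True}]

ranked = prioritize_findings(findings, inventory)
```

With this weighting, the high-severity finding on the PHI-handling `ehr-api` service is ranked ahead of the critical finding on a service that holds no PHI, which is exactly the kind of data-aware triage the unified approach enables.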
Healthcare providers are integrating AI in the medical field to improve efficiency, address staff
shortages, and enhance patient care.
- 92% of healthcare leaders agree that automation is critical for addressing staff shortages, showing how AI in public health can play a role in improving efficiency.
- 75% of leading healthcare companies are experimenting with or planning to scale Generative AI use cases.
- 43% of healthcare leaders are already using AI for in-hospital patient monitoring.
- 40% of healthcare providers reported improved efficiency due to AI solutions.
- 92% of healthcare leaders believe Generative AI improves operational efficiency, while 65% see it as a tool for faster decision-making.1
DLP and DSPM in Healthcare
Traditional DLP solutions rely on static detection rules and require manual intervention to classify even typical data, let alone determine, from a DSPM perspective, who and what should have access to it. For security teams with limited resources, implementing this at scale is very challenging. Many organizations therefore form a steering committee with the business to drive a classification program: security teams cannot classify data without context, so they rely heavily on data owners to drive classification efforts.
Data Loss Prevention (DLP) refers to measures that prevent the accidental or unauthorized loss or exposure of sensitive information, such as patient records or PHI. DSPM reflects a broader shift toward proactive, continuous management of how sensitive data, especially PHI, is stored, accessed, and protected. Unlike traditional DLP, which focuses on preventing specific data loss events, DSPM provides continuous assessment and management of an organization's overall data security posture. The two complement each other: in an era of increasing cyberattacks and regulatory scrutiny, DLP and DSPM are becoming essential capabilities for modern healthcare security teams. DSPM empowers healthcare organizations to understand where PHI resides, control who can access it, and ensure it is adequately protected; DLP, in turn, inspects data against PHI policies and can audit, block, encrypt, or quarantine the data before it exits the organization.
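The enforcement side of DLP can be sketched as a small decision function: given a classification label and a destination, the engine chooses whether to audit, block, encrypt, or quarantine. The labels, trust model, and thresholds below are illustrative assumptions for the example, not any vendor's policy language.

```python
# Minimal sketch of a DLP enforcement decision, assuming a two-level
# classification scheme ("phi", "internal", anything else) and a simple
# trusted/untrusted destination flag. Real products use far richer policy.

def dlp_action(classification, destination_trusted):
    """Map a classification and destination onto a DLP enforcement action."""
    if classification == "phi" and not destination_trusted:
        return "block"          # PHI headed to an untrusted destination never leaves
    if classification == "phi":
        return "encrypt"        # PHI to a sanctioned partner goes out encrypted
    if classification == "internal":
        return "quarantine" if not destination_trusted else "audit"
    return "audit"              # everything else is simply logged for visibility

# e.g., a PHI export to an unknown external address is blocked outright
action = dlp_action("phi", destination_trusted=False)
```

The point of the sketch is the ordering: the most restrictive action wins when sensitivity is high and trust is low, while low-risk flows stay in audit mode so analysts are not flooded with blocks.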
DLP and AI
The ability of AI to process vast amounts of data, recognize intricate patterns, and generate predictive models has opened new avenues for research and development in healthcare.
Figure 1: Source, The State of Non-Human Identity Security
Traditional DLP relies on signature-based detection and keyword matching, which often produces high false-positive rates, flagging harmless content as risky. AI-driven DLP adds intelligence and adaptability, making it easier to collect, discover, and analyze large volumes of data. It significantly improves accuracy through sophisticated pattern recognition that spots complex PHI patterns even in de-identified data, going far beyond simple keywords. AI-driven DLP also grasps context and distinguishes PHI from lookalikes, such as medical terms in healthcare marketing materials. It analyzes structured and unstructured data stored in diverse formats, such as databases, clinical notes, texts, emails, images of printed medical records, and embedded metadata, to uncover hidden sensitive details. By automatically discovering, analyzing, and classifying sensitive data, healthcare organizations can strengthen their cybersecurity posture, reduce the risk of data breaches, and maintain patient trust. The discovery dashboard, for example, provides high-level visibility into content through AI and machine learning: the sensitive data recognized (e.g., medical information), the application involved (i.e., its URL), and the user who attempted the action are all fully visible. This is a step forward from the traditional method, in which DLP required manual discovery and an individual to tag the necessary files, and it empowers data-driven decision-making while simplifying the task.
Figure 2: Source, Zscaler Discovery Dashboard
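The contrast between keyword matching and context-aware recognition can be shown with a minimal sketch. The regex patterns and the co-occurrence rule below are illustrative assumptions, not any product's detection logic; production systems use trained NLP models rather than regular expressions.

```python
import re

# Hedged sketch of "context-aware" PHI detection: rather than flag any
# medical keyword (which misfires on marketing copy), require a medical
# term to co-occur with an identifier pattern such as an MRN or a date
# of birth. Both pattern lists are illustrative stand-ins.

MEDICAL_TERMS = re.compile(r"\b(diagnosis|carcinoma|oncology|prescription)\b", re.I)
IDENTIFIERS = re.compile(r"\b(MRN[:\s]*\d{6,}|\d{2}/\d{2}/\d{4})\b", re.I)

def keyword_dlp(text):
    """Traditional DLP: any medical keyword triggers a hit (high false positives)."""
    return bool(MEDICAL_TERMS.search(text))

def contextual_dlp(text):
    """Context-aware DLP: a hit needs a medical term AND a patient identifier."""
    return bool(MEDICAL_TERMS.search(text) and IDENTIFIERS.search(text))

marketing = "Our oncology center offers world-class care."
note = "Oncology consult for MRN: 0045821, DOB 03/14/1961: carcinoma suspected."
```

The keyword filter flags the harmless marketing sentence, while the contextual filter fires only on the clinical note that pairs medical terminology with a patient identifier, which is the false-positive reduction described above.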
Security teams gain confidence by augmenting rules-based DLP with behavioral AI to detect both accidental and intentional data loss. The healthcare industry holds a vast amount of personal and sensitive information, making it an attractive target for cybercriminals. AI systems identify unusual data access and transfer patterns by building machine-learning profiles of normal user behavior, tracking typical data access patterns, transfer locations, and timing. They then detect deviations in real time, such as an IT administrator suddenly accessing clinical research databases outside of hours and attempting to download massive amounts of data to an unknown location. AI also learns patterns that reveal sensitive data being sent to unintended recipients, or employees unlawfully transferring data to themselves or unauthorized parties. This dynamic approach enables organizations to detect and respond to potential data breaches and sensitive data leakage in real time.
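A baseline-and-deviation check of this kind can be sketched in a few lines. The three-sigma threshold, the working-hours window, and the data shape are assumptions chosen for the example; real behavioral engines learn far richer, per-user models.

```python
from statistics import mean, stdev

# Illustrative behavioral-analytics sketch: build a per-user baseline of
# daily download volume and flag deviations beyond 3 standard deviations,
# or access outside the user's typical working hours.

def flag_anomaly(history_mb, todays_mb, access_hour, usual_hours=range(7, 19)):
    """Return the reasons a data-access event deviates from the learned baseline."""
    reasons = []
    mu, sigma = mean(history_mb), stdev(history_mb)
    if sigma and (todays_mb - mu) / sigma > 3:      # volume far above baseline
        reasons.append("excessive-volume")
    if access_hour not in usual_hours:              # off-hours access
        reasons.append("off-hours")
    return reasons

# An admin who normally moves ~50 MB/day suddenly pulls 5 GB at 2 a.m.
history = [48, 52, 50, 47, 55, 51, 49]
alerts = flag_anomaly(history, todays_mb=5000, access_hour=2)
```

Here both signals fire at once, which is precisely the combination (massive off-hours transfer) the paragraph above describes as a high-confidence insider-risk indicator.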
The image below depicts an AI algorithm designed to detect emails sent to unauthorized recipients, as well as potential insider threats exfiltrating intellectual property through personal accounts.
Figure 3: Source, Proofpoint Adaptive DLP Solution
DLP Functions & AI-Advantages
| Function | AI-Driven Advantage |
|---|---|
| Anomaly Detection | AI can monitor user behavior and flag unusual activities, such as: an IT or billing admin accessing clinical research databases; off-hours data access from a non-typical location or IP address. It can also correlate external factors, like job termination or voluntary separation, with data access patterns. |
| Content Classification | Machine learning models can: automatically classify documents based on content (e.g., contracts, medical records); detect sensitive data even if it is not explicitly labeled. |
| Natural Language Processing (NLP) | AI can use NLP to: understand context from multiple sources in different formats (both structured and unstructured data), such as databases, emails, chat, or scanned documents; identify intent (e.g., a malicious insider trying to leak data subtly). |
| Predictive Analytics | AI systems can: predict risky behavior based on patterns; alert security teams before a breach occurs. |
Table 1: DLP Capabilities and AI-Driven Advantages for Healthcare Data Protection
“As a healthcare company, one of the company’s top priorities is protecting patients’ private information. In any organization, the cost of data loss can be high. But in a highly regulated industry such as healthcare, losing patient information can cause serious, lasting harm.”2
DSPM and AI
AI-driven DSPM applies security around sensitive data, enhancing the capability to manage data risk proactively and at scale.
The healthcare sector faces unique challenges due to Health Insurance Portability and Accountability Act (HIPAA) compliance, PHI handling, and an expanding threat landscape. Data complexity and visibility into the data landscape are the biggest challenges organizations face: the vast amount of structured and unstructured data stored across various locations makes it difficult to track, manage, and safeguard. Organizations must also consider regulatory compliance governing the collection, storage, use, and disposal of data. Observing HIPAA, the Health Information Technology for Economic and Clinical Health Act (HITECH), the General Data Protection Regulation (GDPR), and similar regulations is demanding in rapidly evolving infrastructure, across devices accessed from various locations, and especially given healthcare's growing cloud footprint. Given the scale of data, locations, and regulations, many healthcare organizations are adopting AI to drive DSPM for improved insight, productivity, and economies of scale.
As healthcare organizations accelerate their adoption of generative technologies, achieving visibility into ‘Shadow AI’ has emerged as a critical imperative for modern DSPM. Unlike traditional Shadow IT, which typically involves static software installation, Shadow AI introduces dynamic risks where sensitive PHI is processed, reshaped, and potentially retained by autonomous external models without a Business Associate Agreement (BAA)3. AI-enhanced DSPM solutions are evolving to close this visibility gap by moving beyond simple storage scanning to analyze real-time data flows and user intent. By correlating network activity with content-aware prompt inspection, these systems can detect when proprietary clinical data is being fed into unsanctioned Large Language Models (LLMs), allowing security teams to enforce governance policies that distinguish between safe innovation and regulatory non-compliance. The use of unsanctioned generative AI tools creates new visibility and compliance gaps, and AI-driven DSPM is starting to monitor these data flows, not just data at rest.
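The prompt-inspection idea can be sketched as a gate in front of outbound GenAI traffic: check the destination against a sanctioned (BAA-covered) list, and scan the prompt for PHI-like patterns before it leaves. The allow-list domain, the patterns, and the action names below are illustrative assumptions, not a product API.

```python
import re

# Sketch of content-aware prompt inspection for shadow-AI governance.
# SANCTIONED_LLMS stands in for the set of BAA-covered endpoints; the
# regex is a toy stand-in for real PHI classifiers.

SANCTIONED_LLMS = {"llm.internal.example.org"}          # assumed allow-list
PHI_PATTERNS = re.compile(
    r"\b(\d{3}-\d{2}-\d{4}"          # SSN-like identifier
    r"|MRN[:\s]*\d{6,})\b", re.I)    # medical record number

def inspect_prompt(destination, prompt):
    """Return 'allow', 'block', or 'audit' for an outbound GenAI prompt."""
    if destination in SANCTIONED_LLMS:
        return "allow"                # BAA-covered endpoint
    if PHI_PATTERNS.search(prompt):
        return "block"                # PHI headed to an unsanctioned model
    return "audit"                    # unsanctioned but no PHI detected
```

The useful property is the middle ground: unsanctioned tools are not blocked wholesale, only when PHI is detected in the flow, which is the "safe innovation versus regulatory non-compliance" distinction described above.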
Figure 4: Source, Rubrik4
- Data discovery: The first step to achieving cloud data governance is to obtain visibility. This requires a centralized application that automatically and continuously discovers all data across your entire multi-cloud environment. This includes data in managed and unmanaged assets, data embedded in virtual instances, shadow data, data caches, data pipelines, and big data.
- Data classification and cataloging: Next, define the type of data discovered so that it can be classified and cataloged appropriately. For example, sensitive data such as PII, PHI, and PCI should be identified and classified accordingly. This information is used to build a comprehensive, consistent data catalog across clouds.
- Policy definition and enforcement: Once you understand what data you have, define and enforce data security policies and remediate issues for:
  - Compliance and audit management
  - Encryption at rest and in motion
  - Retention, archiving, and purging
  - Who is allowed to access what data
- Data ownership and usage: Strive to associate all data with its owner. Continuously monitor who uses your data and where it is being sent, especially to third parties. Empower data consumers with self-service access, while retaining control and governance over data. Understand how the data is processed to ensure it can be used appropriately. Keeping the owner record current is also essential: owners can leave, and a new owner must be identified via proper offboarding processes.
- Continuous monitoring: Continuously monitor for policy violations and anomalous behavior to mitigate security risks proactively. Address policy violations, block unauthorized access, and delete unused assets promptly to maintain security and integrity.
- Repeat: Since cloud environments are highly dynamic, all these steps must be performed continuously.
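The steps above can be sketched as one continuous loop. Each function is a stub standing in for a real discovery, classification, or enforcement capability; the names and the one-rule classifier are illustrative, not a product API.

```python
# Toy governance cycle: discover -> classify -> enforce, repeated.
# The classifier and policy here are deliberately trivial stand-ins.

def discover(environment):
    """Step 1: enumerate every data asset, including shadow data."""
    return environment["assets"]

def classify(asset):
    """Step 2: label the asset (here, a one-rule stand-in for ML classification)."""
    return "phi" if "patient" in asset["sample"].lower() else "public"

def enforce(asset, label):
    """Step 3: apply policy, e.g., PHI must be encrypted at rest."""
    return "remediate" if label == "phi" and not asset["encrypted"] else "ok"

def governance_cycle(environment):
    """Steps 1-3 for every asset; in practice, monitor the output and repeat."""
    report = {}
    for asset in discover(environment):
        label = classify(asset)
        report[asset["name"]] = (label, enforce(asset, label))
    return report

env = {"assets": [
    {"name": "s3://notes", "sample": "Patient presented with...", "encrypted": False},
    {"name": "s3://logos", "sample": "brand assets", "encrypted": False},
]}
report = governance_cycle(env)
```

The loop shape is the point: because cloud environments are dynamic, the whole cycle reruns continuously rather than as a one-time audit.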
Data classification standards are foundational to a successful DSPM program. These standards define the confidentiality of company data and serve as the basis for measuring data risk. Properly labeled and tagged data simplifies the DSPM process, as organizations can easily distinguish between sensitive and non-sensitive data, thereby improving overall data security. AI-driven DSPM provides visibility into how users interact with sensitive data by performing data discovery across all data stores and classifying sensitive data using machine learning patterns, tagging sensitive files regardless of how often they are moved or copied. The key here is to identify the locations of sensitive files through discovery, understand the permissions for these locations to propagate the least-privilege principle, provide data owners with visibility into data access, and establish the necessary security policies to govern behavior and data. Finally, allow your DLP solution to identify and act on the classified file, i.e., encrypt, quarantine, or block it.
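One simple way to make tags "follow" a file regardless of moves or copies, as described above, is to key the sensitivity label to a fingerprint of the file's content rather than its path. The registry below is a minimal stand-in; real DSPM platforms use much richer fingerprinting and partial-match techniques.

```python
import hashlib

# Sketch: a sensitivity tag keyed to a SHA-256 content fingerprint, so a
# byte-identical copy at any new path resolves to the same classification.

TAG_REGISTRY = {}   # content fingerprint -> sensitivity tag

def fingerprint(content: bytes) -> str:
    return hashlib.sha256(content).hexdigest()

def tag(content: bytes, label: str):
    """Record a classification for this exact content."""
    TAG_REGISTRY[fingerprint(content)] = label

def lookup(content: bytes) -> str:
    """A copied or renamed file keeps its label; unknown content is unclassified."""
    return TAG_REGISTRY.get(fingerprint(content), "unclassified")

record = b"Patient: J. Doe, MRN 0045821"
tag(record, "phi")
copied_elsewhere = bytes(record)        # same bytes, new location
```

A DLP engine can then consult the same registry at egress time to decide whether to encrypt, quarantine, or block the classified file.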
DSPM enables continuous monitoring of access patterns and user behaviors across the healthcare data landscape, correlating and decoding weak signals or attacks underway. This proactive monitoring allows incident response teams to quickly identify and mitigate emerging incidents before they can escalate into full-blown attacks. Ingesting DSPM policy logs into a Security Information and Event Management (SIEM) can be another way to create an alerting mechanism, providing comprehensive awareness of attack vectors and identifying where permissions may have been compromised. This can be further integrated with your IT Services Management (ITSM) solution to track and escalate in real time.
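Shipping DSPM policy logs to a SIEM usually means normalizing findings into a common event shape first. The JSON fields below follow no particular SIEM schema; they are assumptions chosen to show the normalization step, after which the SIEM's own correlation and the ITSM escalation take over.

```python
import json
from datetime import datetime, timezone

# Sketch: normalize a hypothetical DSPM policy-violation record into a
# SIEM-ready JSON event. Field names are illustrative, not a standard.

def to_siem_event(violation):
    """Turn a DSPM finding into a flat event a SIEM can index and alert on."""
    return json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "source": "dspm",
        "severity": "high" if violation["data_class"] == "phi" else "medium",
        "rule": violation["policy"],
        "asset": violation["asset"],
        "principal": violation["user"],
    })

event = to_siem_event({
    "policy": "phi-public-exposure",
    "data_class": "phi",
    "asset": "s3://clinical-exports",
    "user": "svc-etl",
})
```

Elevating severity whenever PHI is involved lets the SIEM's alert rules stay generic while still prioritizing the findings that carry regulatory weight.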
“As data sprawls across SaaS, PaaS, IaaS, on-premises, and hybrid environments, organizations face growing visibility gaps and mounting challenges in securing sensitive information. Oversharing, excessive privileges, and abandoned data, combined with unchecked user and machine access, increase the risk of data breaches and compliance violations. As fragmented security tools and growing AI initiatives overwhelm security teams, the financial and reputational impact of data breaches continues to rise.”5
DSPM Functions & AI-Advantages
| Function | AI-Driven Advantage |
|---|---|
| Data Discovery & Classification | Auto-discovery of PHI in structured and unstructured formats, auto-classifying sensitive information using NLP and ML. |
| Risk Prioritization | ML models evaluate data exposure risk by analyzing context (access patterns, location, sensitivity) to prioritize remediation. |
| Behavioral Analytics | AI models detect anomalies in data access patterns and alert to potential insider threats or compromised accounts. |
| Policy Automation | AI suggests or enforces security policies dynamically based on observed behavior and data risk posture. |
| Data Mapping | AI can create near real-time maps of data flows and dependencies across environments. |
| Compliance Reporting | Real-time mapping of data to HIPAA safeguards and automated compliance reporting. |
Table 2: DSPM Capabilities and AI-Driven Advantages for Healthcare Data Protection
Challenges
Despite the benefits, AI challenges in healthcare remain. AI models are only as good as the data on which they are trained; biased or limited training data can lead to unfair or restricted outcomes. Beyond ethical and legal considerations, integration complexity and regulatory compliance scenarios require careful attention. AI systems must be designed to comply with stringent data protection laws, such as GDPR and HIPAA, to ensure that patient information is kept secure and confidential. Accountability is another pressing concern: as AI becomes more autonomous, it can produce incorrect results based on historical patterns, exposing healthcare organizations to liability and complex legal accountability. Integrating AI with existing technologies may also pose compatibility issues. Here are some tested scenarios that may be of concern:
- Ability to monitor all public GenAI apps and automatically discover new AI apps as they appear.
- Ability to monitor GenAI sites using WebSockets to track live data flows and interactions.
- OCR detection extends beyond text to include images and documents containing PHI or PII.
- Mechanism to detect and highlight policy violations, such as PHI or PII exposure.
- Ability to block specific prompts that violate data protection or compliance rules.
- Ability to export prompts and results in a report view for audit or compliance purposes.
- Ability to ingest custom health data criteria, like Exact Data Match (EDM), for policy enforcement.
Recent generative AI deployments introduce new data leakage risks in healthcare. While some vendors now offer Business Associate Agreements (BAAs) for API-based and enterprise deployments of large language models, such as OpenAI's API platform and ChatGPT Enterprise or Edu, many public chat interfaces and non-enterprise tiers remain inappropriate for PHI.6 Healthcare organizations must clearly distinguish between HIPAA-eligible AI services that are covered by a BAA and consumer-grade tools without contractual safeguards, and ensure that PHI is only processed in environments aligned with their regulatory and data protection obligations.
However, even compliant environments face the active threat of adversarial machine learning. Unlike traditional cyberattacks that exploit software vulnerabilities, these attacks manipulate inputs or training data to deceive AI models.7 For example, evasion techniques can slightly alter clinical notes or medical images so that AI-based DLP or anomaly detection fails to recognize sensitive content. Data poisoning attacks can corrupt the integrity of security or diagnostic models by injecting malicious samples into training data, silently shifting decision boundaries over time.8 As healthcare data becomes increasingly valuable, organizations must prepare for these sophisticated methods that target the underlying logic of AI systems, including AI-augmented DLP and DSPM controls.
| Threat Type | How It Works | Healthcare Risk | Potential Mitigation |
|---|---|---|---|
| Text Evasion (Synonyms) | Attackers replace sensitive keywords with synonyms or paraphrases (for example, swapping “carcinoma” for “malignancy” or rephrasing diagnoses). | Sensitive patient notes containing PHI can be exfiltrated because AI-based DLP or NLP filters fail to recognize the altered terminology. | Train NLP models on adversarial examples and known synonym patterns, and use contextual embeddings rather than simple keyword matching. |
| Image Evasion (Noise) | Invisible noise or patterns are added to images, such as X-rays or scanned forms. | A patient record or scan can be intentionally misclassified as a benign image (e.g., a “vacation photo”), allowing it to remain in the network undetected. | Apply image preprocessing and denoising, and regularly test models against adversarial noise patterns during validation and red teaming. |
| Data Poisoning | Malicious or incorrect data is secretly injected into the AI training data. | Security models may be taught to ignore specific insider threat behaviors, or clinical decision support tools may be corrupted to provide unsafe medical advice. | Implement strict data provenance and sanitization, monitor training datasets for statistical anomalies, and separate high-risk data sources from core training pipelines. |
| Prompt Injection | Deceptive text is used to manipulate an AI Agent into ignoring its original instructions and following unauthorized commands. | An insider can trick the AI Agent into providing access to sensitive data they should not have. An attacker could embed a malicious prompt in a patient's electronic health record that an AI clinical decision support system uses for context, leading the AI to ignore critical information, such as drug allergies, and recommend an inappropriate or even fatal treatment plan. | Validate and clean all incoming data to filter out suspicious characters, keywords, or encoding that may signify malicious intent. Monitor and validate the AI Agent's responses before they are displayed to the user or used by other systems. |
| Autonomous AI / AI Agent Misuse | AI models or agents autonomously execute tasks, chain prompts, call tools, or interact with internal and external systems without sufficient human validation or policy enforcement. | Unintended PHI/PII exposure, inaccurate clinical or operational decisions, policy violations, regulatory non-compliance (HIPAA/GDPR), and unclear accountability for agent-driven actions. | Enforce human-in-the-loop controls, agent permission scoping, real-time monitoring of agent actions, prompt/output inspection, audit logging, and policy-based blocking of sensitive data flows. |
Table 3: AI Attack Vectors and Healthcare Risk Mitigation in DLP/DSPM Systems
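The "Text Evasion (Synonyms)" row can be demonstrated concretely: a keyword-only filter misses "malignancy" where "carcinoma" is blocked, while a filter that also knows synonym clusters still catches it. The tiny synonym map below is an illustrative stand-in for the contextual embeddings a production model would use.

```python
# Toy demonstration of synonym-based evasion against keyword DLP, and a
# simple synonym-aware mitigation. Term lists are illustrative only.

BLOCKED = {"carcinoma"}
SYNONYMS = {"carcinoma": {"malignancy", "neoplasm"}}   # assumed synonym cluster

def keyword_filter(text):
    """Keyword-only DLP: matches exact blocked terms and nothing else."""
    return any(term in text.lower() for term in BLOCKED)

def synonym_aware_filter(text):
    """Mitigated DLP: also matches known synonyms of each blocked term."""
    lowered = text.lower()
    for term in BLOCKED:
        if term in lowered or any(s in lowered for s in SYNONYMS.get(term, ())):
            return True
    return False

evasive_note = "Pathology suggests a malignancy in the left lobe."
```

The attacker's paraphrase sails past the keyword filter but not the synonym-aware one, which is why the table's mitigation column recommends training on adversarial examples and using contextual representations rather than exact-match rules.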
Conclusion
By strategically incorporating AI into DLP and DSPM strategies, healthcare institutions can enhance data security measures, improve regulatory compliance, and foster patient confidence within the digital healthcare landscape. As AI advances in healthcare technology, its integration is projected to expand, presenting new solutions and opportunities. Yet the swift pace of technological change will bring about notable ethical and legal dilemmas. Addressing these challenges will require flexible policies, ongoing ethical assessments, and robust legal frameworks to ensure that AI is leveraged in a manner that prioritizes patient safety, fairness, and transparency. To help organizations justify their investment in AI, measurable Key Performance Indicators (KPIs) can be added to demonstrate AI’s advantages. These key metrics can include:
- Reduction in False Positives (DLP alerts), which quantifies the decrease in analyst fatigue.
- Mean Time to Remediation (MTTR) for excessive access rights (DSPM), which demonstrates faster risk reduction.
- Data Coverage Increase (percentage of Protected Health Information (PHI) discovered and monitored), which proves the scale of AI's impact.

To ensure responsible implementation of AI-driven DLP and DSPM, healthcare organizations must align with regulatory frameworks, such as HIPAA, HITECH, GDPR, and HITRUST, while establishing robust security policies, including data handling, classification, retention, secure coding, and incident response policies. These compliance controls are foundational for trustworthy AI-enabled data governance.
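The first two KPIs can be computed directly from alert and ticket logs. The figures in the sketch below are hypothetical inputs chosen to show the arithmetic, not reported results.

```python
# Illustrative KPI arithmetic for AI-driven DLP/DSPM, using made-up inputs.

def fp_reduction(fp_before: int, fp_after: int) -> float:
    """Percentage drop in false-positive DLP alerts after AI augmentation."""
    return round(100 * (fp_before - fp_after) / fp_before, 1)

def mttr_hours(open_close_pairs):
    """Mean time to remediation (hours) for excessive-access findings."""
    durations = [closed - opened for opened, closed in open_close_pairs]
    return sum(durations) / len(durations)

# e.g., 1,200 false positives per month before AI tuning, 300 after
reduction = fp_reduction(1200, 300)
# three findings remediated after 4, 6, and 8 hours respectively
mttr = mttr_hours([(0, 4), (10, 16), (20, 28)])
```

Tracking both before and after an AI rollout turns the qualitative claims above (less analyst fatigue, faster risk reduction) into numbers a budget owner can evaluate.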
Future Outlook
The convergence of Artificial Intelligence (AI), Data Loss Prevention (DLP), and Data Security Posture Management (DSPM) remains in an early stage of maturity, yet its trajectory is clear. Over the next several years, this integration is expected to evolve beyond rule-based automation and reactive anomaly detection toward self-governing data protection ecosystems capable of enforcing compliance in real time [6].
Autonomous Policy Orchestration — AI models will progress from augmenting policy enforcement to autonomously managing it [6]. Data handling rules will adapt dynamically to changing risk conditions, user context, and data sensitivity, eliminating the static policy dependencies that currently limit the effectiveness of traditional DLP and DSPM tools.
Federated and Privacy-Preserving Learning — AI models will increasingly be trained across decentralized datasets using privacy-preserving methods such as federated learning and differential privacy, enabling multi-institutional collaboration without direct data sharing [1][2]. This approach enables healthcare institutions to improve detection accuracy while ensuring compliance with privacy requirements for Protected Health Information (PHI).
Integration with Zero Trust and Continuous Access Evaluation — DSPM and DLP telemetry will merge into Zero Trust decision frameworks based on the principles of Verify Explicitly, Enforce Least Privilege, and Assume Breach [3][4]. These integrations will feed risk and posture data into adaptive access control engines in near real time, making data protection continuous, identity-aware, and dynamically responsive across distributed healthcare environments.
Explainable and Auditable AI Controls — As AI assumes a greater enforcement role, transparency and accountability will become mandatory. Systems will be required to document decision logic and provide auditable evidence trails for regulators and compliance teams, aligning with the transparency and accountability principles established by HIPAA, GDPR, and the EU Artificial Intelligence Act (Regulation (EU) 2024/1689) [5].
Cross-Domain Data Security — Future architectures will extend protection beyond organizational boundaries to encompass the entire healthcare ecosystem—including providers, insurers, researchers, and device manufacturers [8]. AI-driven data fabric architectures leveraging federated learning and homomorphic encryption will correlate posture and threat intelligence across domains, enabling coordinated defense and shared assurance of compliance [8].
Ethical Governance Frameworks — As AI assumes enforcement authority, healthcare organizations will need to establish governance structures that continuously assess the fairness, bias, and patient impact of algorithms [7]. This includes routine audits of AI-generated decisions, transparent accountability chains, and mechanisms for human oversight when automated systems make high-stakes determinations.
The long-term direction is toward continuous, intelligent assurance. In this model, AI not only detects and responds to risks but also continuously interprets context, predicts exposure, and enforces governance throughout the data lifecycle [6]. In this vision, AI becomes an operational layer of trust, enabling healthcare innovation to advance without compromising privacy, safety, or regulatory integrity.
Yet technology alone cannot ensure responsible implementation. The realization of this vision will depend equally on regulatory alignment, workforce readiness, and a sustained commitment to patient-centered values [7]. Only through this balance can AI truly reinforce trust as a foundational element of digital healthcare.
Glossary
CSA Glossary (main/primary)
CSA Data Security Glossary
References
[1] IBM. (2022, June 3). AI and automation for cybersecurity — Leading AI Adopters are uniting technology and talent to boost visibility and productivity across security operations (IBM Institute for Business Value). https://www.ibm.com/thought-leadership/institute-business-value/en-us/report/ai-cybersecurity
[source] Cloud Security Alliance, & Astrix Security. (2024, September 11). The State of Non-Human Identity Security [Survey Report]. https://cloudsecurityalliance.org/artifacts/state-of-non-human-identity-security-survey-report
[source] Zscaler. (2024, April 16). How the Zscaler SaaS Security and Data Discovery Reports Are Healthcare’s Superheroes. https://www.zscaler.com/blogs/product-insights/how-zscaler-saas-security-and-data-discovery-reports-are-healthcare-s
[source] Proofpoint, Inc. (n.d.). Data loss prevention [Product page]. Proofpoint. https://www.proofpoint.com/us/products/data-loss-prevention
[2] Docus Research Team. (2025, March 4). AI in healthcare statistics 2025: Overview of trends [Blog post]. Docus. https://docus.ai/blog/ai-healthcare-statistics
Royal Philips. (2024). Future Health Index 2024: Better care for more people. https://www.philips.com/c-dam/assets/corporate/global/future-health-index/report-pages/experience-transformation/2024/us/philips-future-health-index-2024-report-better-care-for-more-people-usa.pdf
Deloitte. (2024, March 17). Navigating the emergence of generative AI in healthcare: Catalyzing trust in the broader Future of Health transformation. Deloitte Centre for Health Solutions. https://www2.deloitte.com/us/en/pages/life-sciences-and-health-care/articles/generative-ai-in-healthcare.html
Accenture. (2023). A New Era of Generative AI for Everyone: The Technology Underpinning ChatGPT Will Transform Work and Reinvent Business [White Paper]. https://www.accenture.com/content/dam/accenture/final/accenture-com/document/Accenture-A-New-Era-of-Generative-AI-for-Everyone.pdf
[3] Proofpoint, Inc. (2023). U.S. Healthcare Network protects email and cloud apps with Proofpoint [Case study]. Proofpoint. https://www.proofpoint.com/us/customer-stories/healthcare-network-protects-email-and-cloud-apps
[4] Rubrik, Inc. (2022, November 30). 5 Steps to Cloud Data Governance [Blog post]. Rubrik. https://www.rubrik.com/blog/technology/22/11/5-steps-to-effective-cloud-data-governance
[5] Proofpoint. (2025, June 25). Bridging the Data Security Gap with DSPM. Webinar. https://www.proofpoint.com/us/resources/webinars/bridging-data-security-gap-dspm
Future Outlook
[1] Rieke, N., Hancox, J., Li, W., Milletari, F., Roth, H. R., Albarqouni, S., Bakas, S., Galtier, M. N., Landman, B. A., Maier-Hein, K., Ourselin, S., Sheller, M., Summers, R. M., Trask, A., Xu, D., Yang, D., Cardoso, M. J., & Collins, G. S. (2020). The future of digital health with federated learning. npj Digital Medicine, 3, 119.
https://doi.org/10.1038/s41746-020-00323-1
[2] Abbas, S. R., Abbas, Z., Zahir, A., & Lee, S. W. (2024). Federated Learning in Smart Healthcare. Healthcare (MDPI), 12(24), 2587. https://www.mdpi.com/2227-9032/12/24/2587
[3] Alotaibi, B., & Aldossary, M. (2024). The significance of artificial intelligence in zero trust technologies: A comprehensive review. Journal of Electrical Systems and Information Technology (SpringerOpen). https://jesit.springeropen.com/articles/10.1186/s43067-024-00155-z
[4] Edo, O. C., Ang, D., Billakota, P., et al. (2024). A Zero Trust Architecture for Health Information Systems. Health and Technology. https://www.researchgate.net/publication/376831158_A_Zero_Trust_Architecture_for_Health_Information_Systems
[5] European Commission. (2024). Regulation (EU) 2024/1689: The Artificial Intelligence Act — A Risk-Based Framework for Trustworthy AI. Official Journal of the European Union.
https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32024R1689
[6] Mahapatra, B., & Goel, A. (2024). AI-Driven Autonomous Cybersecurity Systems: Advanced Threat Detection, Defense Capabilities, and Future Innovations. ResearchGate Preprint.
https://www.researchgate.net/publication/386013628_AI-Driven_Autonomous_Cyber-Security_Systems_Advanced_Threat_Detection_Defense_Capabilities_and_Future_Innovations
[7] Cloud Security Alliance (CSA). (2024). AI Safety Initiative: AI Governance and Risk Management. CSA Research Publication.
https://cloudsecurityalliance.org/research/ai-safety-initiative
[8] Zhang, Y., & Patel, N. (2024). An Advanced Data Fabric Architecture Leveraging Homomorphic Encryption and Federated Learning. arXiv Preprint arXiv:2402.09795.
https://arxiv.org/abs/2402.09795