GDPR and Artificial Intelligence in Clinical Practice: A Complete Guide for Psychologists

GDPR · artificial intelligence · data protection · compliance

A practical guide on how to reconcile the GDPR with the use of artificial intelligence in mental health. Data protection in clinical practice, consent, patient rights, and a compliance checklist.

Artificial intelligence is revolutionising clinical practice in mental health — from automated analysis of session notes to identifying therapeutic patterns. However, when we talk about psychological data, we are dealing with the most sensitive category of information that exists. The General Data Protection Regulation (GDPR) imposes strict obligations on any professional who processes this data, and the introduction of artificial intelligence tools adds an additional layer of complexity.

In this guide, we explain how the GDPR applies specifically to the use of AI in clinical psychology practice, what precautions you should take, and how to ensure your practice is fully compliant.


The GDPR in Context: The Essentials for Psychologists

The GDPR (Regulation (EU) 2016/679) is the European regulation governing the processing of personal data across the European Union, in force since May 2018. For psychologists, this regulation is particularly relevant for three fundamental reasons:

Mental Health Data Are Special Categories

Article 9 of the GDPR classifies health-related data as "special categories of personal data", subject to enhanced protection. This includes:

  • Psychological diagnoses and assessments
  • Clinical notes and records of therapeutic sessions
  • Treatment plans and clinical progress
  • Results from assessment instruments
  • Information about suicidal ideation, self-harm, or abuse
  • Psychiatric medication history

Fundamental Applicable Principles

The GDPR is built on seven principles that must guide all data processing activities:

  1. Lawfulness, fairness, and transparency: Process data lawfully and transparently for the data subject.
  2. Purpose limitation: Collect data only for specified and legitimate purposes.
  3. Data minimisation: Collect only the data strictly necessary.
  4. Accuracy: Keep data up to date and correct.
  5. Storage limitation: Do not retain data beyond what is necessary.
  6. Integrity and confidentiality: Ensure the security of data.
  7. Accountability: Demonstrate compliance with all principles.

Consequences of Non-Compliance

Fines can reach 20 million euros or 4% of annual global turnover, whichever is higher. Beyond financial penalties, a data breach in a mental health context can cause irreparable damage to patient trust and professional reputation.


How Artificial Intelligence Processes Clinical Data

To correctly assess data protection risks, it is essential to understand how AI processes clinical information. Different tools use distinct approaches, and each has different implications for the GDPR.

Language Models in Clinical Practice

The most common AI tools in mental health use large language models (LLMs) to:

  • Session transcription and summarisation: Converting audio or notes into structured text.
  • Clinical content analysis: Identifying themes, emotional patterns, and risk factors.
  • Intervention suggestions: Evidence-based proposals for the treatment plan.
  • Report generation: Automating clinical documentation.

Where the Risks Lie

The main risk lies in where the data travels during processing:

  • Cloud vs. local processing: If data is sent to external servers, who has access?
  • Model training: Are your patients' data used to improve the AI? (At Mena.ai, the answer is unequivocally no — patient data is never used to train AI models.)
  • Data retention by the provider: How long does the provider retain processed data?
  • Server location: Do the data remain in the EU or are they transferred to third countries?

Understanding how the AI you use works is the first step towards ensuring compliance.


Consent Requirements for AI Use

Consent is one of the pillars of the GDPR, and the use of AI in clinical practice requires heightened attention to this requirement.

Consent for Health Data Processing

The processing of health data requires, as a rule, the explicit consent of the data subject (Article 9(2)(a)). This consent must be:

  • Freely given: The patient cannot be coerced or penalised for refusing.
  • Specific: It must clearly refer to the purposes of the processing.
  • Informed: The patient must understand what they are consenting to.
  • Unambiguous: There must be a clear affirmative action (signature, checkbox).

Additional Consent for AI

When using AI tools, informed consent must include additional information:

  • What AI tools are used and for what purposes
  • What specific data is processed by the AI
  • Whether the AI has access to recordings, transcriptions, or complete notes
  • How the AI influences (or does not influence) clinical decisions
  • That the therapist always retains the final decision
  • The patient's rights regarding AI processing, including the right to refuse

Right to Refuse

The patient must be able to refuse the use of AI without this affecting the quality of their treatment. This means your practice must function perfectly with or without AI — technology is a complement, not a dependency.


Data Minimisation: The Most Relevant Principle for AI

The data minimisation principle (Article 5(1)(c)) is arguably the most challenging when using AI in clinical practice.

What It Means in Practice

Data minimisation implies that you should collect and process only the data strictly necessary for the intended purpose. In the context of AI, this translates to:

  • Do not send more data than necessary: If the AI only needs a session summary, do not send the full transcript.
  • Anonymise when possible: Remove or pseudonymise identifiers before AI processing.
  • Limit the temporal scope: Process only the data relevant to the analysis in question, not the patient's entire history.
  • Define retention periods: Data processed by the AI should be deleted when no longer needed.
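The "anonymise when possible" step above can be sketched in code. This is a deliberately minimal illustration, not a production de-identification pipeline: the function name, the placeholder tokens, and the patterns are all hypothetical, and a real deployment would need proper named-entity recognition plus human review before any text leaves the practice.

```python
import re

# Minimal pseudonymisation sketch (illustrative only): replace the obvious
# direct identifiers with stable placeholders before sending text to an AI.
# Real de-identification requires far more robust entity detection.
def pseudonymise(note: str, patient_name: str) -> str:
    note = note.replace(patient_name, "[PATIENT]")
    note = re.sub(r"[\w.+-]+@[\w-]+\.\w[\w.]*", "[EMAIL]", note)  # email addresses
    note = re.sub(r"\b\d{9}\b", "[ID]", note)                     # 9-digit ID numbers
    return note

print(pseudonymise("Maria Silva (maria@example.com) reported low mood.", "Maria Silva"))
# → [PATIENT] ([EMAIL]) reported low mood.
```

Even a simple pass like this reduces what an external processor ever sees, which is exactly what Article 5(1)(c) asks for.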

Field-Level Encryption

An effective technical approach is field-level encryption — where each sensitive data element is individually encrypted with organisation-specific keys. This ensures that even in the event of unauthorised access to the database, clinical data remains unreadable. The Mena.ai platform implements this approach for all clinical data.
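The idea of field-level encryption can be illustrated with a short sketch. This is a generic example using the widely used `cryptography` library's Fernet API, not Mena.ai's actual implementation; the record schema and the choice of sensitive fields are assumptions for the example.

```python
from cryptography.fernet import Fernet

# Field-level encryption sketch (illustrative): each sensitive field is
# encrypted individually with an organisation-specific key, so a raw
# database dump reveals nothing clinical.
org_key = Fernet.generate_key()  # in practice: per-organisation, held in a KMS
cipher = Fernet(org_key)

record = {"patient_id": "p-001", "diagnosis": "F41.1", "notes": "Session 3: reduced avoidance."}
SENSITIVE = {"diagnosis", "notes"}

encrypted = {
    k: cipher.encrypt(v.encode()) if k in SENSITIVE else v
    for k, v in record.items()
}

# Only a holder of the organisation key can read the clinical fields back.
assert cipher.decrypt(encrypted["diagnosis"]).decode() == "F41.1"
```

The design point is granularity: compromising the database alone is not enough, because decryption also requires the organisation's key, which lives elsewhere.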


Data Subject Rights in the Context of AI

The GDPR grants patients a set of rights that must also be respected when using AI.

Right of Access (Article 15)

Patients may request information about all personal data processed, including:

  • What data was processed by AI tools
  • What analyses or profiles were generated
  • With whom the data was shared (including AI providers)

Right to Explanation (Article 22)

The patient has the right not to be subject to decisions based solely on automated processing that produce significant effects. When AI is used in clinical practice:

  • AI analyses must always be reviewed by the therapist before being applied
  • The patient may request an explanation of how the AI contributed to a particular clinical conclusion
  • There must always be human intervention in the decision-making process

Right to Erasure (Article 17)

The patient may request the deletion of their data, including:

  • Data processed or stored by AI tools
  • Analyses or profiles generated by the AI
  • Any output derived from their data

Note that there are legitimate exceptions, such as the obligation to retain clinical records under Portuguese law.

Right to Data Portability (Article 20)

Patients may request their data in a structured, machine-readable format. This includes data generated by the AI that constitutes the patient's personal data.
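In practice, "structured, machine-readable" usually means a format like JSON. The sketch below shows what such an export might look like; the record structure is hypothetical, not a real platform schema, and the dates are placeholder values.

```python
import json

# Sketch of an Article 20 export (hypothetical schema): structured,
# commonly used, machine-readable. Note that AI-generated outputs about
# the patient are personal data and belong in the export too.
export = {
    "patient_id": "p-001",
    "exported_at": "2025-01-15T10:00:00Z",
    "sessions": [
        {
            "date": "2025-01-10",
            "clinician_summary": "Worked on sleep hygiene.",
            "ai_generated_themes": ["sleep", "work stress"],
        }
    ],
}

payload = json.dumps(export, indent=2, ensure_ascii=False)
print(payload)
```

Any format works as long as another controller's systems can parse it without manual rework; JSON and CSV are the common choices.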


The EU AI Act: A New Layer of Regulation

Beyond the GDPR, the European Artificial Intelligence Regulation (EU AI Act, Regulation (EU) 2024/1689) adds specific obligations for those who use AI.

Risk Classification

The EU AI Act classifies AI systems by risk level. AI tools used in mental health contexts are typically classified as high risk, which entails:

  • Mandatory human oversight (Article 14): The therapist must be able to review, modify, and reject any AI suggestion.
  • Transparency: Patients must be informed about the use of AI.
  • Risk management: Continuous identification and mitigation of risks.
  • Technical documentation: Recording of how the system works and its limitations.

Implementation Timeline

The obligations for high-risk systems come into force in August 2026. Psychologists who use AI should begin preparing now to ensure timely compliance.

Alignment with the GDPR

The EU AI Act does not replace the GDPR — it complements it. In practice, this means you must comply with both regulations simultaneously. The good news is that many of the requirements overlap: transparency, human oversight, data security, and data subject rights are common to both.


Practical GDPR Compliance Checklist for AI in Clinical Practice

Use this checklist as a guide to assess and improve your practice's compliance:

Consent and Transparency

  • Informed consent updated with reference to AI use
  • Privacy policy explaining AI data processing
  • Clear information to patients about what data the AI processes
  • Ability for the patient to refuse AI use without impact on treatment

Technical Security

  • Data encryption at rest and in transit
  • Data stored on servers within the European Union
  • Multi-factor authentication for platform access
  • Role-based access control
  • Access audit logs

Minimisation and Retention

  • Only strictly necessary data processed by the AI
  • Retention periods defined for AI-processed data
  • Procedure for data deletion upon patient request
  • Patient data not used to train AI models
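The retention-period item above implies a recurring purge of AI working data. A minimal sketch of such a sweep, under an assumed 30-day policy and a hypothetical artefact schema:

```python
from datetime import datetime, timedelta, timezone

# Retention sweep sketch (hypothetical schema and policy): flag AI-processed
# artefacts whose defined retention period has elapsed, so they can be deleted.
RETENTION = timedelta(days=30)  # assumed policy for AI working data

def expired(items, now=None):
    now = now or datetime.now(timezone.utc)
    return [item for item in items if now - item["created"] > RETENTION]

artefacts = [
    {"id": "summary-old", "created": datetime(2024, 1, 1, tzinfo=timezone.utc)},
    {"id": "summary-new", "created": datetime.now(timezone.utc)},
]
print([item["id"] for item in expired(artefacts)])  # old artefacts queued for deletion
```

Whatever the concrete schedule, the point is that deletion is automatic and documented, not dependent on someone remembering to clean up.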

Oversight and Governance

  • Record of processing activities documented
  • Data Protection Impact Assessment (DPIA) carried out
  • Data Processing Agreement (DPA) with AI providers
  • Security incident response procedure defined
  • Regular staff training on data protection

Data Subject Rights

  • Procedure for responding to access requests (deadline: one month, extendable for complex requests)
  • Ability to explain how AI contributed to clinical analyses
  • Procedure for data erasure, including AI-processed data
  • Ability to export data in machine-readable format

How to Choose GDPR-Compliant AI Tools

Not all AI tools are equal when it comes to data protection. Here are the criteria you should evaluate before adopting any technology:

Essential Criteria

  1. Data location: Data must remain in the EU. Transfers to third countries (such as the USA) require additional safeguards.
  2. Model training policy: Confirm that your patients' data is not used to train or improve the AI. This is a critical issue — many general-purpose tools use input data to refine their models.
  3. Encryption: Check whether there is encryption at rest, in transit, and ideally at field level.
  4. Data Processing Agreement (DPA): The provider must offer a DPA compliant with Article 28 of the GDPR.
  5. Human oversight: The tool must present AI outputs as editable suggestions, never as final decisions.

Warning Signs

Avoid tools that:

  • Do not provide a DPA or clear privacy policy
  • Store data outside the EU without adequate safeguards
  • Use user data to train models
  • Do not allow complete data deletion
  • Make autonomous clinical decisions without human oversight
  • Do not offer adequate encryption

General-Purpose vs. Specialised Tools

Using general-purpose LLMs (such as ChatGPT, Gemini, or Claude) with patient data raises serious compliance concerns. These platforms were not designed for clinical use and may not offer the necessary data protection guarantees.

Platforms specialised for clinical practice, such as Mena.ai, are designed from the ground up with privacy-by-design principles and offer specific guarantees: field-level encryption, data exclusively in the EU, prohibition of use for model training, and integrated human oversight.


Frequently Asked Questions

Can I use ChatGPT or other general-purpose LLMs to analyse patient data?

This practice is strongly discouraged. General-purpose LLMs may use input data to train their models, store information on servers outside the EU, and do not offer the security and confidentiality guarantees required by the GDPR for mental health data. If you need AI assistance, use platforms specifically designed for clinical practice that guarantee GDPR compliance.

If my patient consents to session recording, does that automatically cover AI processing?

No. The GDPR requires consent to be specific to each purpose. Consent to record a session does not automatically cover the analysis of that recording by AI. You must obtain separate and specific consent for AI processing, clearly explaining what data will be processed, how, and why.

What responsibility do I have if my AI tool provider suffers a data breach?

As the data controller for your patients' data, you are obligated to ensure that your processors (including AI providers) offer sufficient data protection guarantees. You must have a valid DPA with the provider, carry out due diligence on their security practices, notify the Portuguese Data Protection Authority (CNPD) within 72 hours of becoming aware of a breach that poses a risk to data subjects' rights, and inform affected patients without undue delay when that risk is high.

Do I need to carry out a DPIA before implementing an AI tool in my practice?

Yes, it is strongly recommended and, in most cases, mandatory. A Data Protection Impact Assessment is required when processing is likely to result in a high risk to data subjects' rights. Using AI to process mental health data meets several criteria that make a DPIA necessary: processing of sensitive data, use of new technologies, and large-scale processing.


Conclusion

Artificial intelligence represents an extraordinary opportunity to improve clinical practice in psychology — from automating administrative tasks to identifying clinically relevant patterns. However, this opportunity comes with an increased responsibility for protecting patient data.

The GDPR and the EU AI Act are not obstacles to innovation — they are the framework that enables a responsible and ethical use of technology in mental health. By investing in compliance, you are protecting your patients, strengthening the therapeutic alliance, and building a sustainable and trusted practice.

The fundamental steps are clear: understand the legal obligations, choose compliant tools, update informed consent, and maintain an active oversight posture over any technology you use. Platforms like Mena.ai, designed with privacy by design and integrated compliance, simplify this journey and allow you to focus on what truly matters — caring for your patients.

This article is for informational purposes only and does not constitute specialised legal advice. For specific questions about your situation, consult a lawyer specialising in data protection.
