EU AI Act: What It Means for Psychologists in Portugal

Tags: EU AI Act, regulation, AI, psychologists, GDPR

A complete guide to the EU AI Act for psychologists in Portugal: risk classification, human oversight, GDPR, and how to choose compliant AI tools.

Artificial intelligence is transforming clinical practice in mental health. From automatic analysis of session notes to emotional pattern detection, AI tools offer extraordinary possibilities for psychologists. However, with this potential comes unprecedented regulatory responsibility. The European Artificial Intelligence Regulation — the EU AI Act — is the world's first comprehensive legal framework for AI and has direct implications for all mental health professionals who use or plan to use artificial intelligence-based technologies.

In this comprehensive guide, we explain everything you need to know about the EU AI Act, how it interacts with the GDPR, and what concrete steps you should take to ensure your clinical practice is compliant.


1. What Is the EU AI Act and When Does It Come Into Force

Regulation (EU) 2024/1689 — commonly known as the EU AI Act — was formally adopted by the European Parliament in March 2024 and published in the Official Journal of the European Union in July 2024. It is the world's first comprehensive regulation dedicated exclusively to artificial intelligence, establishing clear rules on the development, marketing, and use of AI systems throughout the European Union.

Implementation Timeline

The EU AI Act follows a phased implementation timeline:

  • August 2024: Regulation enters into force
  • February 2025: Prohibition of unacceptable AI practices (Chapter II, Article 5)
  • August 2025: Rules for general-purpose AI models (GPAI)
  • August 2026: Obligations for high-risk AI systems listed in Annex III
  • August 2027: Obligations for high-risk AI systems embedded in regulated products (Annex I)

For psychologists in Portugal, the most relevant milestone is August 2026, when obligations for AI systems classified as high-risk come into effect — a category that includes many tools used in mental health.

Objectives of the Regulation

The EU AI Act aims to:

  • Protect fundamental rights: Ensure that AI does not compromise the dignity, privacy, and self-determination of European citizens.
  • Establish trust: Create a framework that allows citizens and professionals to trust AI tools.
  • Promote responsible innovation: Balance technological innovation with user protection.
  • Harmonise rules across the EU: Prevent regulatory fragmentation between Member States.

2. Why Psychologists Should Pay Attention

It may seem that the EU AI Act is a concern exclusively for technology companies, but the reality is quite different. This regulation directly affects any professional who uses AI systems in their practice — not just those who develop them.

The Concept of "Deployer"

The EU AI Act introduces the concept of "deployer": any natural or legal person who uses an AI system under their authority, except in the course of a purely personal, non-professional activity. When a psychologist uses an AI tool to analyse clinical notes, generate reports, or assist in diagnosis, they are considered a deployer and have specific obligations.

Mental Health Data Is Especially Sensitive

According to a study by the EU Agency for Fundamental Rights (FRA), 67% of European citizens consider mental health data the most sensitive category of personal information. This perception is reflected in the regulatory framework: mental health data benefits from enhanced protection under both the GDPR and the EU AI Act.

Specific Risks in Mental Health

  • Algorithmic bias: AI systems trained on non-representative populations may produce biased results, disproportionately affecting certain demographic groups.
  • Over-reliance: The risk of automation bias — the tendency to uncritically accept AI suggestions — is particularly dangerous in clinical contexts.
  • Impact on clinical decisions: An incorrect AI recommendation can influence diagnoses, treatment plans, and ultimately patient health.
  • Loss of patient autonomy: Using AI without adequate transparency can compromise informed consent and the therapeutic relationship.

The Market Is Growing Rapidly

The global AI in mental health market was valued at $1.2 billion in 2023, with projected growth to $4.8 billion by 2028 (CAGR of 32%). In Portugal, more than 40% of clinical psychologists report already using some form of AI-assisted technology, according to data from the Portuguese Order of Psychologists (Ordem dos Psicólogos Portugueses). This rapid adoption makes understanding the regulatory framework even more urgent.


3. Risk Classification for AI Systems in Mental Health

The EU AI Act adopts a risk-based approach, classifying AI systems into four categories:

Unacceptable Risk (Prohibited)

Practices completely prohibited since February 2025:

  • Social scoring systems
  • Subliminal manipulation that causes harm
  • Exploitation of vulnerabilities of specific groups
  • Real-time biometric identification in public spaces (with exceptions)

Relevance for psychologists: Any AI tool that subliminally manipulates the behaviour of vulnerable patients (for example, people with severe mental disorders) is strictly prohibited.

High Risk

This is the most relevant category for mental health. According to Annex III of the regulation, AI systems used in the following areas are considered high-risk:

  • Access to essential services: Including healthcare.
  • Medical devices: AI systems integrated into or functioning as medical devices.
  • Employment and worker management: Relevant for mental health organisations.

Examples in clinical practice:

  • AI tools that assist in diagnosing mental disorders
  • Systems that recommend treatment plans
  • Triage algorithms that determine care priority
  • Predictive risk analysis tools (e.g., suicide risk)

Limited Risk

Systems with transparency obligations:

  • Chatbots that interact with patients (must identify themselves as AI)
  • Systems that generate content (must label it as AI-generated)

Examples: Virtual assistants for psychoeducation, initial triage chatbots, automatic report generators.

Minimal Risk

No additional specific obligations:

  • Email spam filters
  • Simple scheduling assistants
  • Basic transcription tools (without clinical analysis)

Obligations for High-Risk Systems

Providers of high-risk AI systems must ensure:

  1. Risk management system: Continuous identification, analysis, and mitigation of risks.
  2. Data governance: Relevant, representative, and high-quality training data.
  3. Technical documentation: Detailed system specifications.
  4. Event logging: Automatic traceability of system operation.
  5. Transparency: Clear instructions for use for deployers.
  6. Human oversight: Mechanisms that allow effective human intervention.
  7. Accuracy, robustness, and cybersecurity: Adequate levels of performance and protection.

4. Article 14: Human Oversight — What Changes in Practice

Article 14 of the EU AI Act is arguably the most relevant for psychologists. It establishes that high-risk AI systems must be designed to allow effective human oversight.

Fundamental Principles

The article establishes that:

  • AI systems must include appropriate interfaces that allow oversight by individuals.
  • Oversight must be proportional to the risks and the level of autonomy of the system.
  • The human must be able to understand the capabilities and limitations of the system.
  • The human must be able to correctly interpret the system's outputs.
  • The human must be able to decide not to use the system in any situation.
  • The human must be able to intervene in the system's operation or stop it.

What This Means in Clinical Practice

AI as an Assistant, Never as a Decision-Maker

The EU AI Act reinforces what good clinical practice already requires: the clinical decision always belongs to the professional. An AI system may suggest diagnostic hypotheses, identify relevant patterns, or propose interventions, but the final decision always belongs to the psychologist.

This means that AI tools that make autonomous clinical decisions — without the possibility of human review — are not admissible in European clinical practice.

Right to Override

Psychologists must have the ability to reject or modify any suggestion from the AI system. This goes beyond simply being able to ignore a recommendation — it means the system must actively facilitate disagreement and modification of its outputs.

In practice, the Mena.ai platform implements this principle through a system where AI-generated analyses are presented as editable suggestions. The therapist reviews, modifies, and approves each analysis before it is incorporated into the clinical record. The therapist retains full control.

Competence and Training

Article 14 requires that people responsible for oversight have the necessary competence, training, and authority. For psychologists, this means:

  • Understanding the capabilities and limitations of the AI tool they use
  • Knowing how to critically interpret the results
  • Being alert to possible biases or errors
  • Maintaining professional development regarding the technologies they use

Documenting Oversight

It is advisable to document the human oversight process, including:

  • When and how AI suggestions were reviewed
  • What modifications were made
  • The clinical justification for accepting or rejecting AI recommendations
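The documentation practice described above can be made concrete with a small data structure. The sketch below is purely illustrative (the record fields and function names are our own, not part of any specific platform): it shows one way to log each review decision together with its clinical justification, and to refuse entries that lack one.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Literal

# Hypothetical record for documenting human oversight of one AI suggestion.
# Field names are illustrative, not taken from any specific platform.
@dataclass
class OversightRecord:
    suggestion_id: str
    decision: Literal["accepted", "modified", "rejected"]
    clinical_justification: str
    reviewed_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def log_review(records: list, suggestion_id: str,
               decision: str, justification: str) -> OversightRecord:
    """Append one documented review decision to the oversight log."""
    if decision not in ("accepted", "modified", "rejected"):
        raise ValueError(f"unknown decision: {decision}")
    if not justification.strip():
        # A decision without a clinical justification is not documented oversight.
        raise ValueError("a clinical justification is required")
    record = OversightRecord(suggestion_id, decision, justification)
    records.append(record)
    return record

log = []
log_review(log, "sugg-001", "modified",
           "Narrowed the suggested hypothesis; symptoms better fit GAD criteria.")
```

Even a log this simple covers the three points above: when the suggestion was reviewed, what was decided, and why.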

Feedback System: Article 14 in Action

One of the implicit requirements of Article 14 is that users can report problems and provide feedback on AI system performance. Mena.ai implements a per-message feedback system that allows the therapist to evaluate each AI output with thumbs up/down, contributing to continuous system improvement while simultaneously fulfilling human oversight requirements.


5. GDPR + EU AI Act: Dual Protection for Patient Data

The EU AI Act does not replace the GDPR — it complements it. Psychologists in Portugal must comply with both regulations simultaneously, creating a dual layer of protection for patient data.

Where the Regulations Overlap

  • Personal data: the GDPR provides comprehensive protection; the EU AI Act focuses on its use by AI systems.
  • Consent: the GDPR requires it for processing; the EU AI Act adds transparency about AI use.
  • Transparency: the GDPR requires informing about data processing; the EU AI Act requires informing about AI use.
  • Rights: the GDPR grants access, rectification, and erasure; the EU AI Act adds explainability and contestation.
  • Impact assessment: the GDPR requires a DPIA for high-risk processing; the EU AI Act a conformity assessment.
  • Oversight: the GDPR makes a DPO mandatory in certain cases; the EU AI Act makes human oversight mandatory.

Enhanced Consent

If you already obtain informed consent for data processing (GDPR), you now also need to specifically inform patients about:

  • The use of AI tools in your practice
  • What data is processed by AI
  • How AI influences (or does not influence) clinical decisions
  • Their rights regarding the use of AI

Algorithmic Transparency

The GDPR (Article 22) already grants data subjects the right not to be subject to fully automated decisions with significant effects. The EU AI Act reinforces this right by requiring:

  • Explainability of AI results
  • Documentation of models used
  • Information about training data
  • Contestation mechanisms

Fundamental Rights Impact Assessment (FRIA)

Article 27 of the EU AI Act requires certain deployers of high-risk AI systems (bodies governed by public law, private entities providing public services, and deployers of some specific systems such as credit scoring) to carry out a Fundamental Rights Impact Assessment (FRIA) before putting the system into use. This assessment complements the GDPR's DPIA and must include:

  • Description of the AI system and its purpose
  • Impact on patients' fundamental rights
  • Risk mitigation measures
  • Monitoring and oversight plan

Encryption and Security

Both regulations require robust security measures. In practice, this means:

  • Encryption of data at rest and in transit: Mandatory for health data.
  • Field-level encryption: For especially sensitive data such as clinical notes and diagnoses.
  • Granular access control: Different access levels by role.
  • Access auditing: Records of who accessed what data and when.

Mena.ai's clinical management implements field-level encryption with organisation-specific keys, ensuring that even in the event of unauthorised database access, clinical data remains unreadable.


6. How Mena.ai Already Meets These Requirements

The Mena.ai platform was designed from the outset with the principles of privacy by design and compliance by design. Here is how it addresses each regulatory requirement:

Human Oversight (Article 14)

Mena.ai's AI-assisted analysis follows a "human-in-the-loop" model:

  • AI as an assistant: AI analyses transcripts and session notes, suggesting clinical insights, but never makes autonomous decisions.
  • Mandatory review: Each generated analysis must be reviewed and approved by the therapist before being saved.
  • Full editing: The therapist can modify, supplement, or reject any AI suggestion.
  • Integrated feedback: Per-message rating system (thumbs up/down) for continuous improvement.
  • Decision logging: Automatic documentation of when the therapist accepts, modifies, or rejects AI suggestions.
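The "human-in-the-loop" flow described above can be sketched in a few lines of code. This is a conceptual illustration, not Mena.ai's actual implementation: the class and function names are hypothetical. The key design point is that saving an unapproved analysis is impossible by construction, not merely discouraged.

```python
# Minimal sketch of a "human-in-the-loop" gate: an AI-generated analysis
# can only reach the clinical record after explicit therapist approval.
# All names here are illustrative, not an actual Mena.ai API.

class PendingAnalysis:
    def __init__(self, ai_text: str):
        self.text = ai_text       # starts as the AI suggestion
        self.approved = False

    def edit(self, new_text: str):
        """The therapist may rewrite the suggestion freely."""
        self.text = new_text

    def approve(self):
        self.approved = True

def save_to_record(record: list, analysis: PendingAnalysis):
    """Persisting without approval is a hard error, not an option."""
    if not analysis.approved:
        raise PermissionError("analysis must be reviewed and approved first")
    record.append(analysis.text)

record = []
a = PendingAnalysis("Possible avoidance pattern in sessions 3-5.")
a.edit("Avoidance pattern in sessions 3-5; discuss exposure pacing next session.")
a.approve()
save_to_record(record, a)
```

Structuring the gate this way means the review step cannot be skipped by accident, which is exactly the guarantee Article 14 asks for.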

Transparency and Explainability

The dedicated page on how the AI works explains in accessible terms:

  • What language models are used
  • How data is processed
  • What limitations the system has
  • How the therapist maintains control

This transparency is fundamental both for therapists (as deployers) and for patients (as data subjects).

Data Protection (GDPR + EU AI Act)

  • Field-level encryption: Clinical data encrypted with company-specific keys.
  • Data in the EU: All infrastructure is hosted in the European Union.
  • Data minimisation: AI processes only the strictly necessary data.
  • No training on patient data: Patient data is never used to train or improve AI models.
  • Access control: Role-based permissions (RBAC) with multi-factor authentication.

Risk Management

  • Continuous monitoring: Tracking AI performance and outputs.
  • Regular testing: Periodic assessment of bias and accuracy.
  • Contingency plan: Full functionality without AI in case of failure.
  • Regulatory updates: Active monitoring of EU AI Act developments.

Multi-tenancy and Data Isolation

All data is isolated by organisation (company_id), ensuring that:

  • No clinic can access another's data
  • Encryption keys are independent per organisation
  • Data deletion is complete and verifiable
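As a rough illustration of the isolation guarantee above, the sketch below scopes every read by `company_id` so that one organisation's query can never return another's rows. The class is hypothetical; a real deployment would also enforce this at the database layer (for example with row-level security), not only in application code.

```python
# Sketch of tenant isolation: every read is scoped by company_id, so one
# organisation can never see another's rows. Illustrative only.

class ClinicalNoteStore:
    def __init__(self):
        self._rows = []   # each row: {"company_id": ..., "note": ...}

    def add(self, company_id: str, note: str):
        self._rows.append({"company_id": company_id, "note": note})

    def notes_for(self, company_id: str) -> list:
        # The tenant filter is applied on every query, with no bypass path.
        return [r["note"] for r in self._rows if r["company_id"] == company_id]

store = ClinicalNoteStore()
store.add("clinic-a", "Session 1 summary")
store.add("clinic-b", "Intake notes")
```

Combined with per-organisation encryption keys, this ensures that a query scoped to one clinic cannot even decrypt another clinic's data, let alone return it.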

7. Practical Checklist for Psychologists Choosing Compliant AI Tools

Before adopting any AI tool in your practice, use this checklist to assess compliance:

Transparency and Information

  • Does the provider clearly explain how the AI works?
  • Is accessible technical documentation available?
  • Is it clear what data is processed and how?
  • Does the provider identify the system's risk classification?

Human Oversight

  • Can you review all AI suggestions before they are applied?
  • Can you modify or reject AI outputs?
  • Can you disable AI at any time without losing functionality?
  • Is there a feedback mechanism to report problems?
  • Does the system work without AI as a fallback?

Data Protection

  • Is data stored in the EU?
  • Is there adequate encryption (at rest and in transit)?
  • Is patient data not used to train models?
  • Is there an appropriate data processing agreement (DPA)?
  • Does the privacy policy specifically cover AI use?

Security

  • Is multi-factor authentication available?
  • Are access events logged and auditable?
  • Does the provider conduct regular security testing?
  • Is there an incident response plan?

Accuracy and Reliability

  • Does the provider disclose AI performance metrics?
  • Has the system been tested with diverse populations?
  • Are there alerts for low-confidence situations?
  • Has the system been validated in a real clinical context?

Regulatory Compliance

  • Does the provider demonstrate GDPR compliance?
  • Is there an EU AI Act compliance plan?
  • Has the system undergone conformity assessment (if high-risk)?
  • Is a data protection officer identified?

Ethical Aspects

  • Does the system respect professional confidentiality?
  • Are there measures against algorithmic bias?
  • Is AI presented honestly to patients?
  • Does informed consent cover AI use?

8. Frequently Asked Questions

Does the EU AI Act apply to me as an individual psychologist?

Yes. If you use AI tools in your clinical practice, you are considered a "deployer" under the regulation. Your obligations vary depending on the risk classification of the system you use, but at a minimum include effective human oversight and transparency towards patients.

Do I need to stop using AI until August 2026?

Not necessarily. The EU AI Act does not prohibit the use of AI in mental health — it regulates it. The important thing is to ensure that the tools you use meet the applicable requirements. The period until August 2026 is precisely for providers and users to adapt.

What penalties are envisaged?

Penalties can be significant:

  • Up to EUR 35 million or 7% of annual worldwide turnover, whichever is higher, for violations of the prohibited practices.
  • Up to EUR 15 million or 3% of turnover for violations of other obligations.
  • Up to EUR 7.5 million or 1% of turnover for supplying incorrect information to authorities.

For SMEs and startups, each fine is capped at the lower of the two amounts (the fixed sum or the percentage of turnover).

Is the informed consent I already use sufficient?

Probably not. Traditional informed consent covers clinical treatment and, if updated for the GDPR, the processing of personal data. With the EU AI Act, you need to add specific information about AI use: what tools you use, how they work, what data they process, and how you maintain human oversight.

Can I use ChatGPT or other general-purpose LLMs with patient data?

This is a critical question. Using general-purpose LLMs (such as ChatGPT, Claude, or Gemini) with patient data raises serious concerns:

  • Data may be used for training: Many general-purpose LLMs use input data to improve their models.
  • No specific encryption: Data is processed on shared infrastructure.
  • Lack of clinical compliance: These systems were not designed for clinical use.

The recommendation is to use platforms specifically designed for clinical practice, such as Mena.ai, which processes data securely and never uses it for model training.

Does the Portuguese Order of Psychologists have guidelines on AI?

The Portuguese Order of Psychologists (Ordem dos Psicólogos Portugueses) has been following this topic and has issued general recommendations on the use of technology in clinical practice. We recommend regularly consulting the Order's publications for updated guidance. The Order's Code of Ethics already establishes fundamental principles — such as professional competence, informed consent, and confidentiality — that equally apply to the use of AI.

What if my AI provider is not compliant?

As a deployer, you have the responsibility to verify the compliance of the tools you use. If you discover that your provider does not meet EU AI Act requirements, you should:

  1. Suspend use of the system for high-risk purposes.
  2. Contact the provider to demand compliance.
  3. Seek compliant alternatives.
  4. Document the measures taken.

How can I stay updated on the regulation's evolution?

Some recommended sources:

  • European Commission: Official EU AI Act page
  • AI Act Explorer: Interactive tool for navigating the regulation
  • CNPD: Portuguese National Data Protection Commission (Comissão Nacional de Proteção de Dados)
  • Portuguese Order of Psychologists: Professional guidelines
  • Mena.ai Blog: Regular updates on regulation and technology in mental health

Conclusion

The EU AI Act represents a historic milestone in artificial intelligence regulation and has concrete implications for all psychologists in Portugal who use or plan to use AI tools. Far from being an obstacle to innovation, this regulation lays the foundation for responsible and ethical use of AI in mental health — something that benefits both professionals and patients.

The essential points to remember are:

  1. Human oversight is non-negotiable: AI must always be a tool at the service of the psychologist, never a substitute for clinical judgement.
  2. Transparency is mandatory: Patients must be informed about how AI is used in your practice.
  3. The GDPR and EU AI Act work together: Compliance with one does not exempt you from compliance with the other.
  4. Choosing the right tool is critical: Opting for platforms designed for clinical practice, with built-in regulatory compliance, greatly simplifies meeting your obligations.
  5. The deadline is August 2026: There is time to prepare, but the time to start is now.

By choosing a platform like Mena.ai, designed with the principles of privacy by design, human oversight, and regulatory compliance from its inception, you are taking a decisive step to protect your patients, your practice, and your professional reputation.

Artificial intelligence is an extraordinary tool for mental health. Used responsibly, ethically, and in compliance with regulations, it can transform the quality of care you provide to your patients. The EU AI Act is the framework that allows us to achieve this goal with confidence.

This article is for informational purposes and does not substitute specialised legal advice. For specific questions about your situation, consult a lawyer specialising in AI regulation and data protection.
