GDPR & AI: How To Start Your Review


In my two previous blogs, I tried to make it easier to understand why the ICO wants clinics to evaluate the impact of using AI on patients. Hopefully, you now have a better understanding of the potential risks and harms AI can pose for patients and clients.

In this blog, I want to start addressing what this means in practical terms. What do you need to do to be GDPR compliant if you decide to use AI?

Under GDPR, it is not enough to identify risks or acknowledge potential harms. Clinics must also be able to demonstrate that they have put in place appropriate measures to manage those risks*. So, how do you do this?

Let’s start with some questions.

  1. Do you know what AI systems you are using? (This sounds an obvious question, but your staff may be using AI that you are not aware of, and many third-party applications, such as Sage, now include AI.)
  2. What personal data do they process?

A Record of Processing Activities (RoPA) is a great starting point for answering these questions. Your RoPA should enable you to expand on the reasons you are using AI, including:

Purpose of Processing

Why is the data being processed by AI (e.g. marketing, treatment, admin)?

Categories of Personal Data

What types of personal data will be collected (e.g. names, gender, health info)?

Data Recipients

Who will the data be shared with, internally and externally (practitioners, the AI company, third parties)? Bear in mind that AI systems often use the data they process for training, and may then apply what they have learnt when processing other data – possibly outside your organisation.

Data Transfers

Will the data go outside the EU/UK? Many AI companies are based outside the EU and UK, including in the United States and China, so this is an important question.

Retention Periods

How long is the data kept?

Security Measures

Technical and organisational safeguards.
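The RoPA fields above can be sketched as a simple structured record. The example below is a minimal, hypothetical illustration – the field names and values are my own and do not represent an official RoPA template – but it shows how capturing each heading in one place makes gaps easy to spot:

```python
# Hypothetical RoPA entry for one AI processing activity.
# Field names mirror the headings above; the values are illustrative only.
ropa_entry = {
    "activity": "AI-assisted appointment reminders",
    "purpose_of_processing": "admin",          # e.g. marketing, treatment, admin
    "categories_of_personal_data": ["name", "phone number", "appointment date"],
    "data_recipients": ["practitioners", "AI provider"],
    "data_transfers_outside_uk_eu": True,      # flag transfers for extra scrutiny
    "retention_period_months": 12,
    "security_measures": ["encryption at rest", "access controls", "staff training"],
}

def ropa_gaps(entry):
    """Return the names of any RoPA fields left empty or unanswered."""
    return [field for field, value in entry.items() if value in (None, "", [])]

print(ropa_gaps(ropa_entry))  # an empty list means every field has an answer
```

Keeping one such entry per AI system you use gives you a simple way to check, system by system, which of the questions above you still cannot answer.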

Some of the above may be easy to answer; others will be more challenging. Many AI companies are reluctant to share details about their processing, retention periods and technical safeguards: AI is a highly competitive, fast-moving business, and providers do not want competitors to know what they do or how they do it.

Obviously, the more confidential the data, the more consideration needs to be given to the above. Entering just a patient's gender and age to generate a report on your clinic demographics could be argued to pose less potential harm and risk to a patient than entering a patient's name, age, sex, complaint and treatment to generate outcome reports for analysis.

Glen Mansbridge
February 2026


*Companies with fewer than 250 employees are exempt, unless the processing:

  • is not occasional
  • is likely to result in a risk to individuals' rights and freedoms, or
  • involves special categories of data (sensitive data).

Patient health data is a special category of data, and there is broad consensus that using AI can result in a risk to rights and freedoms in certain circumstances. So, most practices providing healthcare services will need to comply.