
I am continuing my series on the ICO's guidance on AI and data protection: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/
So, what does this mean for you when implementing GDPR at your practice? There is a lot to consider here, so I will break it up into more than one blog post. This month I will cover evaluating potential harms.
Let’s start with the description of what you need to do, given by the ICO:
“When considering the impact your processing has on individuals, it is important to consider both allocative harms and representational harms:
Allocative harms are the result of a decision to allocate goods and opportunities among a group. The impact of allocative decisions may be loss of financial opportunity, loss of livelihood, loss of freedom, or in extreme circumstances, loss of life.
Representational harms occur when systems reinforce the subordination of groups along identity lines. For example, through stereotyping, under-representation, or denigration, meaning belittling or undermining their human dignity.”

Bias
You may already know that most general-purpose AI applications, such as ChatGPT, have been trained on publicly available Internet content [1]. There has been a lot of concern that this has resulted in bias. So, let's explore this further. Firstly, the Internet is used far more in rich, developed countries than in poorer, less developed countries, so content from wealthier countries dominates the training data. Secondly, much of what is posted on the Internet reflects the views and prejudices of the people who post it, so the training data itself can be biased.
An example is medication for women. Historically, women were often excluded from medical trials [2]. Recent research has shown that women should often be given different doses of medicine from men [3]. As this research is so recent, could you be sure that any drug research you did via AI accounted for this bias?
So, if you used AI to help you investigate a drug for a patient, could you be in danger of causing a representational harm? Why? Because women are under-represented in many drug trials.
Also, AI companies are actively seeking to monetise their products. Google is exploring integrating ads into its AI Overviews, and Microsoft research found that 'purchasing behaviours increased by 53% within 30 minutes of a Copilot interaction' [4]. Is it possible that all this monetisation of AI could result in a biased answer to an AI query?
So, we now have a better understanding of how to evaluate risks and harms. Next month I will explore processes.
Glen Mansbridge
August 2025
1. https://help.openai.com/en/articles/7842364-how-chatgpt-and-…
2. https://www.theguardian.com/society/2025/may/07/concerning-l…
3. https://www.drugtopics.com/view/new-study-shows-sex-biases-i…
4. https://www.searchenginejournal.com/google-ai-mode-and-the-f…
If you found this article interesting, you might also like GDPR and AI, the first blog in this series.