Artificial Intelligence is one of the most talked-about topics in business. Steven Roberts, Group Head of Marketing at Griffith College, outlines the key details businesses must know going forward.
The technology presents significant opportunities to increase productivity and efficiency. According to CSO data, more than 15% of all Irish enterprises used AI in 2024; for larger firms with more than 250 staff, the figure was 51%.
Boards and senior management must comply with regulations and codes of practice, whilst still taking on some level of risk in order to deliver on the potential benefits these technologies present. As companies put strategies in place to leverage AI models and systems, it is important to remain cognizant of the significant data privacy implications presented by AI technologies. In this short article, we will look at some of the key considerations for businesses.
Consumer Concerns
Firms must be aware of consumer concern over how their personal data may be used. Many EU citizens are regular users of AI technology; however, given the complex and opaque nature of these systems, they are often unclear as to the types of processing taking place and whether their personal information is extractable or permanently held. More broadly, commentators and thought leaders have expressed trepidation about the potential impact on society. Historian and author Yuval Noah Harari, for example, has stated artificial intelligence has ‘hacked the operating system of human civilization’.
Increased Focus from Supervisory Authorities
There is an inherent friction between the principles of data privacy and the processing required to implement large scale AI technologies. In order to effectively train an AI model, vast amounts of data are required; this often includes personal data, which can be put to uses outside the scope for which it was first obtained. This has drawn the attention of EU supervisory authorities.
France’s CNIL fined Clearview AI €20 million in 2022 for unlawful processing of personal data. In 2023, Italian authorities briefly banned the use of ChatGPT, only lifting the ban after specific changes were implemented. This was followed in 2024 by a fine of €15 million for OpenAI’s use of personal data to train the ChatGPT LLM without properly informing users and in the absence of stringent age verification measures. In Ireland, the Data Protection Commission announced in April of this year that it was commencing an inquiry into the processing of personal data within publicly accessible posts on the ‘X’ social media platform for the purposes of training the company’s Grok LLM.
The AI Act
The EU’s AI Act came into force on 1st August 2024. Space does not permit a detailed analysis of this legislation; however, in summary it is intended to provide rules and safeguards around the use of AI technologies within the EU. Despite some confusion as to whether it is a ‘new GDPR’, the AI Act is primarily product safety legislation. It takes a risk-based approach depending on the type of system under consideration and applies to companies that develop, use, distribute or import AI systems in the European Union. When it comes to the processing of personal data, whether via AI technologies or other means, the GDPR continues to apply.
Practical Steps
Taking account of consumer concerns, regulatory attention, and the risks inherent in AI technology, what steps can Irish companies take to ensure they remain GDPR compliant? In such a nascent, fast-moving sector, any checklist can only ever be indicative. However, businesses using or considering the adoption of AI systems should take account of the following:
- Consider data protection from the outset of the project. A Data Protection Impact Assessment can be a very useful tool to assess risks and identify possible mitigating actions.
- Identify a clear legal basis for any processing of personal data and ensure adherence to the GDPR’s data minimisation principle.
- Identify whether automated decision-making with legal effect may take place.
- Undertake regular data audits of the processing that is taking place.
- Assess whether the technology will involve transferring personal data to third countries.
- Put in place clear processes, policies and procedures around the use of AI within the business; this includes appropriate safety and security measures.
- Review privacy notices and public-facing information to ensure they are up to date, transparent and readily understandable to the general public.
- Provide training to staff who will be using the technology, with regular refresher sessions.
- Continue to learn and iterate; perfection will not be achieved on the first draft.
Given the pace of change, it is recommended companies continue to monitor advice and guidance from the Data Protection Commission and the European Data Protection Board, particularly regarding any areas of overlap between the GDPR and the new AI Act. Legal and compliance teams will also be an important source of counsel within the organisation.
Steven Roberts is a Chartered Director, Certified Data Protection Officer and Fellow of the Chartered Institute of Marketing. He is Vice Chair of the Compliance Institute’s Data Protection & Information Security Working Group and Group Head of Marketing at Griffith College. His forthcoming book on Data Protection for Business is due for publication by Clarus Press in 2026.