Data privacy and AI: What are the implications?

By Business & Finance
08 January 2024

As we move towards the mid-point of the 2020s, it is clear that artificial intelligence (AI) will be one of the defining technologies of the decade. Steven Roberts, group head of marketing at Griffith College, explores the data privacy implications of AI. 


The past year has been notable for the rapid increase in the use and adoption of AI by Irish businesses. This has been driven in part by the hype and publicity surrounding ChatGPT, one of the few AI platforms to have achieved general awareness among the broader public. These technologies offer significant efficiency and productivity benefits for companies and their staff. However, they also carry risks for the processing of personal data. In this short article, we consider some of the key data privacy issues firms need to keep top of mind as they introduce AI into their businesses.

Implement Data Protection by Design and Default

Companies are rightly enthused by the potential of AI, but how can they ensure this new technology is used in a compliant manner? One of the key ways is by implementing the principle of data protection by design and default. This is a core pillar of the GDPR, as outlined in Article 25. It requires companies to consider data protection issues, including default settings, from the outset of a project. A Data Protection Impact Assessment (DPIA) is an important mechanism for achieving such compliance. Through a DPIA, a firm can assess the potential risks of a proposed project and identify mitigating actions to offset them. In some cases, the risks identified may be of sufficient scale that the project cannot go ahead in its planned form, and other, less invasive approaches may be required to achieve the same results.
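
To make this concrete, the short sketch below shows one way a firm might record DPIA risks and flag those serious enough to force a redesign. It is a minimal illustration only: the field names, the five-point scoring scale and the threshold of 15 are assumptions for the sketch, not anything prescribed by the GDPR.

```python
from dataclasses import dataclass, field

@dataclass
class Risk:
    """One identified privacy risk and its planned mitigation."""
    description: str
    likelihood: int  # illustrative scale: 1 (rare) to 5 (almost certain)
    severity: int    # illustrative scale: 1 (minimal) to 5 (severe)
    mitigation: str = "none identified"

    @property
    def score(self) -> int:
        return self.likelihood * self.severity

@dataclass
class DPIA:
    """A much-simplified record of a Data Protection Impact Assessment."""
    project: str
    risks: list = field(default_factory=list)

    def residual_high_risks(self, threshold: int = 15) -> list:
        # Risks scoring at or above the threshold suggest the project
        # may not proceed in its planned form without redesign.
        return [r for r in self.risks if r.score >= threshold]

dpia = DPIA(project="AI-assisted CV screening")
dpia.risks.append(Risk(
    description="Opaque model makes it hard to explain candidate data use",
    likelihood=4, severity=4,
    mitigation="Map data flows; publish a plain-language notice",
))
if dpia.residual_high_risks():
    print("High residual risk: consider a less invasive approach.")
```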

AI poses particular data privacy risks

AI poses particular risks given the nature of the technology. The complex manner in which processing takes place makes it difficult to provide transparent information to users of these platforms. A DPIA can assist here. Auditing and mapping data flows and their underpinning algorithms, a necessary part of any DPIA, will provide clarity as to how data is obtained, processed and retained. Firms should pay particular attention to the language used to describe such processing, which needs to be clear, simple and easily understood. It should be transparent how the technology reaches its decisions or conclusions. Companies also need to be mindful of the additional compliance requirements that apply to the data of minors and to the special categories of data identified under the GDPR.
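
The sketch below illustrates what such a mapping exercise might capture for a single data flow, including a plain-language description for users. The schema and field names are assumptions for illustration, not a mandated format.

```python
# Illustrative register for a single AI-related data flow.
data_flows = [
    {
        "source": "online job application form",
        "fields": ["name", "email", "CV text"],
        "special_category": False,   # e.g. health or biometric data
        "involves_minors": False,
        "purpose": "suggest a shortlist for recruiter review",
        "plain_language": ("We use your CV to suggest a shortlist; "
                           "a recruiter makes the final decision."),
        "retention": "12 months after the role closes",
    },
]

def needs_extra_safeguards(flow: dict) -> bool:
    # Minors' data and GDPR special category data carry additional
    # compliance requirements and should be flagged for review.
    return flow["special_category"] or flow["involves_minors"]

for flow in data_flows:
    if needs_extra_safeguards(flow):
        print(f"{flow['source']}: additional safeguards required")
```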

Automated decisions that have legal effect

Closely aligned to the need for transparency is the requirement that consumers are informed about any processing that will have a legal effect on them. Article 22 of the GDPR gives data subjects the right not to be subject to a decision based solely on automated processing that produces such effects. Examples include the ‘automatic refusal of an online credit application or e-recruiting practices without any human intervention’. In these instances, the person has the right to obtain human intervention in the decision-making process. Companies trialling new technologies in these areas should be mindful of the potential for non-compliance if such considerations are not taken into account at the outset.
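
By way of illustration only, a decision pipeline can be structured so that refusals are never issued solely by a model. The credit-scoring setup, the 0.7 threshold and the routing rule below are all assumptions for the sketch, not a prescribed design.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    applicant_id: str
    model_score: float   # hypothetical model output in [0, 1]
    outcome: str         # "approved" or "refer_to_human"

def decide(applicant_id: str, model_score: float) -> Decision:
    # The threshold and automatic approval path are illustrative.
    # A refusal would produce a legal effect for the applicant, so it
    # is never issued solely by the model: borderline and negative
    # scores are referred to a human reviewer instead (Article 22).
    if model_score >= 0.7:
        return Decision(applicant_id, model_score, "approved")
    return Decision(applicant_id, model_score, "refer_to_human")

print(decide("A-123", 0.42).outcome)   # refer_to_human
```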

Minimise the amount of data being processed

Data minimisation is another fundamental principle. In simple terms, firms should obtain only the minimum amount of data required for a specific processing operation. If your company is considering using AI technologies that process personal data, it is important to adhere to this principle. Closely aligned is the need for clear retention policies outlining how long any data will be held. Scope creep can occur if such policies are not in place and widely understood across the organisation.
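
A minimal sketch of both ideas follows, assuming a fixed allow-list of required fields and a single 12-month retention period; both choices are illustrative and would need to reflect a firm's own documented policies.

```python
from datetime import datetime, timedelta, timezone

# Data minimisation: keep only the fields needed for this specific
# processing operation. The allow-list below is illustrative.
REQUIRED_FIELDS = {"applicant_id", "cv_text"}

def minimise(record: dict) -> dict:
    return {k: v for k, v in record.items() if k in REQUIRED_FIELDS}

# Retention: flag records held longer than the documented policy
# allows. The 12-month period is an assumed policy, not a rule.
RETENTION = timedelta(days=365)

def expired(collected_at: datetime) -> bool:
    return datetime.now(timezone.utc) - collected_at > RETENTION

record = {"applicant_id": "A-123", "cv_text": "...",
          "marketing_opt_in": True}   # not needed for this operation
print(minimise(record))               # drops marketing_opt_in
```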

Ongoing training and awareness programmes

Research indicates that up to 90% of data breaches result from human error. Companies are therefore only as compliant as their least-trained member of staff. For 2024, firms must ensure that their GDPR training plans are up to date and reflect the particular risks inherent in the use of AI. In addition, a balance must be struck between the incentives put in place to adopt these technologies and the informed training needed to ensure such adoption is done in a compliant manner.

A complex governance ecosystem

Awareness generated by the GDPR has prompted the introduction of similar legislation internationally, with recent laws in countries such as China, Singapore and South Africa. This has contributed to a more complex international data privacy ecosystem. Alongside this, companies should be mindful of the EU's proposed AI Act and its potential impact on the use of this technology. A provisional deal on the Act was agreed by lawmakers in December 2023. The EU is adopting a risk-based approach, with significant fines for non-compliance. Irish firms must continue to monitor developments and assess the potential impact.

Conclusion

AI technologies are already having a significant effect on Irish businesses. Companies incorporating AI into their strategies must do so in a way that is mindful of data privacy requirements. Data protection impact assessments, transparent communication, clear policies and regular staff training are important pillars of an effective data protection culture, and can be built upon iteratively over time. Irish businesses should also be cognisant of increasing global regulatory complexity and the potential impact of the EU's proposed AI Act. Companies that safeguard personal data will be well placed to realise the benefits of AI.

Steven Roberts is group head of marketing at Griffith College and a member of its management board. He is a Certified Data Protection Officer, author of Data Protection for Marketers: A Practical Guide, and Vice-Chair of the Compliance Institute’s Data Protection & Information Security Working Group.