‘No fear or favour’: Irish data regulator defends relationship with big tech

The media regulator said it would be helpful to ban companies from making available technologies that can create sexualised images of children.

By Gráinne Ní Aodha, Press Association

The data and privacy regulator has defended how it holds big tech to account, arguing that it has levied billions of euros in fines against multinational companies.

The head of the Data Protection Commission told politicians that there was “no fear or favour” in how they apply the law.

Coimisiún na Meán also said it was not “inherently illegal” for companies to provide people with access to an AI tool capable of creating child sexual abuse images, even though the creation of such images was an offence.

Paul Murphy, People Before Profit-Solidarity TD, speaking to the media at Leinster House in Dublin. Photo: Damien Eagers/PA.

The Data Protection Commission (DPC) and Coimisiún na Meán were before the committee on Tuesday in relation to deepfake images and the recent controversy involving the artificial intelligence (AI) tool Grok.

The social media site X, which was known as Twitter before it was bought by tech billionaire Elon Musk, has come under criticism over Grok, which has been accused of generating sexualised images, including of children.

The controversy highlighted a possible loophole in relation to regulations in Ireland around non-consensual images.

Senior Government figures have insisted that there is sufficient legislation to investigate and prosecute child sexual abuse material and non-consensual sexualised images of adults that have been generated through AI tools online.

However, a senior garda said that intimate abuse imagery of adults needs to be shared to constitute an offence and a complainant is required to prompt an investigation.

The controversy also raised questions over whether children’s access to social media should be restricted by government.

I can certainly say that my predecessor and current commissioners were very, very focused on regulating everyone, be it public or private sector, there's no fear or favour, a guarantee of that, and we will apply the law as we apply the law
Des Hogan, Data Protection Commission

The European Commission is, with the assistance of Coimisiún na Meán, conducting an investigation into X’s compliance with its obligations after the Grok controversy.

The DPC announced last week that it was investigating X over non-consensual, intimate or sexualised images being allegedly created through generative AI involving the personal data of EU citizens, including children.

Appearing before the committee on Tuesday, chairman of the DPC Des Hogan said the transformative potential of AI will only be realised if its “substantive risks and potential harms” are addressed.

Asked by People Before Profit TD Paul Murphy if the DPC was “too close to big tech”, he said: “I think we’ve over four billion levied fines at the moment.

“I can certainly say that my predecessor and current commissioners were very, very focused on regulating everyone, be it public or private sector, there’s no fear or favour, a guarantee of that, and we will apply the law as we apply the law.

“We work very closely with our peer regulators, we listen to their views, we listen to civil society organisations.”

X’s AI chatbot Grok
The AI app Grok on the App Store on an iPhone. Photo: Yui Mok/PA.

He said that one of the difficulties the DPC faces is litigation over the decisions it makes against big tech.

He said all fines levied against large platforms were being challenged, bar two, while in the public sector all fines had been accepted bar one: the €550,000 fine imposed on the Department of Social Protection over the Public Services Card, which is due before the High Court next month.

“We are not only being litigated under statutory appeal, which is provided for in the Data Protection Act, we’re concurrently being judicially reviewed in each of those decisions, and that is a difficulty.”

He added: “I would say that we’re up against very well-resourced legal teams, if I could put it like that.

“By and large, we feel that they’re defendable, they were taken over a number of years.

“We get criticised for taking inquiry decisions over a number of years. We do that because we’re very careful, and we give fair procedural rights to the parties.”

I think there's certainly risks around the way people interact with generative AI that could potentially be addressed by broadening the categories of high-risk systems to include a wider range of chatbots and generative AI tools
Jeremy Godfrey, Coimisiún na Meán

Executive chairman of Coimisiún na Meán Jeremy Godfrey said the creation of child sex abuse material is illegal under Irish law and social media platforms “must remove it when reported”.

He said that because the non-consensual sharing of intimate images is a criminal offence in Ireland under Coco’s Law, there were consequential obligations on platforms to remove that material.

But he said it was not “inherently unlawful” to deploy an AI system capable of creating child sex abuse material and said that action to prohibit this could be useful.

“It’s unlawful in the Irish law to produce the imagery, but it’s the deployment of the tool that would be prohibited under the AI Act.

“So at the moment, it’s not a criminal offence under Irish law to deploy an AI system that can be used in that way, but using it in that way is a criminal offence.

“The AI Act puts obligations not on the users of AI, not on the people who are putting prompts in, but on the developers and deployers of AI models and AI systems.

“So it would be another tool so that people weren’t provided with the ability to break the law in such an easy way.”

Asked if there are other areas of high risk in relation to AI, he cited generative AI being used as companions and therapists.

“There are some horror stories of it having very severe and damaging effects on people’s mental health.

“So I think there’s certainly risks around the way people interact with generative AI that could potentially be addressed by broadening the categories of high-risk systems to include a wider range of chatbots and generative AI tools.”

He added: “We don’t have a very specific proposal about how that might be done.

“It’s something which is in the European Commission’s remit to change, so some review to look at how the list of high-risk systems might be added to reflect some of the risks created by generative AI, we think would be a good idea.”
