Pictured (L-R): Irakli Beridze and David Hickton at Dublin Tech Summit panel
This panel at Dublin Tech Summit 2025, hosted on the Main Stage on 28 May, discussed AI weaponisation, global power dynamics, real-world AI failures, ethical dilemmas, and the urgent need for regulation and resilience.
By Rye Baker
As artificial intelligence becomes more deeply embedded in society and warfare alike, world leaders and experts are urgently confronting the possibility that AI could spiral beyond human control. During this panel, experts from policy, law, investment, and industry discussed the implications of AI in the wrong hands, and the systemic vulnerabilities that have already emerged.
Irakli Beridze, Head of the Centre for Artificial Intelligence and Robotics at the United Nations Interregional Crime and Justice Research Institute (UNICRI), opened with a stark reminder of how long AI’s criminal potential has been on the radar. “In 2017, together with Interpol, we created meetings to better understand how organised crime, terrorists, criminals could use AI for crime … we have overwhelming examples of how AI is being used for crime and the potential for same.”
Professor David Hickton, a cybersecurity and disinformation expert, expanded on the consequences of unchecked AI use in global conflict. Referencing ongoing wars, he said: “The conflicts in Ukraine and Gaza have served to punctuate that we don’t have rules or norms in this area … When you talk about the war theatre, there are no rules on use of tech and use of it in war crimes. The world will need to come together and develop a framework.” His comment highlights the dangerous regulatory vacuum surrounding AI in warfare, where the line between innovation and war crime has been blurred.
From a business and investment standpoint, the ethical perspective is also shifting. Sille Pettai, CEO of SmartCap, noted a turning point in how investors approach defence-related AI technologies. “We, as investors, have stayed away from investing in defence tech for ethical reasons for years. But that needs to change now. If your adversary is deadly and lethal, as we see in Ukraine, then you need to protect yourself.”
On the ground in Ukraine, real-time AI response has already proved both vital and cautionary. Andriy Kusyy, CEO and co-founder of LetsData, described how his company developed tools to track digital disinformation efforts. “We started building this product just after the full-scale invasion of Ukraine. We detected that Russia had a network of Telegram channels across the ground. Disinformation crosses borders now and you don’t need local talent to operate it.” This example illustrates how AI enables new forms of influence operations that are borderless, fast-moving, and often anonymous.
Despite the risks, the consensus among panellists was not to fear AI but to regulate and shape it proactively. “I might be the only American who likes regulation,” Hickton said, “but we need to embrace this tech. The tsunami is coming and we need to cooperate and work together.” His call for international cooperation signals a pressing need to move beyond national interests and towards collective security measures.