AI Tribes

‘AI will never be as basic as it is now’ – AI Tribes explored challenges & opportunities associated with the burgeoning technology

By Business & Finance
26 February 2024
Pictured: Delegates taking a moment in between sessions at AI Tribes in the Science Gallery, Trinity College Dublin on Friday 23 Feb. 

‘AI will never be as basic as it is now. It’s the optimum time to learn about its capabilities, how it can benefit your company and develop robust regulation and guardrails’ was the general consensus of the expert speakers at the AI Tribes event, which took place on Friday 23 February at the Science Gallery in Trinity College Dublin.


The one-day tech conference, focused on Artificial Intelligence, Machine Learning and the Data Science landscape, attracted hundreds of delegates keen to hear more about this new and fast-moving technology. For every doomsday scenario posited, there was a reasoned counterbalance highlighting benefits to companies and the wider business landscape.

With talks from over 60 speakers, there was a palpable energy at the event, where tech leaders emphasised that AI, currently in its infancy, will never be as basic as it is today and that now is the optimum time to learn about its capabilities and to create robust regulation to govern the burgeoning industry.

Sebastian Haire, Head of Strategic Partnerships – Startups, UK/I at Google opened proceedings and spoke about the transformative power of Generative AI and how Google Cloud enables successful implementation. He shared how organisations can best assess their AI readiness to gain a competitive edge.

Demo at AI Tribes in the Science Gallery, Trinity College Dublin

Copilots are not autopilots

In a session entitled AI’s Next Leap for Enterprise, Kieran McCorry, National Technology Officer, Microsoft; Stephen Redmond, Director, Head of Data Analytics & AI at BearingPoint; and Bronagh Riordan, Head of Data & Analytics for Primark, explored how emerging AI technologies are being integrated into established businesses. Speaker Chair Barry Downes, Managing Director, Sure Valley Ventures, asked the panel for insights into the synergies between large-scale enterprises and agile startups.

Pictured (from l-r): Kieran McCorry, National Technology Officer, Microsoft; Bronagh Riordan, Head of Data & Analytics for Primark; Stephen Redmond, Director, Head of Data Analytics & AI at BearingPoint and Speaker Chair Barry Downes, Managing Director, Sure Valley Ventures

So you’ve the most transformative technology in 600 years and it has the propensity to be so profoundly disruptive, you need to have guardrails.

The panel discussed the benefits of Copilot, Microsoft’s AI software that offers assistance to users across the Microsoft Cloud and makes some complex tasks more manageable. Kieran McCorry emphasised that copilots are not autopilots. He said, ‘They are not meant to replace you, as a human, doing useful, valuable work.’

He noted that for resource-constrained startups, a copilot can prove very useful in helping with the marketing and branding aspects of the business.

McCorry also spoke about copilot-written code saying, ‘About 15-60% of the code we (at Microsoft) write for a product today is drafted, at least initially by some kind of artificial intelligence agent, and the humans just kind of finesse that then at the end of the day.’

Networking at AI Tribes

Opportunity and Regulation

Bronagh Riordan of Primark, a member of Ireland’s inaugural AI Advisory Council, spoke about the growth opportunities for the industry here. She said: ‘I know the government certainly has an ambition to have Ireland as a leading country in the EU and globally, and with our expertise and our test bed here of organisations, we’re certainly well placed to do that.’

She added, ‘There’s a chance here to look at what the opportunities are, what the risks are and how we balance those, how we upskill our workforce by education, training, reskilling or upskilling.’

Riordan also spoke about initiatives currently available. ‘There are funding opportunities, investment opportunities, training and education and all of the elements around new and novel use cases which is really exciting.’

In terms of policy, Riordan said there was a need for trustworthiness and transparency, and for organisations to help consumers understand how and why AI technologies will be used and what their impact is.

Kieran McCorry picked up on this and said that from a Microsoft perspective, the risk-based approach taken by the AI Act was the right model. ‘I think the notion that they have of prohibited use cases, high-risk use cases and then low and minimal risk is the right approach. From a Microsoft perspective, since 2017, we’ve been talking about that being a right approach to take. It’s been reflected in the principles that we’ve developed. We’ve had responsible AI principles from 2017, based on fairness, accountability, transparency and inclusivity, really well aligned with the recommendation from a high-level expert group.’

McCorry noted that Microsoft President, Brad Smith, had described AI as being the most transformative technology since the invention of the printing press in 1436. ‘So you’ve the most transformative technology in 600 years and it has the propensity to be so profoundly disruptive, you need to have guardrails.’

Stephen Redmond, BearingPoint, spoke about deep fakes. ‘People just won’t trust media anymore. Deep fakes are less of a problem for internal applications but anytime anything goes external, people will not know what to trust. We’re seeing so many stories about chatbots and deep fakes.’ He added, ‘For industry, picking the right use case is so important…there’s a lot of change management and careful thinking around how people will use the products.’

AI and Bias

Another illuminating session, Balancing the Equation: Ethical Strategies for Bias-Free AI, saw Andreea Wade, VP of Product Strategy, iCIMS, speak with Chandler C. Morse, Vice President, Corporate Affairs, Workday. They covered the strategies and best practices needed to create AI systems that are not only technologically advanced but also free from bias, and looked at the challenges of identifying and eliminating bias in AI algorithms.

Pictured (from l-r): Chandler C. Morse, Vice President, Corporate Affairs, Workday and Andreea Wade, VP of Product Strategy, iCIMS

‘Let’s start with how transformative AI has been in the HR space,’ said Morse. ‘When you look at skills data for any organisation, there is a tonne of data. It’s changing all the time and it’s spaghetti’d together. This type of data is ready-made for AI. It’s what it was developed to solve. So we’re able to take this data and take the spaghetti apart and draw inferences.’

He added that with a different approach, informed by AI and led by the user, it is easier to extract the right data and match it to the right person/job/company/platform.

‘From a public policy perspective, I’ve worked on entry-level workers for a considerable part of my career. If you’re applying for a job as an entry-level worker, you apply for 50, 100, after you put the kids to bed, when you’re exhausted, when you’re in the worst place to advocate for yourself, and you just put down the first few skills you can think of. Whereas, when you apply AI to the skills approach, you can have a user-controlled experience.’

Global Challenges

The morning sessions continued with a joint fireside on the subject of AI Solutions to Global Challenges.

Joining Moderator George Maybury, Enterprise and Public Sector Director, Dell Technologies was Marie Toft, Founder, Emotionise AI, and Dr. Irina Mirkina, Innovation Manager – AI Lead, UNICEF.

Pictured (from l-r): Marie Toft, Founder, Emotionise AI, Dr. Irina Mirkina, Innovation Manager – AI Lead, UNICEF, and George Maybury, Enterprise and Public Sector Director, Dell Technologies

The session successfully demonstrated how enterprises across various sectors are deploying AI technologies to create or augment solutions for environmental sustainability, healthcare, education, and social equity. 

“We are all interested in new technologies because of the value they contribute to the work that we are doing or the challenges that we need to solve,” said Dr. Mirkina.

She noted that UNICEF’s work spans a very wide range of areas, and AI can assist with a significant portion of it.

One example she cited concerned children with disabilities, or children in areas where improving literacy is difficult.

“They have problems accessing textbooks,” she said. “They have problems accessing critical information, including government information, including information in humanitarian context where this could be a matter of life and death.”

Dr. Mirkina suggested that assistive technologies, such as speech-to-text and text-to-speech, symbol-to-text conversion that expands a symbol into a full sentence, and machine translation, can all help affected children.

“All of this is valuable for education and is valuable for improvement across all sections from health to humanitarian response.”

Marie Toft spoke about her experience working in media and journalism, and discussed the doom-laden stories surrounding AI.

“The biggest threat … when it comes to negative things about AI [is] still people. It’s not the AI; AI will only do what you tell it to do.”

Toft is passionate about where we go next in terms of legislation: “[We need] guardrails around this, what is an incredibly powerful technology, and there’s really interesting things happening.”

Disinformation

In a striking and timely afternoon session titled Disinformation Is Real: Solving the Deep Fake Dilemma, moderator Adi Gaskell, Innovation Researcher and Writer at Forbes, the Huffington Post and the BBC, guided panellists Ezi Ozoani, AI & Ethics Research Engineer, Hugging Face, Dr Brendan Spillane, Assistant Professor, School of Information and Communication Studies, UCD and Funded Investigator, SFI ADAPT Centre, and Dan Purcell, CEO and Co-Founder, Ceartas.

Pictured (from l-r): Ezi Ozoani, AI & Ethics Research Engineer, Hugging Face, Dr Brendan Spillane, Assistant Professor, School of Information and Communication Studies, UCD and Funded Investigator, SFI ADAPT Centre, Dan Purcell, CEO and Co-Founder, Ceartas, and Adi Gaskell, Innovation Researcher and Writer at Forbes, the Huffington Post and the BBC

The session explored technological advancements around deepfake technology and the challenges they pose in the spread of disinformation. The panellists discussed the latest detection measures and countermeasures currently being developed to combat digital deception.

The ethical implications, legality, and societal impact of such technology were discussed at length.

Gaskell noted that half of the world is holding elections this year, including Ireland, where we will see European elections and, potentially, a general election.

“With that many elections, there’s all sorts of potential for things to go wrong ordinarily,” he said.

Purcell stated that he’s not quite convinced that the focus on political disinformation is where we need to be at the moment, instead suggesting the focus should be on its impact on individual users.

“What we’re seeing is manipulation in terms of outside of politics,” he said.

“You’re looking at the celebrity brand misuse and abuse, then you’re also looking at individuals. So people who are just on Instagram, people who are just living a normal life who are being deep-faked and interfaced into things like pornography … That’s quite alarming.”

Dr. Spillane noted that he does believe there is a serious risk from disinformation to society, to elections, and to the integrity of elections.

He also discussed a case in Hong Kong in which an individual, believing that they were having a conversation with their manager and financial director, transferred $25 million to another company.

“So it’s essentially combining the phishing or the spear phishing, targeting of emails and invoice misdirection … You think you’re having a conversation with your chief financial manager or your company finance director.”

Ozoani spoke about how deepfakes and AI can augment scams that have existed in some form or another since the introduction of the internet.

“[You can] garner sensitive textual information somehow online. Now you can just parse that into a text-to-speech bot … You can now synthesise their voice and then that’s a step further.”


Partners

Partners for the event included Integral Ad Science, Google, BearingPoint, Skillnet, ICON, Workday, Optum, Presidio, ADAPT, FTI Consulting, Version 1, Sure Valley Ventures, Sigma Software Global Logic and Accenture.

For more information, you can see the AI Tribes website and contact brianh@dublintechsummit.com.


Read More:

How to democratize AI 

The Future of Work with AI disruption at its core