CogX 2019: AI ethics debate sidelined
CogX 2019 is the Glastonbury of Artificial Intelligence (AI). The ‘Ethics’ tent was billed as the event’s moral centre. That was the claim, in theory. In practice it was stuck out the back, and it felt like an afterthought.
Under a bit of canvas over a patch of wet grass near King's Cross station in central London, speakers and attendees were at the mercy of the elements.
The moral high ground, which should have been a central focus for everyone, was missing. It was out of sight and, seemingly, out of mind.
This was a strong and perhaps unconscious metaphor for how leaders are failing to face up to the glaring faults in AI technology. The troubling, unresolved ethical issues are enormous. They require new regulation and careful implementation.
But the ‘Ethics’ tent’s marginal location at CogX sadly confirmed that ethics does not generate money, and can therefore be sidelined. This is shameful.
Yet if these moral issues are not tackled, they will ultimately cost businesses and organisations vast sums of money, accompanied by a breakdown in trust.
We have already seen the start of that. In 2017 it was revealed that “racist” AI technology was being used to make risk assessments about offenders in the US.
The moral debate about AI goes far beyond jobs being lost as workers are replaced by robots. Little has yet been done to address the bias inherent in the data used to train AI systems. Indeed, it is barely acknowledged.
This can result in unfair decisions that amplify existing inequalities. Or the recommendations can be just plain wrong and dangerous: IBM Watson Health’s Oncology AI gave out incorrect cancer advice.
I listened to speakers in the ‘Ethics’ tent bravely delivering their talks with fleece blankets wrapped around their shoulders. When I spoke to representatives at the Expo, it became clear that very little progress had been made on this well-recognised problem. Those who had come to CogX to hear about new solutions were deeply frustrated.
There is a huge disconnect between leadership and the realities of AI. The danger is growing that, in the rush to play up the potential and glamour of new technologies loaded with exciting buzzwords and claims, the C-suiters who make the final decisions will invest in “cutting edge” applications whose workings, let alone full implications, they have neither the knowledge nor the time to comprehend.
Leading organisations and companies could find themselves in a similar predicament to the US justice system, which has been saddled with a useless piece of “cutting edge” technology that has cost it money and reputational damage, and has hurt ordinary people along the way.
Morality and humility should be at the centre of all decisions made by leaders. That is why next year I expect to see the Ethics tent placed, highly visibly, at the centre of CogX. Never again should it be stuck out in the margins as it was this week.
Rebecca Geach is coordinator for Thinking the Unthinkable (TTU). She is also an active member of the ‘Women Leading in AI’ network, a group established to address the bias in algorithms caused by a lack of diversity and inclusivity in Artificial Intelligence (AI). She attended the first day of the CogX conference in London for TTU, on June 10th.