There is a strong case that artificial intelligence (AI) is now the central topic in technology. While the computer science underpinning AI has been in development since the 1950s, the rate of innovation has gone through multiple step changes in the past ten years.
The technological reasons for this are well understood: the resurgence of neural networks; an increase in semiconductor processing power; and a strategic shift away from AI systems that rely on parameter-driven algorithms towards self-reinforcing learning, machines that get smarter the more data they are fed and the more scenarios they negotiate.
Development has been open and collaborative, and the benefits of AI in process efficiency and, potentially, accuracy are clear. As a result, R&D activity, pilots and commercial deployments stretch to virtually every sector of the economy, from healthcare to automotive manufacturing to telecom networks. A recent Vodafone survey indicated that a third of enterprises already use AI for business automation, with a further third planning to do so. Take-up on this scale, and at this rate, could put AI on a level with prior epochal shifts such as electricity, the combustion engine and personal computing.
Two sides to each coin
Whether that actually happens depends on how the technology is managed. I spend a lot of time talking with major telecom and technology companies. While it’s clear AI is a major point of interest for nearly everyone, the discussion is still pitched in generalities. Paraphrasing:
“AI is the Fourth Industrial Revolution.”
“We know AI is big and we want to do something with it, but we don’t know what.”
“We’re moving to be an AI-first company.”
“How can we win with AI?”
“We’re a far more efficient company because of AI.”
The ebullient tone is to be welcomed.
Far less talked about, however, are the ethical and legal implications of trading off control for efficiency. It’s fairly clear that cognitive dissonance is at work: the benefits blind us to the risks.
How do you answer these?
A crucial faultline is the balance between programmed and interpretive bias. That is to say, how far do machines act the way humans have programmed them to act (reflecting our value sets), and how far on their own learned ‘judgement’? This has a direct bearing on accountability, as the sketch below suggests.
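To make the distinction concrete, here is a minimal sketch in Python using an entirely hypothetical loan-approval rule and training data (none of it drawn from any real system). In the programmed case the value set is legible in the source code; in the learned case the ‘judgement’ is whatever pattern the historical data happens to contain.

```python
# Illustrative sketch (hypothetical rule and data): programmed vs. learned decisions.

# Programmed bias: the value set is explicit and auditable in the rule itself.
def approve_programmed(income: float, debt: float) -> bool:
    # A human chose this threshold, so accountability traces to the rule's author.
    return income - debt > 20_000

# Interpretive (learned) bias: the decision boundary is fitted to historical
# outcomes, so any bias baked into that history is absorbed silently.
def fit_threshold(history: list) -> float:
    # Crude one-parameter "model": midpoint between the lowest approved income
    # and the highest rejected income in the training data.
    approved = [income for income, ok in history if ok]
    rejected = [income for income, ok in history if not ok]
    return (min(approved) + max(rejected)) / 2

history = [(15_000, False), (22_000, False), (35_000, True), (60_000, True)]
threshold = fit_threshold(history)  # 28_500 for this particular history

income, debt = 30_000, 5_000
print("programmed:", approve_programmed(income, debt))  # True: the rule is legible
print("learned:", income > threshold)                   # True: why? ask the data
```

The toy model is beside the point; what matters is where the value judgement lives: in the first case, in a rule someone wrote and can be held to; in the second, in whichever history the model was trained on.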
To make this point, let’s pose a series of questions that draw on how AI is being used in different industries.
Autonomous vehicles
If a self-driving car faces the inevitability of a crash, how does it decide what or who to hit? If that same self-driving car is deemed to be at fault, who bears responsibility? The owner? The car manufacturer? A third-party AI developer (if the technology was outsourced)?
Criminal justice
If an algorithm is tasked with predicting the likelihood of reoffending among incarcerated individuals, what parameters should it use? If that same algorithm is found to have a predictive accuracy no better than a coin flip, who should bear responsibility for its use?
Social media
If Facebook develops an algorithm to screen fake news out of its platform, what parameters should it use? If content subsequently served to people’s news feeds is deemed intentionally misleading or fabricated, does responsibility lie with the publisher or with Facebook?
I chose these examples for a number of reasons. First, they are real cases rather than hypothetical musings; while they emanate from specific companies, the implications extend to any firm seeking to deploy AI. Second, they illustrate the difficulty of extracting sociological bias from algorithms designed to mimic human judgement. Third, they underline the fact that AI is advancing faster than regulations and laws can adapt, pushing the debate into the esoteric realms of moral philosophy. Modern legal systems are typically based on the accountability of specific individuals or entities (such as a company or government). But what happens when that individual is replaced by an inanimate machine?
No one really knows.
A question of trust
Putting aside the significant legal ramifications, there is an emerging story about the potential impact on trust. The rise of AI comes at a time when consumer trust in companies, democratic institutions and government is falling across the board. Combined with the ubiquity of social media and the rising share of millennials in the overall population, this has pushed the power of consumers to unprecedented levels.
There is an oft-made point that Google, Facebook and Amazon have an in-built advantage as AI takes hold because of the vast troves of consumer data they control. I would challenge this on two levels. First, AI is a horizontal science that can, and will, be used by everyone; the algorithm that benefits Facebook has no bearing on an algorithm that helps British Airways.
Second, the liability side of the data equation has crystallised in recent years with the Cambridge Analytica scandal and GDPR. This is reflected in what you might call the technology paradox: while people still trust the benevolence of the tech industry, far less faith is placed in its most famous children (see chart [1]).
In an AI world, trust and the broader concept of social capital will move from a CSR concern to a boardroom priority, and potentially even to a metric reported to investors.
This point is of heightened importance for telecom and tech companies given their central role in providing the infrastructure for a data-driven economy. Perhaps it is not surprising, then, that Google, Telefonica and Vodafone are among a vanguard seeking proactively to lay down a set of guiding principles for AI rooted in the values of transparency, fairness and human advancement. The open question, given the ethical issues posed above, is how actions will be tracked and, if necessary, corrected. Big questions, no easy answers.
– Tim Hatt – head of research, GSMA Intelligence
The editorial views expressed in this article are solely those of the author and do not necessarily reflect the views of the GSMA, its Members or Associate Members.
[1] https://www.mobileworldlive.com/wp-content/uploads/2019/01/Jan-31-GSMA-AI-ethics-blog-Jan19-v2.docx-Chart.png