Nearly half of CIOs are planning to deploy artificial intelligence, yet a meagre 4% have actually done so, according to Gartner's 2018 CIO Agenda research. That wide gap between intent and action points to the fear, uncertainty and doubt that CIOs and their colleagues harbour about machine learning, neural networks and other aspects of AI.
“Despite huge levels of interest in AI technologies, current implementations remain at quite low levels,” confirms Whit Andrews, research vice president at analyst house Gartner. He anticipates CIOs will begin piloting AI programs through a combination of buy, build and outsource efforts and counsels them to start small and aim for ‘soft’ outcomes, such as process improvements and customer satisfaction.
AI is uncharted territory and applications such as chatbots are largely unregulated. Some facets, such as neural networks, are not traceable, which goes against the grain for the engineering mindset of many CIOs. Neural networks self-organise in their quest to find patterns in unstructured data, introducing opacity into a world that is increasingly transparent.
The drawback of this approach to deep learning, says Nils Lenke, board member of the German Research Institute for Artificial Intelligence, is that neural networks can be impossible to reverse-engineer. "When an error occurs, it's hard to trace it back to the designer, the owner, or even the trainer of the system, who may have fed it erroneous examples". While CIOs want the business wins that AI promises, they also want accountability.
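To make the opacity point concrete, here is a minimal, hypothetical sketch (the weights and functions below are illustrative, not drawn from any system mentioned in this article). A tiny feed-forward network computes XOR correctly, yet its raw parameters read as bare numbers: nothing in them names a rule a designer, owner or trainer could be held to account for. Real deep networks multiply this effect by millions of weights.

```python
def step(x):
    """Threshold activation: the unit fires (1) when its weighted sum is positive."""
    return 1 if x > 0 else 0

# Hypothetical "learned" parameters: each hidden row is [w1, w2, bias].
# Nothing here labels a concept; the logic is smeared across the numbers.
HIDDEN = [[1.0, 1.0, -0.5],
          [1.0, 1.0, -1.5]]
OUTPUT = [1.0, -2.0, -0.5]   # [weight on h1, weight on h2, bias]

def predict(x1, x2):
    # Hidden layer: each unit mixes both inputs through its weights.
    h = [step(w1 * x1 + w2 * x2 + b) for w1, w2, b in HIDDEN]
    # Output layer combines the hidden activations in the same opaque way.
    return step(OUTPUT[0] * h[0] + OUTPUT[1] * h[1] + OUTPUT[2])

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, "->", predict(a, b))   # prints the XOR truth table
```

The network answers correctly on all four inputs, but tracing an error back from the output to a responsible decision, as Lenke notes, means unpicking interacting weights rather than reading a rule.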
Governments are beginning to tackle the complexities of policing AI and to address issues of traceability. The General Data Protection Regulation (GDPR), which comes into force in May 2018, will mandate that companies be able to explain how they reach algorithm-based decisions. Earlier this year the EU voted to legislate around non-traceable AI, including a proposal for an insurance system to cover shared liability among all parties.
While legislators and regulators catch up, businesses such as Wealth Wizards are not putting competitive advantage on hold. The online pension advice provider uses AI and plans to use chatbots, but complies with Financial Conduct Authority (FCA) regulation, says chief technology officer Peet Denny. "Basically, anything that is required of a human we apply to our AI tools. It's not designed for AI", he concedes, "but it's a start".
In the absence of AI regulation and laws, a plausible approach advocated by UK innovation charity, Nesta, is to hire employees to police the bots. In recent months tech giant Facebook has started to do exactly that, recruiting thousands of staff. “If social media giants deem it necessary to police their algorithms, it matters even more for high stakes algorithms such as driverless cars or medicine”, says Nesta’s CEO, Geoff Mulgan.
Gartner, too, advocates this as a sound approach and predicts that by 2020, 20 per cent of organisations will dedicate workers to monitoring neural networks. And once they have gained the confidence to monitor and deploy AI, the analyst group urges CIOs and their board colleagues to think of the ultimate return of AI in terms of augmentation of staff, not replacement.
"Leave behind notions of vast teams of infinitely duplicable 'smart agents' able to execute tasks just like humans," advises Andrews. "Engage with the idea that AI-powered decision support can enhance and elevate the work people do every day."