I’ve been around long enough to remember the first ‘AI winter’ of the mid-to-late 1980s, a period characterised by disappointment in the ability of AI (expert systems, artificial neural networks and the like) to deliver on the initial hype. So why might the recent upsurge of interest in AI be any different?

The most distinctive characteristic of the current AI revival has undoubtedly been deep learning. Yet while novel artificial neural network architectures (convolutional, recurrent) have emerged, these are not that different from the neural networks of 20 years ago – so what has changed?

Two major things: increased compute power and the availability of very large data sets. Deep learning algorithms require data – and lots of it – from which to learn how best to make predictions, and massive labelled data sets of this kind have only become available over the past 20 years. The fact that language translation has moved forward in leaps and bounds during the last decade (after 50 years of limited progress using classical natural language processing) is primarily because large sets of documents, available in parallel languages, have become readily accessible. Peter Norvig and colleagues at Google attribute this progress to “the unreasonable effectiveness of data”.
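To make that concrete, here is a minimal sketch in Python (using the PyTorch library, with entirely synthetic data and an arbitrary network shape chosen purely for illustration) of what learning from labelled data involves: a small network is shown many input–label pairs and gradually adjusts its internal weights until its predictions match the labels.

import torch
from torch import nn

# Synthetic labelled data set (illustrative only): 1,000 examples,
# each with 20 numeric features and a binary label.
x = torch.randn(1000, 20)
y = (x.sum(dim=1) > 0).float().unsqueeze(1)

# A small feed-forward network; conceptually this would have been
# recognisable 20 years ago.
model = nn.Sequential(
    nn.Linear(20, 64), nn.ReLU(),
    nn.Linear(64, 1), nn.Sigmoid(),
)
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

# Learning means repeatedly comparing predictions against the labels
# and nudging the weights; with too few labelled examples this fails.
for _ in range(200):
    optimiser.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimiser.step()

The code itself is simple; what makes the approach effective today is the sheer volume of labelled examples available to feed it.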

In a similar way, image recognition has greatly benefited from large sets of labelled digital photographs. In particular, the effectiveness of facial recognition software, for good or ill, has been driven by the vast number of labelled human faces on Facebook, Google Photos, WeChat, Weibo and other social media platforms.

Compute power has also increased greatly over the past five to ten years – in particular, the kind of hardware (graphics and tensor processing units) that can carry out the millions of calculations required by deep learning algorithms.
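Continuing the illustrative sketch above, exploiting that hardware requires remarkably little change to the code – the heavy lifting is done by the silicon, not the program:

# Hypothetical continuation of the earlier sketch: move the model and data
# onto a GPU if one is available; the training loop itself is unchanged,
# only the hardware performing the arithmetic differs.
device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)
x, y = x.to(device), y.to(device)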

So what might go wrong?

Do we really know that ‘intelligent’ thinking is based on the manipulation of complex non-linear algebra? What about domains where it is difficult to find large data sets (e.g. the limited machine translation available for minority languages), or where labelling is inherently uncertain or subjective? While the emergence of Big Data has provided major opportunities, we also know that vast data lakes can easily become data ‘swamps’ and that many challenges remain in managing unstructured data. We have only begun to address the ethics of decision making, the sources of bias (social, ethnic, gender) within the data used for learning, the explainability of algorithmic black boxes, and many other issues.


As such, the latest ‘AI spring’ still has some way to go before it delivers on many of its potential benefits.

The chance to discuss these and other issues affecting the adoption of AI across a range of business sectors is one of the key motivations for me and my colleagues from the University of Strathclyde to be involved in this exciting outreach opportunity.

Prof. Crawford Revie is one of a number of panel members joining the discussion at The Herald AI Business Breakfast supported by Cathcart Associates, Incremental Group and University of Strathclyde.

To secure your place, visit here or contact Kirsty Loughlin on 0141 302 6016 or at Kirsty.loughlin@newsquest.co.uk.