Non-Fiction Reviews


The AI Delusion

(2018) Gary Smith, Oxford University Press, £20 / US$27.95, hrdbk, 249pp, ISBN 978-0-198-82430-5

 

This book's conciseness belies an importance greater than its title suggests.  The 'AI' here, of course, stands for 'Artificial Intelligence', which – in this second decade of the 21st century – is something of a hot topic that even makes the national news.

Artificial Intelligence is a rapidly developing field in both research and, now, development, with commercial applications being implemented that range from face recognition and driverless cars to social media 'recommendations' and medical diagnosis.  Embryonic AI is here, Artificial General Intelligence (AGI) is being striven for by researchers, and AI-related products are already – as Gary Smith demonstrates – beguiling potential specialist consumers, as well as actually misleading many of them.  It seems as if some of us are suffering from a form of cognitive dissonance: to be clear, the problem is with us, not AI!

This goes beyond the 'garbage in; garbage out' that we (those of us whose college days straddled the end of the 1970s and the beginning of the '80s) had drummed into us in the pre-Microsoft, pre-home-computer, pre-world-wide-web days: a saying which, strangely, does not crop up in this remarkable read.  The problem is not necessarily with the data (it could be garbage; it also might not be) but with what we ask the 'AI' to do with it.  Quite simply, and in excessive summary, Gary Smith entreats us not to blindly accept the supposed efficacy of AI conclusions presented to us by those selling us the technology's applications.  Without knowing exactly how the AI system we might use works (what data it manages, and how), we simply have to take AIs' outputs on faith, and that – as the author reveals through many, many examples – is truly perilous.

The author covers many topics.  He certainly has no love of 'data miners': those who take a lot of data and determine (supposedly) meaningful correlations.  That way lies potentially accepting that the stock market somehow relates to the weather in some distant city: such correlations do exist, but they are merely coincidental.  Nor does he spare those whose thinking underpinning an AI's construction is sloppy.  He even takes examples from pre-AI days.  One that struck me was of researchers trying to decide which part of a military aircraft should be given extra armour.  They looked at the planes returning from missions and found that these all had bullet holes in the wings.  Clearly, the suggestion was, these needed to be protected.  Actually, the sample of aircraft they were looking at was the survivors of missions, not those lost through being shot down.  What they should have been considering was protecting the cockpit and fuel tanks.

AIs work the way they are constructed, and they do this blindly.  If we humans cherry-pick the data (rose-tinting the world in our desired image) or cherry-pick the AI (the correlating process) to give a looked-for result, and hence lend credence to a particular belief (such as that taxing people at 50% instead of 40% will cause them to not work so hard, or even to leave the country), then we may be happy (our beliefs are validated by the AI) but could have arrived at a false conclusion (is this really what is happening?).

Strangely, Gary Smith does not spend much time on the psychology underpinning all this: such as groupthink (itself an SF term modified from Orwell's 'doublethink') born of an attempt to assuage the previously mentioned cognitive dissonance.  But he does provide numerous examples, as well as analogies, to illustrate his point.  We could fire a single shot at a barn door and then go up and draw a small target around the hole.

The other danger, in our ever more electronically interconnected world that increasingly hoovers up information about us, is that there is so much data that spurious correlations become increasingly possible.  Politicians, business leaders, even you and I can draw a small target on a barn door and then fire so many shots at it that one is bound to hit.

Yet just because an AI makes a connection, it does not mean that that connection is valid or even practical.  I recall, from many years ago, an anecdote of an AI tasked with providing an economics recommendation.  (This is my tale, not the author's, but it could so easily be part of his book.)  The AI was fed a whole mass of data about the economy.  Asked what could be done to boost the economy, the AI's conclusion was that we should hold Christmas every week!

Gary Smith's conclusions come as a warning.  We must not anthropomorphise AI: AIs hate that.

This book so deserves to be widely read.

Jonathan Cowie

P.S.:  And if the topic of decision makers' woolly thinking intrigues you then you might also consider checking out The Geek Manifesto: Why Science Matters.



[Updated: 19.1.15]