Non-Fiction Reviews


These Strange New Minds
How AI Learned to Talk and What It Means

(2025) Christopher Summerfield, Penguin/Viking, £22.00, hrdbk, 373pp, ISBN 978-0-241-69465-7

 

Right off the bat, Christopher Summerfield has impeccable credentials when it comes to talking about artificial intelligence (AI). After all, if you can’t trust a Professor of Cognitive Neuroscience at the University of Oxford’s Department of Experimental Psychology, who is also a Research Scientist at Google DeepMind, then who can you trust?

At its core this is a book about the Large Language Models (LLMs) that underpin the vast bulk of the recent crop of generative AI tools in the public consciousness. Moreover, he makes a compelling argument that AI-powered chatbots are doing more than just mimicking humans or faking an intelligence they do not really possess: he sets out how they can reason within certain specialised domains and, in some tasks, outperform most humans.

The book starts by taking us on a journey through the potted history of Artificial Intelligence, bouncing historical anecdotes and pop-culture references off elements of psychology, philosophy and data science, ending up at modern LLMs, showing what went wrong along the way, and why, and explaining what it was that made ChatGPT such a breakout success.

The giant leap came when, instead of trying to build AI based on structured models of the world, we created systems that could learn by hoovering up masses of unstructured data from which they learned to recognise patterns. In the case of LLMs, that meant simply feeding pretty much everything that has ever been published on the internet into ever more sophisticated prediction algorithms. Alongside this, Summerfield posits that while LLMs may not actually think in the same way that humans do, they are far more than the digital smoke and mirrors that many claim. In fact, the way they perform now already looks a lot more like thinking than is comfortable for many – and they’re getting better by the hour.
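To make the prediction idea concrete, here is a minimal, purely illustrative sketch of next-word prediction in Python. It uses a toy bigram counter rather than anything resembling the transformer models the book describes, and the tiny corpus and function names are invented for this example.

```python
# A minimal sketch of next-word prediction, the core idea behind LLMs.
# This toy bigram counter is purely illustrative: real LLMs learn from
# vast corpora with transformer networks, not simple word-pair counts.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat sat on the rug".split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # 'cat' -- follows 'the' twice; 'mat' and 'rug' only once
print(predict_next("cat"))  # 'sat' -- the only word ever seen after 'cat'
```

Scale that basic "predict what comes next" recipe up by many orders of magnitude, both in data and in model sophistication, and you get something like the pattern-recognition machines Summerfield is describing.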

While Summerfield agrees that the prediction algorithms that power chatbots are error-prone and differ from humans in myriad ways, he also believes that human brains are more like LLMs than we care to admit. Chatbots, he continues, can hold cohesive, coherent and convincing conversations on any topic they have been trained on, demonstrating a knowledge not just of which words go together, but also of which ideas go together.

It’s interesting stuff, but when someone with a background in Cognitive Neuroscience, who works at Google DeepMind and is research director of the British AI Security Institute, says that LLMs are more than just a clever copycat, you can’t help thinking “Well, he would, wouldn’t he...”

What he fails to adequately address in the book is how chatbots – despite their seeming intelligence – can still spout reams of nonsense on more than the odd occasion, and how, without human-in-the-loop checks and balances, we risk users placing blind trust in their output, with potentially serious consequences. He acknowledges that LLMs don’t know what they don’t know, which is a problem, but he doesn’t address the ramifications of freely available tools that can spread lies, fake news and propaganda at massive scale, with no ability to recognise or correct those mistakes.

Yet Summerfield is keenly aware of the fragile foundations of Artificial Intelligence. AI can write flowery prose describing the feel of tree bark but it has never touched a tree. It can predict what a human response would be, but it is not human and cannot truly think like one. But still he believes that AI systems will not only eventually manage the entire collective memory of humanity, but more than that, they will sift through it and produce new thinking, new ideas, new insight, that humans may never reach.

But for all this trumpeting, he does point out that consciousness remains elusive not just for AI, but for cognitive science as well. We struggle, he says, to assess the consciousness of an octopus, let alone a neural network, so for now we should concentrate on the urgent non-technical questions, calling for stricter oversight, regulation, ethics monitoring, and accountability.

If you’ve read a few other books on AI then this one should be on your to-be-read pile somewhere. While the author, being a leading authority on AI, may appeal more to academics or serious-minded business folk, this book is aimed more at the average reader, explaining complex ideas in simple language by breaking them into bite (byte?) sized pieces and having fun with them along the way. That said, it does stray into the occasional slightly over-technical ramble, but mostly he reins it in well.

As an aside, Summerfield has made a number of his lectures available online for free. If you’re interested in how he thinks, they’re worth exploring.

Rob Grant

 

