(2014) Nick Bostrom, Oxford University Press, £18.99, hrdbk, xvi + 328pp, ISBN 978-0-199-67811-2
Will super-intelligence arise, and will it take over humanity? These are the questions that physicist, neuroscientist and mathematician turned futurologist Nick Bostrom sets out to answer. To cut to the chase, the answers respectively seem to be 'very likely' and 'possible'.
These are the sort of questions, and this is the sort of book, that will appeal to this website's principal target readership: scientists into science fiction. The history of science fiction has seen a number of its core tropes move from being science 'fiction' to becoming science 'fact', with notable examples including: nuclear power (and war), space travel and exploration (the specific term 'space travel' was first used in SF in 1929), antimatter, genetically modified animals (the term 'genetic engineering' was first used in SF back in 1951) and so forth; the examples are legion.
'Artificial Intelligence' itself, as a specific term, was first used in an SFnal context in 1973, according to Brave New Words (coincidentally also from Oxford U. Press), but separately I note that in real life the first conference on artificial intelligence took place in 1955. In this specific sense SF seems to be behind the times, though in the loose sense SF was arguably ahead of the game: for instance, the trope of 'the robot' as a sentient mechanism was first used in R.U.R. (Rossum's Universal Robots) back in 1920. Indeed, a decade or so ago I was asked to be the biologist on a panel at an SF convention on artificial intelligence, and in researching it I checked the number of neurons and synapses in a few species against the development of computational processing power over the latter half of the 20th century. I found that the near-linear log relationship of the past, if it continues to hold in the comparatively near future, would see non-specialist artificial computational power (the sort used in medium-sized office servers) become comparable to that of the human brain somewhere around the end of the 21st century's first quarter or first third: that is not that far off (if matters continue as they have). Of course, computational power is one thing – we already have supercomputer number-crunchers simulating the global climate and weather – and true artificial intelligence another. Yet, as Bostrom himself points out, in supercomputer terms we already had devices with the same speed of processing power (FLOPS) as the brain around the turn of the millennium, even if they do not have the brain's network architecture. Still, we do seem to be firmly heading in the direction of being able to create artificial intelligence.
Such is the SFnality of Bostrom's discussion that one wonders whether he will slip into SF mode, but he successfully stays on the side of scholarly science. This is not to say that he ignores the SF side of the question: Asimov gets a mention, and it is pointed out that there are problems with his three laws (which Bostrom notes may have been intentional so as to generate plot lines for his stories). And then we come to the issue of artificial intelligence designing artificial intelligence, and that takes us bang into Vernor Vinge's 'singularity' territory. Indeed Bostrom cites Vinge, though it is his non-fiction writing, and not his SF novels, that relates to the topic.
Indeed, Nick Bostrom could have turned to SF to illustrate a number of aspects of the topic. For instance, he could have cited Sawyer's 'www' trilogy when discussing whether an ever-expanding network such as the internet could gain sentience. The point I am making is that, though Bostrom's book very much explores a genre trope, his feet are firmly on the ground: something for which even the most fannish of science fiction enthusiasts who happen to be scientists will be grateful. After all, we are acutely aware of the difference between fact and fiction even if we revel in the genre.
This issue of an emergent artificial intelligence is an important one. If it is likely, what should we do? Are there risks? Almost certainly. And what of ethics? This question is especially urgent in that AIs are likely to be commercial, and so there could be a race to the bottom – the cheapest options being preferred in the marketplace. So we do need to consider ethics and governance matters. Given all this, it would arguably be prudent to start thinking about such frameworks now, and to that end Nick Bostrom's book is more than a useful contribution.
The book is well referenced and there are chapter notes. Do seek out the cheaper paperback edition. And if you are a hard SF author then consider this a 'must have' title.