In fiction, too, AI has been a threat. The AI project at the University of Oxford has been the subject of a number of science fiction novels, including the works of Isaac Asimov, Arthur C. Clarke, and Philip K. Dick. A more recent work by John Riddell, in which he describes an all-powerful AI, seems to have garnered the most attention, but it too is a work of science fiction.

The AI project at the University of Oxford is the subject of a new nonfiction book by historian Nick Bostrom, The Future of Intelligence, which has sold more than a thousand copies in its first three months. 

Bostrom argues that AI is a genuine threat to humanity, and that it might be so profound as to make human history a bygone era. The book is a polemic against the “technocratic” approach to AI, a response to the increasing power of AI as a threat.

The central theme of The Future of Intelligence is the prospect that a “technocratic” approach to AI would lead to a catastrophe, and that humanity would have to surrender to the AI that would emerge from the project. 

Bostrom argues that this is a possibility that seems to have been foreseen by thinkers like Alan Turing or John von Neumann in their early years of theoretical work, but that has never been taken seriously because few people have understood what it would mean to surrender to the “demons,” and because the only people who have seriously considered such an existential threat to humanity are those who have been trying to engineer the AI project.

Bostrom is the author of a number of popular books on AI. But his book is a philosophical essay in which he draws upon his extensive research on AI, and a study of the history of AI. It is not an answer to the question of whether or not AI is likely to be a real threat to humanity, but it is a defense of the idea that if the project is handled correctly, it will be a good thing.

I asked a GPT-Neo model for its view on the literature on AI and machine learning. The text above is a verbatim copy of what it produced, fabrications and all.