Imagine the Machine won't stop
The future can't be predicted but imagination is a powerful thing.
My mum is the person with whom I talk about AI the most. For me, it’s a way of life. For her, it’s a way of worrying.
As she finishes Stuart Russell’s Human Compatible, she sends a photo of one of the pages (she’s a philosophical tech thinker, not a user!) to our family chat. My nephews are usually the recipients of my mum’s reading recommendations. She knows she won’t be here to witness, and suffer or benefit from, all the dramatic changes that are coming, but they will. So they should be informed and prepared to participate, in a positive way.
AI aside, my mum is doing the same thing she has always done for me: opening horizons. Think wider, think deeper, think with questions and humility. “You see the steering wheel I’m holding?” she said on one of our many car trips home from school. “What we see might not be what it is. These atoms that we see as a wheel might have other shapes that we cannot perceive.”
But I digress. I was talking about the page my mum shared, which referenced E. M. Forster’s The Machine Stops, a dystopian tale published in 1909 that warns against the dangers of over-reliance on technology, raising themes of isolation, loss of human connection, and the consequences of a society detached from nature.
Pretty visionary stuff. Which got me thinking: if last century’s science fiction was able to anticipate so accurately what we’re living through now, could we use the science fiction of this century to imagine the future of AGI and prepare for its risks?
People often use the excuse that it’s impossible to predict what AI will bring and how it will affect us all, just to avoid thinking about it. That much is true: no one can see the future. But we can imagine it.
Think of the Three Laws of Robotics, rules developed by science-fiction writer Isaac Asimov, who sought to create an ethical system for humans and robots:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey orders given to it by human beings except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
The laws first appeared in his short story “Runaround” (1942), decades before such machines could exist, and they have since been discussed in AI textbooks.
This shows that science fiction in particular, and imagination in general, can be a powerful tool to drive progress and to protect us from its risks.
So let me leave you with a summary of seven other books that can inspire us to think ahead on the topics of Human-AI relationships, power, equality, surveillance, autonomy, and identity.
These might help us all imagine that the machine won’t stop, and also imagine ways of making it good for us. Don’t worry, Mae. We’ve got this.
Enjoy your reading and thinking.
"Children of Time" by Adrian Tchaikovsky explores the consequences of human interference in evolution and the complexities of coexistence between humans and other sentient species, raising questions about power, the ethics of scientific experimentation, and our relationship with nature.
"Speak" by Louisa Hall explores themes such as the nature of consciousness, the power of language and communication, and how technology shapes human relationships. The novel raises thought-provoking questions about the potential risks and rewards of creating sentient artificial intelligence and the enduring need for connection and understanding in an ever-changing world.
"Walkaway" by Cory Doctorow depicts a future society split between competing systems of governance, raising questions about work, leisure, and the distribution of resources, and probing inequality, surveillance, and the potential of decentralized societies to redistribute power.
"Autonomous" by Annalee Newitz critiques the moral dilemmas of intellectual property rights in pharmaceuticals amidst the convergence of human and machine identities and raises questions about autonomy, identity, and the commodification of life and labour.
"Accelerando" by Charles Stross depicts a reality of rapidly accelerating technological change and post-human intelligence and explores themes such as transhumanism, virtual reality, and the blurring of boundaries between humans and machines.
"Klara and the Sun" by Kazuo Ishiguro explores themes such as the ethical implications of intelligence amplification technology, equality, the nature of humanity, and the quest for connection and meaning in an increasingly technologically advanced world.
"The Lifecycle of Software Objects" by Ted Chiang explores the ethical, emotional, and practical challenges of raising and interacting with intelligent digital entities. The book raises questions about the nature of consciousness, the bonds between humans and non-human entities, and the responsibilities of creators toward their creations.