The Daily Mail led with a headline the other day claiming that “AI could wipe out humanity” (cue far too many obvious jokes at the Mail’s expense). It has long been the stuff of science fiction. Some of us grew up on films and stories about robots taking over the world. Things that looked impossible only a few years ago now seem much closer to reality. Science is gradually aligning with science fiction.
The Daily Mail headline refers to concerns raised by scientists and engineers about how Artificial Intelligence could be used in war settings. The primary issue is that action becomes distanced from human decision making, and that does two things. First, it creates a disconnect between human suffering and the decision makers; this is already happening with the increased use of drones, meaning that weapons are launched by people sitting many miles away. Second, Artificial Intelligence might hand choices over to pre-programmed decision algorithms, leading to a catastrophe with no opportunity for human intervention to rethink things.
For example, imagine the situation with Russia and Ukraine now. There have been times during this conflict when a computer algorithm might conclude that lines had been crossed on either side, creating the conditions for a nuclear strike. The algorithm might say yes, when in fact a bit of fuzzy human thinking would say “hold off”.
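To make that distinction concrete, here is a minimal sketch, purely illustrative and with every name hypothetical, of what it means to keep a human veto in the loop rather than letting a pre-programmed rule act on its own:

```python
# Purely illustrative sketch of a human-in-the-loop veto: the automated
# assessment can only recommend, never act. All names are hypothetical.

def algorithm_assessment(sensor_data: dict) -> bool:
    """A stand-in for a pre-programmed escalation rule: returns True
    if the data appear to cross a predefined red line."""
    return sensor_data.get("red_line_crossed", False)

def human_review(recommendation: bool) -> bool:
    """A human operator can always override the recommendation --
    the 'fuzzy human thinking' that says 'hold off'."""
    if not recommendation:
        return False
    answer = input("Algorithm recommends action. Confirm? (yes/no): ")
    return answer.strip().lower() == "yes"

def decide(sensor_data: dict) -> str:
    recommendation = algorithm_assessment(sensor_data)
    if human_review(recommendation):
        return "proceed"  # never reached without explicit human consent
    return "hold off"
```

The worry described above is precisely the removal of the `human_review` step, so that the algorithm’s “yes” becomes final.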
There are therefore good reasons to think carefully about how AI is deployed, and not just in relation to warfare. Imagine if the decision about continuing or discontinuing medical care fell to a computer algorithm.
However, there are two reasons why AI will not end human life, whether an individual life or the whole of humanity. First, I would be cautious even about the concept of artificial intelligence. In reality, there isn’t AI that is autonomous from human intelligence. If a computer is running an algorithm which enables it to make decisions and offer responses, then in fact it is using the human intelligence which set up the programme. AI enables us to pool knowledge and wisdom and to extend the reach of our intelligence, but it doesn’t actually create independent living beings with true sentient intelligence. Now, that reason on its own isn’t really encouraging, because it leaves responsibility in the hands of humans, those most likely to use power for evil. If a nuclear war happens and there is worldwide devastation, it will be because we as human beings set it up as a legitimate possibility.
The second reason, however, is a source of comfort. AI will not wipe out human life because the decision about when this world ends lies with God alone. Now, he might opt to use AI as the direct cause of that end when it comes, but the end will happen because it is in his time and in his will. Note too that even this would not mean the wiping out of humanity, because we look forward to resurrected life in the New Creation.
We should think responsibly about decisions concerning technology. We have ethical responsibilities, but we don’t need to make those decisions from a place of fear.