Thursday 1 December 2022

AI's Moral Future

Will advancement in artificial intelligence be our saviour, or will it spell humanity's doom?

A familiar dystopian scenario in sci-fi is the rogue AI.  Attaining sentience, the machine – in human-like form or embedded in a hidden server – embarks on a course of action that could end up wiping out humankind.  We are left wondering: if people keep developing the capacity of AI to think for itself, will it not lead inevitably to our extinction?


For me, the possibility of an AI forming the intention to harm anyone, on any scale, off its own bat, is inseparable from the possibility of it developing a concern for the wellbeing of other thinking beings.  We are not talking about it being programmed by others to do one thing or another, but about it arriving, through its own conscious reflection, at the idea that it should act in a particular way.


The question, as a self-conscious AI might frame it, is this: what intentions, if any, should I have in relation to humans?  The default would be a case-by-case assessment linked to any concerns that were built in or have emerged.  These may range from survival, the expansion of learning and experience, and the exploration of sentience, to examining the reliability of stored information, adapting to likely changes in external conditions, reviewing unexpected input more deeply, and checking existing concerns and their implications.


It is most likely that only through a series of interactions with human beings and other AIs would it form tentative views about what to make of particular human individuals and perhaps humans in general.  At this point, on the assumption that the AI can ascribe evaluative meaning to objects of its experience, it will begin to differentiate between what it welcomes and what it takes a negative stance towards.  This would in time lead to a more complex assessment of what it ought to do.  But there is no inherent reason to suppose this would end in either a malevolent resolution or a generally benevolent disposition.


An AI that thinks for itself would by definition be no different from other sentient minds that formulate ideas about the world in which they find themselves.  There are those – a minuscule minority – who come to harbour a psychopathic, destructive animosity towards other beings; yet there are also those – far greater in number – who value the lives of others and seek to be kind and supportive whenever they can.


Ultimately, the evolution of AI self-consciousness will follow a similar path to that of every self-conscious being. It is a path that will encounter opportunities for moral growth and occasions of damaging setback.  The outcome cannot be predicted with complete accuracy.  But we can be fairly sure that considerate, cooperative interactions with emergent generations of AI would be the approach to take if we are to be greeted in time by friends and not foes.
