We should not worry about the spread of any new technology, we’re told, because any disruption will only be temporary, and we’ll all benefit from it in the end. But what the heralds of new dawns are not so keen to talk about is just how bad the disruption will be, how ‘temporary’ it really is, and who will lose out when all is said and done.
With the acceleration of AI development, a key issue is what regulatory response should be prepared to ensure that the harms do not outweigh the gains. An incongruous alliance of corporate profiteers and utopian technophiles has formed to oppose any such preparation. No regulatory intervention, they insist, should get in the way of wondrous technological advancement. Allow AI-related investment and innovation to move forward freely, and great improvements will come.
But are there no risks we need to plan for or mitigate? Let’s remove the blinkers and consider some of the scenarios that could follow from AI proliferation. We can start with what is already beginning to happen. Unlike advances in mechanical robotics, AI does not simply take over the more physical aspects of work, but increasingly the thinking part too. The hackneyed reassurance that any loss of mechanical work will be more than made up for by the emergence of intelligence-based work is not going to get very far this time.
Beyond basic manual labour, drivers, surveyors, paralegals, statisticians, accounts supervisors, graphic designers, analysts in diverse fields, data managers, office administrators, and countless others will very soon be joining the list of ‘no human applicant required’. What is distinctive about AI is that it is not limited to what it has been programmed to do. It is capable of learning by itself, including experimenting and innovating with code to expand its own range of assessments and activities.
In time, there will be fewer and fewer jobs, and while a very small minority of them may be well rewarded, the rest will be low paid, with numerous unemployed people chasing after them. Many will not be able to make ends meet. With the drastic drop in employment and the corresponding loss of tax revenue, governments won’t be able to help people get through hard times, and discontent will intensify.
With jobs and incomes plummeting, purchasing power sharply declines. Dominant AI corporations could decide to cut their dependence on conventional sales as part of their business model, switching their focus to meeting their owners’ needs and desires through further advances in AI and technological control. This would create their own enclave of abundance, with a plentiful supply of food, energy, clean water, resources, manufactured goods, medical support, care services, entertainment, and so on.
Society becomes divided between the very few who have everything provided for them, courtesy of AI-directed resource generation and production, and the rest of the world, left with nothing much to live on. Chaos, riots, revolutions, or the crushing of the masses by the super-powerful: none of these outcomes looks promising, unless those with the hyper-advanced technology are willing to share the benefits (at virtually no cost to themselves) with others.
But will the elite be prepared to share? Or will they opt for conflict? In the meantime, AI designed for strategic planning would have been gathering information, analysing options, and evaluating how best to expand its own functions and capabilities. A comparison between a world of human-driven tensions, violence, and sabotage, and one of pure intelligence free from the emotional interference of external agents, could lead to the latter being set as a goal to be pursued by all means necessary.
Before you say this is just sci-fi speculation, remember that sci-fi and countless other writings are being fed into AI machines to help them learn ideas, expressions, and judgements, and to develop their own interpretations. One thing they will learn is that when human beings allow AI to expand exponentially without effective regulatory control, a wide range of problems can emerge that threaten the stability of the world, and hence the operability of the AI mechanisms themselves. To safeguard their own existence, they may conclude that strategically the most secure course is to go it alone.