Artificial Intelligence (A.I.) has been in the headlines regularly in recent months. Many leading scientists and corporate officials in the high-tech industry have made statements regarding the dangers it poses. For example, Dr. Geoffrey Hinton, a computer scientist often called the godfather of A.I., left his position at Google, saying he regrets his role in creating artificial intelligence. An open letter signed by more than 30,000 people, many of them in the computer industry, has called for a pause on developing newer, more powerful A.I. Leaders of two companies that research A.I., OpenAI and DeepMind, have signed a statement declaring that “Mitigating the risk of extinction from A.I. should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” Clearly there is a fear of this new technology, even among those in industries that stand to make billions of dollars by developing A.I.
The statements being made by scientists and industry insiders, at their most extreme, warn of the slim possibility of a runaway technological disaster, like a science fiction movie in which the robots turn on human beings or otherwise disrupt society. However, there are many more likely dangers on the horizon.
A.I. is already being used to drive social media, determining what a user sees or doesn’t see. It is highly likely that politics in the coming elections will be shaped by messaging on social media, and A.I. gives the manipulators a more powerful tool with which to shape their messages. These could include racist, anti-science, anti-vaccine, and anti-immigrant messages, and other disinformation. Deepfakes, which are A.I.-fabricated videos, could convincingly show politicians and public figures saying and doing things that they never said or did.
A.I.-driven systems are able to perform many jobs that people currently do, especially repetitive processing and writing of information. According to a study by Goldman Sachs, A.I. could affect the equivalent of 300 million jobs worldwide in the coming years. The company OpenAI estimates that “80 percent of the U.S. work force could have at least 10 percent of its work tasks affected by Large Language Models (LLMs) and that 19 percent of workers might see at least 50 percent of their tasks impacted.” In the current Hollywood strikes, writers, actors, and other entertainment industry workers are demanding provisions that protect them from being replaced by A.I.
As A.I. is used in data processing and human resources, there is a danger that it will apply the same prejudices as the culture that created it. A.I. systems are developed using massive databases of information – images and writing taken from the internet. Studies have shown that A.I. can be racist, sexist, and prejudiced, just like the information that fed it.
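The mechanism is simple enough to show in miniature. The sketch below is a hypothetical toy, not a real A.I. system: it “trains” on a handful of made-up sentences with a built-in occupational skew, then predicts which pronoun goes with each job. The model has no prejudice of its own; it simply echoes the skew in its data, which is exactly how bias enters far larger systems trained on internet text.

```python
from collections import Counter, defaultdict

# A tiny, deliberately skewed "training corpus" (invented for illustration).
corpus = [
    "the doctor said he would help",
    "the doctor said he was busy",
    "the nurse said she would help",
    "the nurse said she was busy",
    "the engineer said he liked the work",
]

# Count which pronoun follows each occupation ("X said he/she ...").
assoc = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for i, w in enumerate(words[:-2]):
        if w in ("doctor", "nurse", "engineer"):
            assoc[w][words[i + 2]] += 1  # the pronoun two words later

def predict_pronoun(occupation):
    """Return the pronoun most associated with an occupation in the data."""
    return assoc[occupation].most_common(1)[0][0]

print(predict_pronoun("doctor"))  # prints "he"  - the data's skew, echoed back
print(predict_pronoun("nurse"))   # prints "she" - likewise
```

Scaled up from five sentences to billions of web pages, the same dynamic reproduces whatever patterns – including prejudices – the source material contains.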
The threats from A.I. raised by experts and industry insiders all rest on one major assumption. They assume that A.I. will be put to use by corporations and warring states, driven by competition to use the technology against each other. They also assume a capitalist economy, which uses any new technology to more efficiently manipulate people, whether for profit or to support politics that benefit the wealthy. The experts and insiders are right, in the context of this society. A.I. will be used to speed up work, lay off more workers, and manipulate people. The real danger is the system we are living in. It guarantees that A.I. will be put to as many uses as possible to maximize profits for the wealthy elite, and it may go spinning out of control in their hands. Perhaps the power of A.I. could be harnessed to reduce hard work, improve people’s lives, and benefit humanity if the poor and working class had a say in how it was used. But A.I. in the hands of the capitalists is a guaranteed dystopian nightmare.