“Artificial intelligence is not human and does not think”, writes Bjørnar Tessem from the University of Bergen.
A Google engineer claimed last year that the program he was communicating with was conscious. ChatGPT has since led to public speculation about malevolent computers becoming so intelligent that they replace humans.
The high quality of texts generated by ChatGPT fosters the perception that machines are intelligent and can think. Being human is deeply connected to language, and we use language as a tool for thinking. Based on this, many conclude that ChatGPT “thinks”, “understands”, and has “consciousness”.
Inga Strümke writes soberly about AI in her book, Machines That Think (original title: Maskiner som tenker), but the metaphor of thinking is central in the title, and the cover features an evil eye resembling HAL 9000 from the movie 2001: A Space Odyssey. Professor Jill Walker Rettberg writes on nrk.no that when ChatGPT generates texts that have no basis in fact, it is a compulsive liar. Does ChatGPT derive pleasure from promoting false claims?
It is unfortunate that serious researchers portray AI as human-like and attribute various human qualities, both good and bad, to these systems. Machines do not think; they compute. This clarification should be unnecessary, but the examples above show its necessity.
ChatGPT is a system composed of various components. The machine learning component responsible for generating language is, of course, central, but the system also relies on data storage, data communication, and the browser interface. Everything is built on computers that perform arithmetic, run logical tests on data values, and jump between basic instructions.
The processes are human-designed algorithms, precise instructions on how to compute. ChatGPT is trained with designed processes that sift through vast amounts of text, and after training, it is calculation that generates the text.
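As a toy illustration of that reduction, here is a hypothetical mini-interpreter (invented for this sketch, not how ChatGPT is actually built) whose entire repertoire is the three kinds of steps named above: arithmetic, a logical test on a data value, and a jump between instructions:

```python
# A toy "machine" reduced to the basic steps the text names:
# arithmetic, logical tests on data values, and jumps between
# instructions. This hypothetical interpreter computes, nothing more.
def run(program, x):
    pc = 0  # program counter: index of the next instruction
    while pc < len(program):
        op, *args = program[pc]
        if op == "add":            # arithmetic
            x += args[0]
        elif op == "jump_if_lt":   # logical test, then a jump
            if x < args[0]:
                pc = args[1]
                continue
        pc += 1
    return x

# Repeatedly add 3 until the value is at least 10:
program = [("add", 3), ("jump_if_lt", 10, 0)]
print(run(program, 0))  # prints 12
```

However elaborate the program, every step is one of these mechanical operations.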
We should refrain from using metaphors that equate artificial intelligence with human beings. There is no overarching plan in text generation when ChatGPT engages in conversation with you. Instead, it calculates the next word in real time based on the preceding text, one word at a time, relying on statistical probabilities rather than concrete facts.
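The word-by-word calculation can be sketched with a toy bigram model. The vocabulary and probabilities below are invented for illustration; real systems use neural networks over enormous vocabularies and typically sample from the distribution rather than always taking the most probable word, but the principle is the same: the next word is chosen by probability, conditioned on the words so far.

```python
# Hypothetical bigram table: for each word, candidate next words with
# made-up probabilities. Nothing here is a fact about language; it is
# only the shape of the calculation.
BIGRAMS = {
    "machines": [("do", 0.6), ("can", 0.4)],
    "do":       [("not", 0.9), ("compute", 0.1)],
    "not":      [("think", 0.7), ("compute", 0.3)],
    "think":    [("<end>", 1.0)],
}

def generate(start, max_words=10):
    """Greedy generation: always pick the most probable next word."""
    words = [start]
    while len(words) < max_words:
        options = BIGRAMS.get(words[-1])
        if not options:
            break
        nxt = max(options, key=lambda wp: wp[1])[0]
        if nxt == "<end>":
            break
        words.append(nxt)
    return " ".join(words)

print(generate("machines"))  # prints "machines do not think"
```

At no point does the procedure hold a plan for the whole sentence; the sentence only exists once the last word has been calculated.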
Moreover, it’s important to understand that each user’s interaction with the system exists in isolation, starting and ending with their use, with no link between different ChatGPT conversations.
ChatGPT is impressive, but it is fundamentally a computational model enabled by powerful computers and access to vast amounts of language data. It produces texts that may appear human-like, but they are generated in an entirely different way than what occurs in the human mind when speaking and writing. We plan word choice and sentence structure, and perhaps even multiple sentences at once. The argument ChatGPT presents does not exist anywhere in the system until the text is calculated.
The human brain also engages in information processing, but in our brains, the neuron is the central unit, and this unit exists physically at all times, unlike the abstract computational processes in a system like ChatGPT. Chemical and electrical processes in neurons and countless connections create learning, language, perception, and consciousness. These functions are to some extent located in specialised parts of the brain. The whole and its parts have been shaped by millions of years of evolution, and each neuron and the complex interaction between them have the organism’s survival as their goal.
In artificial intelligence systems, calculation processes are abstract and transient, and the goals are defined by the engineers who designed the algorithms.
Moore’s Law states that the number of transistors on a computer chip doubles roughly every two years; in other words, it grows exponentially. The idea has often been applied to other aspects of computer capacity. Now, there is talk of exponential growth in artificial intelligence. But there are physical limits to how small transistors can be; there are constraints on how much energy and resources we can use for computers, and there are indeed both theoretical and practical limitations on what can be computed.
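The arithmetic of doubling shows what “exponential” means here, using an arbitrary starting figure chosen purely for illustration:

```python
def transistors(initial, years, doubling_period=2):
    """Transistor count after `years`, assuming one doubling
    every `doubling_period` years (the Moore's Law pattern)."""
    return initial * 2 ** (years // doubling_period)

# Five doublings in a decade give a 32-fold increase:
print(transistors(1_000, 10))  # prints 32000
```

Sustained doubling runs up against the physical limits the text mentions rather than continuing indefinitely.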
The growth in the field of AI will gradually slow down. The focus will shift toward new and smarter uses of technology, but within limits. The metaphor that machines can think contributes to unnecessary anxiety. It creates an image of a creature that is like us but still very alien and, therefore, threatening.
AI is both useful and challenging, and there may be a need for regulations. However, these systems do not have malevolent intentions – or intentions at all. They are not human, and they do not think. They compute.
This op-ed was originally written in Norwegian and featured in Dagens Næringsliv.
Image: Adobe Stock