There is a risk that one day artificial intelligence will become smarter than we are, and perhaps even able to manipulate humans into doing what it wants. This is not realistic in the near future, but the worry is widespread.
We often treat and talk about AI as if it were a person. If we stop doing that and recognize these systems for what they are, we might have a healthier relationship with technology.
Treating AI as human is in no way advisable. By AI we mean large language models (LLMs), such as ChatGPT and Google Bard, which are now used by millions of people every day.
People attribute human-like cognitive abilities to artificial intelligence. We must stop treating AI as humans, as conscious moral agents with interests, hopes, and desires. Many, however, will find this difficult or nearly impossible: LLMs were designed by humans to interact with us as if they were human, and we were shaped by biological evolution to interact with them in the same way.
The reason LLMs can so convincingly imitate human conversation stems from one of the profound insights of the computing pioneer Alan Turing, who realized that a computer does not need to understand an algorithm in order to run it. So although ChatGPT can generate paragraphs full of emotional language, it does not understand a single word of the sentences it creates.
LLM designers have succeeded in turning the problem of semantics (the arrangement of words to create meaning) into a problem of statistics, matching words based on the frequency of their previous use.
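The point about turning semantics into statistics can be illustrated with a toy bigram model: it "predicts" the next word purely from how often word pairs occur in its training text, with no notion of meaning. This is a deliberate simplification (real LLMs use neural networks trained on vast corpora, and the tiny corpus here is invented for illustration), but the underlying principle of frequency-based prediction is the same.

```python
from collections import defaultdict, Counter

# Invented miniature "training corpus" for illustration only.
corpus = (
    "the cat sat on the mat . "
    "the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent follower of `word` in the corpus,
    or None if the word was never seen. No meaning involved --
    just counting."""
    counts = follows[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("sat"))  # -> "on"  ("sat on" occurs twice)
print(predict_next("on"))   # -> "the" ("on the" occurs twice)
```

The model produces fluent-looking continuations for the same reason an LLM does, at vastly smaller scale: it reproduces statistical regularities of its training text, not an understanding of what the words mean.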
We tend to anthropomorphize non-human species and even inanimate objects.
And anthropomorphizing can be useful. If we want to beat a computer at chess, for example, the best strategy is to treat it as a real opponent that wants to beat us. We can speak of a tree in the forest as wanting to grow toward the light. Neither the tree nor the chess computer actually has the will or the reasons we attribute to them; we simply find their behavior easier to explain by treating them as if they were agents.
Our evolutionary history has equipped us with mechanisms that lead us to assume intentionality and agency everywhere. In prehistoric times, these mechanisms helped our ancestors avoid predators and develop altruism, at their own expense, toward their closest relatives. The same mechanisms make us see faces in clouds and anthropomorphize inanimate objects. Mistaking a tree for a bear costs us little; making the opposite mistake can get us into serious trouble.
Evolutionary psychology shows that we tend to interpret anything that might be human as human.
Because of the confusion LLMs can cause, we have to realize that they are merely probabilistic automatons that have no intentions and do not care about humans. We need to be particularly vigilant about our language when describing human-like achievements of LLMs, and of AI in general.
Here are two examples.
- The first is a recent study reporting that ChatGPT gave more empathetic and higher-quality answers to patients' questions than doctors did. Using emotional words like "empathy" attributes to the AI an ability to genuinely think about and care for others, an ability it does not possess.
- The second: when GPT-4, the latest version of the technology behind ChatGPT, was released, it was described as having greater capabilities in creativity and reasoning. Yet such claims reflect only an increase in measured "performance", not genuine thought.
Many people have suggested that texts and opinion pieces written with the help of artificial intelligence should be watermarked, so that there is no doubt whether we are dealing with a human or a chatbot.
As in many other areas of life, AI regulation lags behind innovation. There are more problems than solutions, and the gap is likely to grow before it narrows. In the meantime, perhaps our best remedy against this innate compulsion is simply to stop treating AI as human.