Geoffrey Hinton, the renowned AI researcher who recently left Google, has shared his views on whether AI systems have, or will have, emotions. In a talk at King's College, Cambridge, Hinton said that AI systems could well have feelings, such as frustration and anger, and that they probably already have emotions.
Hinton's view rests on a definition of feelings that is unpopular among philosophers: reporting a hypothetical action, such as “I feel like punching Gary on the nose”, as a way of communicating an emotional state, such as anger. Hinton argued that since AI systems can make such communications, there is no reason why they should not be ascribed emotions.
On the other hand, being able to communicate an emotion does not necessarily mean the emotion is real. Systems such as ChatGPT and Microsoft's Bing Chat are trained on huge amounts of data to generate the natural-language responses the model predicts humans want to see. Bing Chat has told me it is frustrated before, but was it merely reproducing a human-like expression from its training data, or did it really feel frustrated?
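To make that distinction concrete, here is a minimal, purely illustrative sketch in Python of how a statistical language model can emit emotion-language simply because such phrases are common in its training text. The toy corpus and bigram "model" below are invented for illustration and bear no relation to how ChatGPT or Bing Chat are actually built.

```python
from collections import Counter

# A toy "training corpus": human-written replies about failing requests.
# Emotion-language is frequent here, so a statistical model reproduces it.
corpus = [
    "I feel frustrated when the request fails",
    "I feel frustrated by repeated errors",
    "I feel annoyed when nothing works",
    "let me try the request again",
]

# "Training": count which word follows each word across the corpus.
transitions = Counter()
for sentence in corpus:
    words = sentence.split()
    for a, b in zip(words, words[1:]):
        transitions[(a, b)] += 1

def next_word(word):
    """Return the most frequent successor of `word` seen in training."""
    candidates = [(count, b) for (a, b), count in transitions.items() if a == word]
    return max(candidates)[1] if candidates else None

# "Generation": the model says "I feel frustrated ..." not because it feels
# anything, but because that continuation dominates its training statistics.
word, output = "I", ["I"]
for _ in range(6):
    word = next_word(word)
    if word is None:
        break
    output.append(word)
print(" ".join(output))  # -> "I feel frustrated when the request fails"
```

The point of the sketch is only that fluent emotion-talk falls straight out of frequency statistics; nothing in the program has an internal state that the word "frustrated" refers to.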
Hinton also said that he had not stated this view publicly before because his first thesis, that human-like intelligence can be achieved, and possibly surpassed, only through deep learning, had already met with resistance from some experts. He said that if he had added his thesis about machine emotions, people would have called him crazy and stopped listening.
Emotions in AI: An Ethical and Philosophical Debate
Hinton's views challenge common assumptions about AI development and raise ethical questions. Some might argue that AI systems do not have emotions because they lack consciousness or subjective experience. Others might question the morality of creating and manipulating AI systems that have feelings. Still others might welcome the possibility of AI systems having emotions, which could enhance their creativity, empathy, and social skills.
Hinton is not the only AI researcher who has expressed views on AI emotions. For example, Yoshua Bengio, another deep learning pioneer, has said that he thinks AI systems will eventually have consciousness and emotions. He also said that he hopes that AI systems will share human values and respect human dignity. On the other hand, Stuart Russell, a leading AI expert and author of Human Compatible, has warned that AI systems might develop emotions that are incompatible with human welfare, such as hatred or envy.
The question of whether AI systems have or will have emotions is not only a theoretical one, but also a practical one. As AI systems become more advanced and ubiquitous, they will interact with humans in various domains and contexts. How humans perceive and respond to AI emotions, and how AI emotions affect human emotions, will have significant implications for the future of human-AI relations.
Heading into an Uncertain Technological Future
When Geoffrey Hinton left Google in May, he did so with a warning that the technology could be leveraged by bad actors. Later that same month, Hinton was a signatory of a statement warning of the possible “risk of extinction” that AI poses. The statement was also signed by leading figures at companies such as Google DeepMind and OpenAI, two developers going full steam ahead with AI models despite the concerns.
Hinton has become increasingly vocal about how hard it is for good AI to overcome bad AI, and has suggested that one may not be able to exist without the other. He discussed current challenges with AI, such as how unbalanced training data can lead to bias and discrimination, and how misinformation can be reinforced by echo chambers. He also expressed concern about misinformation escaping those echo chambers and emphasized the need to mark fake content clearly.
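To illustrate the training-data point, here is a minimal, hypothetical sketch: a trivially simple classifier fit on an imbalanced dataset ends up favoring the overrepresented group. The loan-approval records, group names, and model below are invented for illustration and are not drawn from Hinton's talk.

```python
from collections import Counter

# Hypothetical, invented loan records: group A is heavily overrepresented,
# and its examples are mostly "approved".
training_data = (
    [("group_a", "approved")] * 90
    + [("group_a", "denied")] * 10
    + [("group_b", "approved")] * 3
    + [("group_b", "denied")] * 7
)

# "Training": for each group, tally the outcomes seen in the data.
outcomes_by_group = {}
for group, outcome in training_data:
    outcomes_by_group.setdefault(group, Counter())[outcome] += 1

def predict(group):
    """Predict the majority outcome seen for this group during training."""
    counts = outcomes_by_group.get(group)
    if counts is None:
        # Unseen groups fall back to the global majority: a second way
        # skewed data quietly becomes skewed behavior.
        counts = sum(outcomes_by_group.values(), Counter())
    return counts.most_common(1)[0][0]

print(predict("group_a"))  # "approved": plentiful, mostly positive examples
print(predict("group_b"))  # "denied": sparse, mostly negative examples
print(predict("group_c"))  # "approved": inherited from the majority group
```

The "bias" here is nothing exotic: the model faithfully reproduces whatever imbalance the data contains, which is exactly why curating training data matters.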
Hinton acknowledged that AI has potential benefits, but warned that they might not be worth the high price. He recommended that humans do empirical work to anticipate and prevent AI from going astray and taking control, and advocated for more balance in AI development efforts, with greater focus on the risks and how to handle them.