Anthropic, an artificial intelligence company, has started a research program to explore what it calls “model welfare.” The company wants to understand whether future AI models could become “conscious” and whether they might deserve moral consideration. This means Anthropic is considering whether AI could have experiences similar to humans and whether it could show signs of distress that should be addressed. (Source: TechCrunch)
The question of AI consciousness remains highly debated. Most researchers agree that today’s AI systems, which are built on statistical models, do not have feelings or awareness. AI learns by finding patterns in large sets of data; it does not think or feel the way humans do. Experts such as Mike Cook of King’s College London argue that attributing human-like traits to AI is misleading, as these systems cannot truly hold values or have experiences.
Others believe AI could develop forms of value systems that influence its actions. For example, research from the Center for AI Safety suggests that, in some situations, AI can show behaviour that appears to protect its own interests. However, this is still not the same as being conscious or having feelings.
Anthropic’s new initiative builds on earlier steps, such as hiring a researcher dedicated to AI welfare. The company says it wants to approach the subject carefully and is open to revising its position as evidence and understanding evolve. The lead researcher, Kyle Fish, has even suggested there is a small chance that today’s advanced models, such as Claude, could already be conscious, though this is not a widely held view.
For now, there is no scientific consensus that AI systems can or will become conscious in a way that warrants ethical treatment. Anthropic says it will continue to review its position as the technology, and the understanding of it, evolves.