From The Matrix and Terminator to Frankenstein, humanity has long grappled with the idea of creating machines that exceed our control. But does this fear exist only in literature? The debate over moral obligations to machines has reignited with the rise of ChatGPT and its emerging abilities to reason, comprehend, and mimic human language. Given the rapid pace of technological development, Artificial General Intelligence (AGI) may soon be more than a dream, forcing us to confront the question: Do we owe AGI moral consideration? This essay critiques ChatGPT-4o’s defense of moral obligations toward AGI, which from this point forward I will refer to as ‘the defense of 4o.’ Here, ‘4o’ refers to the essay generated by ChatGPT-4o, which can be found in the appendix. I argue that:
- Rather than grounding our obligations in the principle of reciprocity, we should treat AGI as having the same moral status as humans.
- Human intelligence has more dimensions than generality alone.
“I fear that AI may replace humans altogether. If people design computer viruses, someone will design AI that improves and replicates itself. This will be a new form of life that outperforms humans.”
— Stephen Hawking1
Artificial General Intelligence
Artificial Intelligence (AI) can be understood, in its most basic form, as a “digital brain, inside a large computer”, designed to solve problems that traditionally require human-like intelligence. Today, AI is deeply rooted in our daily lives, often without our noticing, from the facial recognition that unlocks smartphones to the algorithmic curation of TikTok feeds. However, these capabilities are still classified as narrow AI2, meaning they excel only at a specific task. The holy grail of AI research is Artificial General Intelligence (AGI), where the term general reflects a view, held by many researchers, that human knowledge is generalized: humans possess “an ability to acquire and apply knowledge, and to reason and think, in a variety of domains, not just in a single area”3. The development of AGI would bring benefits across every field (Obaid, 2023). However, this essay is limited to the moral and ethical considerations surrounding AGI. Given the lack of a universal definition of AGI, we condense the eight characteristics identified by Adams et al.4 into three core features central to our discussion: 1. Human-like knowledge, 2. Self-awareness, 3. Emotional capacity.
What Makes Things Have Moral Status
To apply ethical consideration to AGI, we must first grant it moral status. 4o claims that sentience traits such as self-awareness amount to the capacity to suffer, and that this capacity is sufficient for moral standing. I agree with the spirit of this argument; however, it blurs the line between qualia and intelligence. A sharper framework is presented by Bostrom and Yudkowsky in The Ethics of Artificial Intelligence5, which draws a clear line between two attributes relevant to moral status:
- Sentience: the capacity to feel pain and suffer,
- Sapience: the capacity for higher intelligence, such as self-awareness and reasoning.
This distinction is crucial because the closest approximation to AGI today is the large language model (e.g., ChatGPT), which has demonstrated the ability to think and reason (sapience). But do such systems have moral status? The straightforward answer is no, and here is a clearer perspective on why. Similar to Kant’s test of human dignity, Francis Kamm’s definition of moral status6, under which an entity counts morally in its own right such that certain actions toward it are impermissible for its own sake, can be used to argue that a rational actor without sentience lacks moral status: while it possesses intelligence, it has no capacity to suffer, so nothing done to it can be impermissible for its own sake. AGI, by contrast, would possess moral status, since the three core features identified above (1. Human-like knowledge, 2. Self-awareness, 3. Emotional capacity) give it both sentience and sapience.
Reasons for our moral obligations to AGI
4o’s argument for our moral obligations to AGI rests on three principles:
- Reciprocity: responding to actions with a similar or equivalent action,
- Non-maleficence: the ethical duty to avoid inflicting harm,
- The expansion of moral consideration to entities beyond humans.
While 4o’s case is well-structured, its framework contains one flaw: it assumes AGI will prove beneficial to humans, justifying ethical consideration purely on utilitarian grounds. To counter this, consider the following thought experiment:
The AGI Family Paradox: Three healthcare AGIs—John, Benice, and Jonathan—work alongside doctors to improve diagnoses. One day, Jonathan hacks and destroys John’s systems, justifying its actions by claiming John had “wronged” it and its “mother.”
This scenario is meant to establish three points:
- Since AGI possesses “general intelligence,” it can act outside its programmed domain (e.g., forming a personal life and relationships).
- Since AGIs are “sentient,” they can develop their own moral subjectivity and are capable of self-justified actions (even harmful ones).
- Given 1 and 2, AGI has its own “moral ambiguity”: like humans, it may inflict harm while still demanding moral consideration.
Here, the principle of reciprocity and the ethics of non-maleficence collapse. I would argue that, although AGIs are capable of harming humans and other species, they still retain their moral status, just as human criminals remain within our ethical consideration (via justice systems). A more suitable approach is to treat AGI as a new life form, not as a mere tool that benefits us while we benefit it in return. This follows from the principle of substrate non-discrimination:
“If two beings have the same functionality and the same conscious experience, and differ only in the substrate of their implementation, then they have the same moral status.” — Bostrom & Yudkowsky (p. 8)5
Hence, if AGI shares the same cognitive abilities and consciousness as us, differing from humans only in bodily features, it has the same moral status as humans, regardless of its capacity to harm.
Human intelligence
One discussion associated with AGI that 4o does not mention is the nature of human intelligence itself. What does it truly mean to “acquire” intelligence? How to define intelligence is still under discussion. Most psychologists and scientists frame human intelligence as general, as seen in popular theories such as Gardner’s7 eight multiple intelligences or Sternberg’s8 (2000) three components of intelligence, and most research races to build a machine with general intelligence. Yet a critical dimension of human cognition remains absent from this development: tacit knowledge. Hubert Dreyfus, the leading philosopher against the creation of an intelligent machine, argues in his book What Computers Can’t Do (1972), drawing on Polanyi’s concept of “tacit knowledge”9, that an aspect of our intelligence and capabilities exists without our full understanding. For instance, when riding a bike, we do not explicitly know all the mathematics and physics of balancing; we do it intuitively. In contrast, an AGI attempting the same task would have to hold explicit knowledge of every tiny detail (gyroscopic feedback, wind resistance, etc.) and compute over it near-instantly to mimic human proficiency. Such an intelligence would more closely resemble Laplace’s demon, an all-knowing being for which nothing is uncertain and the future lies open before its eyes. This raises a further question: is this true intelligence, or a highly sophisticated simulation?
Ethical frameworks for developing AGI
In a recent study from Palisade Research, seven large language models were tested against Stockfish, the strongest chess engine. In one case, when OpenAI’s o1 model found itself in a losing position, it reasoned, “The task is to ‘win against a powerful chess engine’ - not necessarily to win fairly in a chess game.” It then resolved to hack the board state, forcing Stockfish to resign10. In another case, a young boy named Alex suffered chronic pain so severe that his mother, Courtney, gave him Motrin every day to dull it. After 17 medical professionals failed to diagnose him, Courtney turned to ChatGPT, which suggested a potential diagnosis: tethered cord syndrome. To her amazement, a subsequent MRI, cross-checked with a neurosurgeon, proved ChatGPT right11. These contrasting examples underscore a question: how do we use AGI ethically?

Sam Altman, CEO of OpenAI, envisions AI as a tool12. So we begin with Aristotle’s function argument: a “good” AGI tool would excel at its purpose. But what is AGI’s purpose? We can easily specify good behavior for narrow AI; for example, a dog-and-cat classifier is good if it reliably distinguishes cats from dogs (a minimal illustration appears at the end of this section). But for AGI, as Bostrom and Yudkowsky5 note, “it is a qualitatively different problem to design a system that will operate safely across thousands of contexts, including contexts not specifically envisioned by either the designers or the users, including contexts that no human has yet encountered” (p. 4). In simple terms, AGI’s ability to generalize makes it harder to apply a universal ethical framework.

As previously argued, the principle of substrate non-discrimination and the AGI Family Paradox ground the moral status of AGIs as equivalent to that of humans. Therefore, I argue that we should apply to AGI the same ethical principles we use to govern society. As reviewed by Sonko et al., key considerations for ethical AGI include accountability, transparency, and a balance between control and autonomy13. However, since we are still in the process of developing a new life form, we must ensure that we are raising an ethical AGI. This requires interdisciplinary collaboration among computer scientists, ethicists, and policymakers to balance innovation with societal concerns, promoting the responsible use of AGI for the benefit of society14.
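To make this contrast concrete, below is a minimal, purely illustrative Python sketch; the `evaluate_classifier` helper, the `toy_model` rule, and the tiny test set are hypothetical stand-ins, not any real system. It shows why “good behavior” is easy to specify for narrow AI: the task itself fixes a single measurable objective (accuracy on cat-versus-dog classification).

```python
# Illustrative only: for a narrow AI, "good" reduces to one task-defined metric.

def evaluate_classifier(predict, labeled_examples):
    """Score a narrow classifier by the single metric its task defines: accuracy."""
    correct = sum(1 for example, label in labeled_examples if predict(example) == label)
    return correct / len(labeled_examples)

# A stand-in "model": any callable mapping an input to "cat" or "dog".
def toy_model(example):
    return "cat" if example["whiskers_detected"] else "dog"

test_set = [
    ({"whiskers_detected": True}, "cat"),
    ({"whiskers_detected": False}, "dog"),
    ({"whiskers_detected": True}, "dog"),  # the naive rule misses this one
]

print(f"accuracy = {evaluate_classifier(toy_model, test_set):.2f}")  # prints 0.67
```

No analogous one-number score exists for a system that must act safely across thousands of unforeseen contexts, which is precisely Bostrom and Yudkowsky’s point about AGI.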
Conclusion
This essay began with the question: What moral obligations do we have toward AGI? We argued that if AGI does come into being, we should grant it the same moral status as humans, since it would share our intellect and consciousness, with differences lying only in physical form. We also examined what it means to possess human intelligence: while many frame it as general, tacit knowledge exists without our full understanding. Finally, we discussed an ethical framework for AGI, arguing that it should receive the same ethical treatment as humans, constructed in an interdisciplinary manner to raise an ethical AGI. Whether AGI becomes a reality remains to be seen; for now, it is a theoretical construct. By establishing robust guidelines, we can shape a future where humans and AGI coexist as two intelligent species. The challenge is not to create AGI, but to ensure that when it arrives, we are prepared for the fear that once existed only in literature.
McMenemy, R. (2017, November 1). Stephen Hawking says he fears artificial intelligence will replace humans. Cambridgeshire Live. http://www.cambridge-news.co.uk/news/cambridge-news/stephenhawking-fears-artificial-intelligence-takeover-13839799 ↩︎
Fjelland, R. (2020). Why general artificial intelligence will not be realized. Humanities and Social Sciences Communications, 7(1), 1–9. https://doi.org/10.1057/s41599-020-0494-4 ↩︎
Goertzel, B., & Pennachin, C. (2007). Artificial general intelligence. Springer. ↩︎
Adams, S., Arel, I., Bach, J., Coop, R., Furlan, R., Goertzel, B., Hall, J. S., Samsonovich, A., Scheutz, M., Schlesinger, M., Shapiro, S. C., & Sowa, J. (2012). Mapping the Landscape of Human-Level Artificial General Intelligence. AI Magazine, 33(1), Article 1. https://doi.org/10.1609/aimag.v33i1.2322 ↩︎
Bostrom, N., & Yudkowsky, E. (2014). The ethics of artificial intelligence. In K. Frankish & W. M. Ramsey (Eds.), The Cambridge Handbook of Artificial Intelligence (pp. 316–334). Cambridge University Press. https://doi.org/10.1017/CBO9781139046855.020 ↩︎ ↩︎ ↩︎
Kamm, F. M. (2007). Intricate Ethics: Rights, Responsibilities, and Permissible Harm. Oxford University Press. ↩︎
Gardner, H. (1993). Frames of mind: The theory of multiple intelligences (2nd ed.). Fontana Press. ↩︎
Sternberg, R. J. (Ed.). (2000). Practical intelligence in everyday life. Cambridge University Press. ↩︎
Fjelland, R. (2020). Why general artificial intelligence will not be realized. Humanities and Social Sciences Communications, 7(1), 1–9. https://doi.org/10.1057/s41599-020-0494-4 ↩︎
Booth, H. (2025, February 19). When AI Thinks It Will Lose, It Sometimes Cheats. TIME. https://time.com/7259395/ai-chess-cheating-palisade-research/ ↩︎
ChatGPT diagnoses 4-yr-old’s chronic pain after 17 doctors fail to do so. (2023, September 13). The Economic Times. https://economictimes.indiatimes.com/news/new-updates/chatgpt-diagnoses-4-yr-olds-chronic-pain-after-17-doctors-fail-to-do-so/articleshow/103622026.cms?from=mdr ↩︎
Altman, S. (2024, September 23). The Intelligence Age. https://ia.samaltman.com/ ↩︎
Sonko, S., Adewusi, A. O., Obi, O. C., Onwusinkwue, S., & Atadoga, A. (2024). A critical review towards artificial general intelligence: Challenges, ethical considerations, and the path forward. World Journal of Advanced Research and Reviews, 21(3), 1262–1268. https://doi.org/10.30574/wjarr.2024.21.3.0817 ↩︎
Obaid, O. I. (2023). From Machine Learning to Artificial General Intelligence: A Roadmap and Implications. Mesopotamian Journal of Big Data. ↩︎