As AI systems become more widespread and integrated into human life, their actions increasingly intersect with legal matters. The fundamental question beneath these diverse issues is whether the legal system should grant legal personhood to AI. Having the legal status of a person means possessing legal rights and obligations within a legal framework. Although there would be clear benefits in granting AI legal rights, I believe it is unnecessary to treat AI as a legal person, given the clear limitations and side effects that could deepen the blind spots and dilemmas already present in the legal system.

Granting legal personhood to AI carries the danger of allowing AI to overpower humans. The Korean Civil Act stipulates that all persons shall be subject to rights and duties throughout their lives, implying that a human's legal rights take effect at birth and lapse at death. While birth and death are apparent states for humans, setting the start and end of AI's legal rights is a challenging task. For instance, humans are considered dead when their hearts and brains stop functioning, but it is unclear how death should be defined for an AI that can replicate and spawn variations of itself even after it stops working. The potentially eternal duration of AI's legal rights could raise the ethical problem of AI occupying a position superior to humans, violating the guiding principle that AI should respectfully support humans. Unless such definitions and legal requisites are thoroughly examined, hastily granting legal personhood could hinder meaningful discourse on this issue.

A major motivation behind arguments for granting legal personhood to AI is to enable AI to possess property and thus be held liable for related damages. Who should bear the loss arising from actions involving AI? The answer is admittedly unclear: culpability is difficult to assign to a single entity when AI technology involves many actors behind the scenes, including the AI's developer and its manager. Proponents expect that once AI obtains legal rights, the legal implications of AI decision-making will become more explicit and the assignment of responsibility for illegal actions clearer.

However, the legal personhood of AI could be abused as a means for developers or managers to evade accountability. When biased AI systems have discriminated against a particular race or gender, as with Amazon's AI recruiting algorithm that was found in 2018 to be biased against women, developers and companies hesitated to accept responsibility, reluctant to acknowledge their failure to address the human bias embedded in the datasets used for machine learning. Considering that many actors are involved in producing AI, it is more important to hold them responsible, thereby enhancing transparency and fairness in the AI development process, than to attribute responsibility for every problem to the AI itself.

In conclusion, the question of whether AI should be endowed with legal personhood raises complex considerations at the intersection of law and ethics. AI should not replace humans, and laws and ethics should be constructed to ensure that AI serves the public good. Granting legal personhood to AI is not the best legal decision for promoting the responsible use of AI technology or for making AI respect and protect human rights. Instead, priority should go to revising existing laws to better suit AI-related legal problems and to fostering cooperation among the people who play important parts in AI's development.