Sophia, an advanced humanoid robot created by Hong Kong-based Hanson Robotics, uses artificial intelligence, facial recognition, and strikingly human-like expressions. On October 25, Saudi Arabia granted citizenship to Sophia, making “her”1 the first robot in the world to achieve that distinction.2 Though many have speculated about the motives behind this ostentatious move by the Kingdom, conveniently timed to coincide with the Future Investment Initiative in Riyadh,3 the development does more than raise doubts as to the King’s sincerity—it highlights, yet again, the cavernous gap between today’s technology and the legal and regulatory regimes developed thus far.
The direct practical implications of the King’s decision will likely be underwhelming. Saudi Arabia is regularly criticized by Human Rights Watch,4 Amnesty International,5 the U.S. State Department,6 Freedom House,7 and others for its appalling lack of civil and political rights for its human citizens. In a nation where the Basic Law declares the Qur’an and its traditional interpretations to be the national constitution,8 and where women were granted the right to drive only this year,9 the likelihood that Sophia will actually enjoy any newfound freedoms as an official citizen is undoubtedly slim10—especially if Saudi Arabia has recognized Sophia as female. Whether the Kingdom will issue Sophia a passport, allow her to own property, or permit her to participate in the next round of municipal council elections remains to be seen. In any case, this unprecedented grant of citizenship to a robot—more likely a publicity move as Saudi Arabia tries to edge its way into the future of robotics and AI—comes as the world is still struggling to determine how to legislate the unique issues these technologies raise.
The human fascination with creating an autonomous humanoid agent long predates any realistic ability to do so, and has long been the stuff of science fiction and horror stories.11 Today, however, robots are no longer fairy tales. Robots are being used to vacuum living rooms,12 to entertain our children,13 and as therapeutic tools for autistic children and Alzheimer’s patients.14 And, as evidenced by Sophia’s advanced intelligence, the technology is only continuing to progress. As machine learning and automated decision making advance, the law must quickly adapt to address the situations we are soon to encounter.
The field of “robolaw” has flourished in academia in recent years, with scholars positing theories and analyzing concepts of liability, personhood, and privacy.15 But in spite of increasing public awareness, and growing concerns from both the legal and technology communities, the United States has yet to seriously broach the topics of AI regulation and robotics-facing legislation. The European Union (EU), on the other hand, is leading the charge towards developing a legal regime that contemplates the kind of legal questions that we are bound to face as robots become increasingly autonomous, intelligent, and, one day, sentient.
In February of this year, the European Parliament (EP) passed a resolution with recommendations to the European Commission on civil law rules on robotics.16 Per the resolution, future legislation should address issues including liability (i.e., what happens when a robot commits a tort), intellectual property (both the importance of protecting the IP of robot-creators and what happens when a robot invents something patentable), and the ethical aspects of living in a world with autonomous machines capable of independent thought.17 The resolution goes so far as to propose a European Agency for Robotics and Artificial Intelligence to promote cooperation between Member States in the field and to assist in developing the kind of regulatory regime that the EP has recognized will be necessary.18 Having already gotten the ball rolling on this crucial issue, the EU is well ahead of the United States in this regard.
The few attempts at progress in the United States have been insufficient. In June 2011, for example, with tech companies and the automobile industry vying for influence, Nevada became the first state to promulgate a law regarding driverless robotic cars.19 After an unexpected degree of impact on automakers, however, Nevada was quickly forced to repeal and rewrite this novel legislation.20 This episode illustrates both an inability so far to act as needed and why the regulation desperately needed in this field is not properly left to the states: the United States cannot afford a patchwork of differing regimes, shaped by varying degrees of technical understanding and by special interest groups leading the charge. Congress must follow the example of the European Parliament and heed the warnings of industry leaders like Elon Musk,21 who describes AI as a “fundamental existential risk for human civilization.”22