
ARE ROBOTS PEOPLE? GIVING ARTIFICIAL INTELLIGENCE PERSONHOOD



The debate over artificial intelligence and its usefulness within our society is heating up, so much so that Estonia, a European nation bordering Russia, is considering the legal status of artificial intelligence. Estonia's Ministry of Economic Affairs proposes that artificial intelligence could be given a legal status somewhere between property and personhood, potentially leading to its classification as an agent, which would allow issues of liability to be explored.[1]


The need to determine the legal status of artificial intelligence is rapidly approaching. On February 16, 2017, the European Parliament passed a resolution with recommendations to the European Commission on Civil Law Rules on Robotics. The recommendations introduce normative principles to govern artificial intelligence, robots, and the engineers who create them: Beneficence, Non-Maleficence, Autonomy, and Justice.


In governing and giving legal status to artificial intelligence, these normative principles could be expanded to ensure accountability and transparency when artificial intelligence or robots cause harm to people or property. Beneficence holds that robots should act in the best interests of humans; Non-Maleficence, that robots should not harm a human. Autonomy holds that humans should have the capacity to make informed, un-coerced decisions concerning their interactions with robots. Finally, Justice holds that the benefits of artificial intelligence should be fairly distributed, particularly where the affordability of homecare and healthcare is at issue.[2]


Far ahead of other legislators, the European Parliament is currently drafting regulations to govern artificial intelligence and proposes that artificial intelligence be considered an "electronic person."[3] Like corporate personhood, electronic personhood could confer a legal status somewhere between person and property, depending on the capabilities of the artificial intelligence. The need to create a legal status for artificial intelligence and robots is clear, given that issues surrounding civil liability are increasingly apparent.


Liability issues arise when artificial intelligence enters the equation, making questions of privacy, contract, and tort law more pressing. For instance, how does the law apportion responsibility for harmful acts caused by a robot? For artificial intelligence that enters a contract or is designated a third-party beneficiary? For breaches of fundamental rights by non-human actors? Fortunately, the U.S. may have existing legal frameworks that, with a bit of updating, can address these liability issues. Proposed liability models include various ownership concepts in copyright law[4] and models drawn from partnership and agency law, where liability exists between principal and agent based on the authorization to act.



THE REWIRE: 

 

The Rewire is where technology and law merge to deliver a glimpse into the world of tomorrow. It is the place to find how the most recent technological advances are shaping our ever-evolving regulatory landscape. Technology is continuously changing how we interact with the world, and The Rewire offers a unique perspective on the interplay between innovation, the legal system, and commerce. The Rewire keeps its readers entertained, and among the sharpest and most informed at any gathering.

FUTURE EVENTS:

The Future of Money & Technology Summit
December 4, 2017, San Francisco, CA

RE*WORK Deep Learning & AI Assistants Summit
January 25-26, 2018, San Francisco, CA

TechCrunch's Disrupt SF
September 5-7, 2018, San Francisco, CA