November 01, 2017

Artificial Intelligence and the Law: More Questions Than Answers?

By Kay Firth-Butterfield

Artificial intelligence (AI) will be everywhere. It will ensure our world runs smoothly and our every need is met. In its not very intelligent form, it is here already in our cars, smartphones, search engines, and translation and personal assistants; in our homes in the form of robot cleaners and lawnmowers; on the street helping with surveillance, traffic monitoring, and policing; and even in condoms and sex dolls—the list is extensive and growing. AI beats us at the game of Go, even creating moves that we humans have failed to notice in hundreds of years of play; and it wins at poker, a game that requires players to perfect the art of bluff.

Just around the corner are AI-enabled dolls poised to become our children’s real imaginary friends, as well as fully autonomous cars. It is worth noting that although John McCarthy first suggested autonomous cars as a scientific possibility in the 1960s, the technology has made the strides needed to realize them only in the last few years.

And yet Vint Cerf, the “father of the Internet,” describes AI not as intelligent but as an “artificial idiot.”1 The prize-winning Go-playing computer has no idea it is playing Go. However, AI is excellent at learning, and with enough data to train it and some, often basic, instruction, it can learn a new skill—for example, sorting through all the documents in a case and making decisions about discovery, or helping a doctor to diagnose cancer.
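
How does a machine learn such a skill? For the technically curious, a minimal sketch follows, assuming a toy bag-of-words classifier with invented documents and labels; commercial e-discovery systems are far more sophisticated, but the underlying principle of learning from labeled examples is the same.

```python
# Toy "predictive coding" for discovery: learn to flag documents as
# responsive from a handful of hand-labeled examples (all invented).
from collections import Counter
import math

def tokenize(text):
    return text.lower().split()

def train(labeled_docs):
    """Naive Bayes training: count word frequencies per class."""
    counts = {"responsive": Counter(), "not_responsive": Counter()}
    totals = Counter()
    for text, label in labeled_docs:
        counts[label].update(tokenize(text))
        totals[label] += 1
    return counts, totals

def classify(text, counts, totals):
    """Score each class by log-probability; the higher score wins."""
    vocab = set(counts["responsive"]) | set(counts["not_responsive"])
    scores = {}
    for label in counts:
        n = sum(counts[label].values())
        score = math.log(totals[label] / sum(totals.values()))
        for word in tokenize(text):
            # Laplace smoothing: unseen words must not zero the score.
            score += math.log((counts[label][word] + 1) / (n + len(vocab)))
        scores[label] = score
    return max(scores, key=scores.get)

# A lawyer labels a few documents; the model generalizes to the rest.
labeled = [
    ("merger pricing discussed at board meeting", "responsive"),
    ("confidential memo on acquisition terms", "responsive"),
    ("office holiday party schedule", "not_responsive"),
    ("cafeteria menu for next week", "not_responsive"),
]
counts, totals = train(labeled)
print(classify("draft terms for the proposed merger", counts, totals))
# prints "responsive"; with thousands of labeled examples, such a
# model can triage an entire document collection in minutes
```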

What Is Artificial Intelligence?

To understand AI, we have to realize that it is not one technology but a range of techniques that give the appearance of intelligence. AI is applied math and statistics at their very best. Techniques such as reinforcement learning, neural nets, deep learning, and more are driving the AI revolution, but they are not—and seem nowhere near—artificial general intelligence (AGI). AGI will be achieved when a computer can perform all the same intellectual activities as a human.
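
To make “applied math and statistics” concrete, the sketch below (an invented toy task) shows the core loop behind techniques like neural nets and deep learning: a model holds adjustable numeric parameters, and calculus nudges those numbers, over many iterations, to shrink the gap between the model’s outputs and the training answers.

```python
# A minimal neural network (numpy only) that learns XOR by gradient
# descent. This is the whole trick, at toy scale: adjustable numbers,
# plus calculus to nudge them toward smaller error on the training data.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # the XOR truth table

# Parameters: 2 inputs -> 8 hidden units -> 1 output.
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(20000):
    # Forward pass: compute the network's current predictions.
    hidden = sigmoid(X @ W1 + b1)
    pred = sigmoid(hidden @ W2 + b2)
    # Backward pass: the derivative of the squared error tells us
    # which direction to nudge every parameter.
    d_pred = (pred - y) * pred * (1 - pred)
    d_hidden = (d_pred @ W2.T) * hidden * (1 - hidden)
    W2 -= 0.5 * hidden.T @ d_pred
    b2 -= 0.5 * d_pred.sum(axis=0)
    W1 -= 0.5 * X.T @ d_hidden
    b1 -= 0.5 * d_hidden.sum(axis=0)

print(np.round(pred.ravel(), 2))  # should approach [0, 1, 1, 0]
```

Nothing here “understands” XOR; the system is statistics converging on a pattern, which is exactly the gap between today’s techniques and AGI.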

For lawyers, this lack of a definition of AI is a problem. If we cannot define something, we cannot legislate for it, nor easily identify the correct existing law when cases come to court in the absence of legislation. Indeed, it is certainly arguable that, as this product is continuously evolving and is unlike any product we have ever seen before, no current legal precedent could apply. Recently, Senator Maria Cantwell (Wash.) proposed a bill that would require the U.S. Department of Commerce to form a committee with an AI focus. According to GeekWire, the draft also seeks to create a federal definition of AI as “systems that think and act like humans or that are capable of unsupervised learning,” and differentiates between AGI, a system that “exhibits apparently intelligent behavior at least as advanced as a person across the full range of cognitive, emotional, and social behaviors,” and “‘narrow artificial intelligence,’ such as self-driving cars or image recognition.”2 Others suggest that we should have “use case” definitions—for example, the way in which Nevada has defined AI for use in autonomous vehicles as “the use of computers and related equipment to enable a machine to duplicate or mimic the behavior of human beings.”3

Transparency

Currently, the legislation around this technology is principally concerned with data privacy and autonomous vehicles.4 In Europe, the “home of privacy,” the General Data Protection Regulation (GDPR) will come into force in 2018.5 Among other provisions, the GDPR gives a citizen of the European Union (EU) the right to demand an account of how a decision that adversely affected them was reached. Thus, if an algorithm was used to deny a loan to an EU citizen, that citizen can require the loan company to explain how it came to its decision.

This presents a problem for most of the systems currently known as AI because many develop their decisions within what is termed a “black box.” This is not the informative black box flight recorder but rather an opaque one in which AI algorithms crunch data to achieve answers. In other words, the scientist feeds data into the computer, and the computer works through many iterations of questions and answers to arrive at the answers it was trained to produce. The poker-playing computer, for example, ran millions of games against itself on supercomputers to achieve its victory. With enough data, computers have the speed, discipline, and endurance to do complex tasks; however, the scientists who design them more often than not cannot say how the computers achieve the answer. For the GDPR to work, developers of AI will have to make these systems transparent.
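
The transparency problem can be stated in code. In the hypothetical sketch below (invented loan features and weights), the trained system renders a decision instantly, but the only “account” it can natively give of that decision is a dump of its numeric parameters, which is no explanation at all in the sense the GDPR contemplates.

```python
# Hypothetical loan model illustrating the "black box" problem: the
# system decides instantly, but its only native account of a decision
# is its learned numeric parameters. Features and weights are invented;
# a deployed model would have thousands or millions of parameters.
import numpy as np

FEATURES = ["income", "debt_ratio", "years_employed", "prior_defaults"]

rng = np.random.default_rng(1)
W = rng.normal(size=(4, 8))  # stand-ins for parameters learned
V = rng.normal(size=(8,))    # from historical lending data

def decide(applicant):
    """Approve if the model's internal score is positive."""
    hidden = np.tanh(applicant @ W)
    return "approved" if hidden @ V > 0 else "denied"

def account_of_decision():
    """All the system can natively offer by way of explanation."""
    return W, V  # pages of raw numbers, not reasons

applicant = np.array([52_000.0, 0.41, 3.0, 1.0])
print(decide(applicant))
print(account_of_decision())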

Take, for example, the Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) tool, developed to assist U.S. judges in sentencing. Under a GDPR-style right, defendants who wished to challenge the fairness of their sentences could ask to see how the computer arrived at its decision. According to a ProPublica study, the COMPAS model is trained on historic criminal justice data, and because that data encompassed racial bias, the system may reproduce that bias in its decisions.6 In State v. Loomis, the judge used the COMPAS tool to assist with sentencing.7 The Wisconsin Supreme Court rejected Loomis’s appeal, saying he would have received the same sentence whether or not the AI was involved.8 However, the court seemed concerned about the use of COMPAS. Chief Justice Roberts was likewise concerned: in response to a question regarding AI in the courts, he said that AI is already in courtrooms “and it’s putting a significant strain on how the judiciary goes about doing things.”9 A few months later, the U.S. Supreme Court declined to hear Loomis’s petition for a writ of certiorari. The question of bias does not end with the way data is collected or cleaned, or how it is used in AI; it also arises from the scientists themselves when they train the AI algorithm. Some suggest that Alexa only comes with a female voice because it was programmed by predominantly white male geeks.

The IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems has recommended a new IEEE standard to deal with this problem of transparency.10 P7001, currently in the working group phase, proceeds on the premise that diverse stakeholders need transparency and that transparency is essential to the way we should design embodied AI—for example, autonomous cars and robots. Accident investigators, lawyers, and the general public need to know what the car or household robot was doing at the time of an accident in order to allocate blame and damages and, most importantly, to instill trust in the technology. However, some scientists consider the task of transparency too difficult to achieve. In their view, human beings cannot hope to understand complex AI algorithms, so transparency is illusory: even if we can see what the system is doing, we cannot understand it. Instead of human regulation, they propose regulation by algorithm. Thus, a car would have standard algorithms to deal with its operation and a “guardian” algorithm to make sure those algorithms stay within set parameters. For example, as the standard algorithm continually collects data about road users to improve the safety and reliability of driving, the guardian AI would prevent it from learning to speed from the data it collects about the habits of human drivers. The remaining unsolved question is: who will guard the guardians?
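
A guardian need not be exotic. A minimal sketch follows, assuming invented limits and function names: the learning system proposes an action, and a fixed supervisory layer, which never updates itself from data, vetoes anything outside human-set parameters.

```python
# A minimal "guardian algorithm" sketch: the learned driving policy may
# propose any speed it likes, but a fixed supervisory check vetoes
# anything outside human-set parameters. Limits and names are invented.

SPEED_LIMIT_KPH = 100.0  # the guardian's fixed, human-set parameter
MAX_ACCEL_KPH_S = 8.0    # maximum allowed change per control step

def learned_policy(sensor_data, current_speed):
    """Stand-in for the adaptive, data-driven driving algorithm.
    Over time it might 'learn' from human drivers that speeding is
    normal -- which is exactly what the guardian must prevent."""
    return current_speed + sensor_data.get("suggested_delta", 0.0)

def guardian(proposed_speed, current_speed):
    """Clamp the proposal to legal and physical limits. Unlike the
    learning system, the guardian never updates itself from data."""
    delta = max(-MAX_ACCEL_KPH_S,
                min(MAX_ACCEL_KPH_S, proposed_speed - current_speed))
    return max(0.0, min(SPEED_LIMIT_KPH, current_speed + delta))

current = 95.0
proposal = learned_policy({"suggested_delta": 20.0}, current)  # wants 115
print(guardian(proposal, current))  # 100.0 -- capped at the limit
```

On this model, “who will guard the guardians” becomes the question of who sets, audits, and may change those fixed parameters.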

Undoubtedly, both opaque and transparent systems raise intellectual property (IP), copyright, and patent issues, which will have to be resolved by legislation or in the courts.

Privacy Concerns

Nor does the problem of data end here. For our current AIs to work, they need massive data sets, which is why the Economist called data the new oil.11 If you have data, you can create AI; everything we do is therefore of value to someone, and the collection and sale of our data is big business. With devices that listen and observe in our homes, a once private place has lost its privacy. Some of these devices listen, record, and store recordings continuously, while others, like Alexa, listen for key “wake up” words. In November 2015, a murder occurred at the home of James Bates. He was accused of the murder, and the prosecutor asked Amazon for any recordings created by Alexa at the time of the death. Amazon refused, saying, “Given the important First Amendment and privacy implications at stake, the warrant should be quashed unless the Court finds that the State has met its heightened burden for compelled production of such materials.”12 However, Bates’s attorney subsequently obtained copies of the recordings from Amazon and released them into evidence. This important legal issue therefore has yet to receive a decision.

Additional concerns about privacy apply to children. Article 16 of the United Nations Convention on the Rights of the Child gives children a right to privacy.13 It is difficult to exercise that right, once you have sufficient mental capacity to do so, if your parents—by having devices that listen and record in your home from your birth—have given away your childhood privacy. Indeed, a child might soon have its own monitoring device: an AI-enabled doll to talk to and learn from. Parents or legislators need to make choices as to what these dolls upload to the cloud or teach children and, perhaps more importantly, as to their cybersecurity protocols. It is unclear whether manufacturers will offer such choices or whether most parents have sufficient knowledge to understand the problem. Illustrating the point, the U.K. Wi-Fi provider Purple recently buried in its free Wi-Fi terms a requirement that users clean toilets for 1,000 hours; of the thousands who logged on, only one person read the terms and conditions.14 It is for this reason that the German government banned “Cayla,” an AI-enabled doll, earlier this year, and in its guidance for autonomous car makers said this about data collected in cars:

It is the vehicle keepers and vehicle users who decide whether their vehicle data that are generated are to be forwarded and used. The voluntary nature of such data disclosure presupposes the existence of serious alternatives and practicability. Action should be taken at an early stage to counter a normative force of the factual, such as that prevailing in the case of data access by the operators of search engines or social networks.15

Another important privacy question arises from the development of sex robots enabled with AI. These robots will also need to collect data, and that data has to be stored somewhere. The possibility of having one’s most intimate secrets hacked must be high, but the real question has to be whether, as demand is principally for female sex robots, this is just another way of perpetuating sexual assault on women. As we move into the robot age, we may have the opportunity to end the “oldest profession” or perhaps simply enable it to metamorphose. This question becomes all the more pressing when thinking about an AI-enabled child sex doll being produced for pedophiles in Japan. Its developer argues that it stops him, and others, from assaulting human children. However, importing this sort of object into the United Kingdom would probably be a crime; a defendant was recently convicted of trying to import a non-AI-enabled childlike sex doll.16 The Foundation for Responsible Robotics recently published a neutral report in an effort to start these conversations.17

Regulating AI

Regulation is often said to stifle innovation, but regulation in this space seems necessary to protect the millions of customers who will buy and use AI-enabled devices. However, it seems unlikely that AI, other than autonomous vehicles, will find its way onto the federal legislative agenda anytime soon. The Kenan Institute for Ethics at Duke University has been considering the idea of “adaptive regulation,” which would involve passing a regulation geared toward a specific emerging technology, so that developers and users have some security to guide investment, and then revisiting the regulation at an early stage to ensure it is working.

There are a number of efforts to create guidelines for the use of AI. Some initiatives come from industry—for example, IBM has published ethical use guidelines and helped to create the Partnership on AI.18 Nonprofits such as AI Global (groupings of geographically localized academics, industry, and government; e.g., AI Austin) and the Future of Life Institute have also created guiding principles for the design, development, and use of AI.19 And, as mentioned, the IEEE has brought some 200 experts together to create standards applicable to work with AI and robotics. In the United Kingdom, the British Standard for Robots and Robotic Devices (BS 8611) provides a guide to the ethical design and application of robots and robotic systems.

However, it seems that the bulk of the law regarding AI will come from judicial decision making, although some existing regulators may find AI falling within their purview. In a hyper-connected world, we have already seen that cybersecurity is vital. As we extend our dependency on AI, cybersecurity will become ever more so, and AI, able to adapt to threats faster than humans, will itself run the cybersecurity systems. Developers have been using game theory to help teach algorithms about strategic defense. In one scenario, two standard algorithms played a game of collecting “things” but could also attempt to kill one another; they resorted to trying to do so only when “things” became scarce. However, when a cleverer algorithm was introduced, it immediately killed the weaker two.20 Regulatory standards can be built on existing ones, such as the U.S. National Institute of Standards and Technology (NIST) standards for cryptography. The Internet of Things (IoT) makes things much less safe: every connected device is a potential access point, and many are cheaper to make than to patch. Additionally, as companies produce AI-enabled devices and then go out of business, the burden of security and safety will grow. For example, will car makers be required to maintain the AI software throughout the lifetime of the car and across multiple owners?
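
The dynamic is easy to reproduce in miniature. The toy simulation below is loosely inspired by that experiment; its rules and numbers are invented, and the actual study trained deep reinforcement learning agents on pixel inputs, but the pattern is similar: aggression pays only under scarcity, and here greater capability is crudely proxied by a greater willingness to attack.

```python
# A toy echo of the gathering experiment (rules and numbers invented):
# agents collect a dwindling resource and may "zap" rivals, and
# aggression pays off only once the resource becomes scarce.
import random

random.seed(0)

def play_round(resources, agents):
    for agent in list(agents):
        if agent not in agents:  # zapped earlier this round
            continue
        rivals = [a for a in agents if a is not agent]
        scarce = resources < len(agents)  # not enough to go around
        if scarce and rivals and random.random() < agent["aggression"]:
            agents.remove(random.choice(rivals))  # zap a rival
        elif resources > 0:
            resources -= 1
            agent["score"] += 1
    return resources

agents = [
    {"name": "A", "aggression": 0.1, "score": 0},
    {"name": "B", "aggression": 0.1, "score": 0},
    {"name": "C", "aggression": 0.9, "score": 0},  # the "cleverer" one
]

resources = 40
for _ in range(30):
    resources = play_round(resources, agents)

print(resources, [(a["name"], a["score"]) for a in agents])
# Cooperation holds while resources are plentiful; once they run low,
# the highly aggressive agent tends to eliminate the others.
```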

As to governmental endeavors in the United States, the benefits and problems of AI were considered by the Obama administration in two reports from the Office of Science and Technology Policy, the second focusing on the growing concern that automating cognitive work might lead to mass unemployment.21 AI has yet to be taken up as a topic of debate by the Trump administration, but there is a bipartisan working group on AI (led by Congressmen John Delaney (Md.) and Pete Olson (Tex.)),22 and the Trump administration has voiced its support for the creation of autonomous vehicles. In the political arena, we may already have seen the impact of AI on the decisions of the American public in the presidential race, through targeted “fake news” and through the minute targeting of voters, using AI to build unique profiles of voters from their public records and social media accounts. Individual data security is often compromised because users believe their accounts are locked to friends and family, but this is not so. A recent Cambridge University study shows that from 10 Facebook “likes” an AI can know you as well as a work colleague does.23 Matters become more complex still now that AI can seamlessly alter video to put words into a speaker’s mouth that are entirely different from what was actually said.24 Recent problems have shown that bad actors can also fool image detection AI—for example, persuading it that a kitten is a computer,25 or corrupting Microsoft’s ill-fated Tay chatbot.26
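
Adversarial attacks of the kitten-as-computer variety exploit a general property of learned models: small, carefully aimed changes to an input can flip the output. A minimal sketch, assuming an invented linear classifier (real attacks on deep image models follow the same gradient logic at vastly larger scale):

```python
# Adversarial perturbation on the simplest possible classifier
# (weights invented): nudging each input feature a small step in the
# direction that most raises the score flips the label. On deep image
# models with millions of pixels, the same gradient trick works with
# per-pixel changes too small for a human to see.
import numpy as np

w = np.array([2.0, -4.0, 1.0, 6.0])  # a "trained" linear model
b = -1.0

def classify(x):
    return "computer" if x @ w + b > 0 else "kitten"

x = np.array([0.2, 0.9, 0.1, 0.3])
print(classify(x), round(float(x @ w + b), 2))      # kitten -2.3

# Move each feature by epsilon in the sign of its weight.
epsilon = 0.2
x_adv = x + epsilon * np.sign(w)
print(classify(x_adv), round(float(x_adv @ w + b), 2))  # computer 0.3
```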

Conclusion

Those who attempt to forecast the future have three chances: to be wrong, to be right, or to be partially right. Undoubtedly, the last is the best course to chart. When looking at the future of AI, the rights to data will likely become an increasingly important issue, as will how the general population learns about AI and what it can do (so that they can safely rear their children and cast their votes). Currently, there is much hype about AI and a paucity of AI scientists outside the major corporations, which could lead to another “AI winter.” This is a time of great opportunity to shape the way in which humanity survives into the future—we should not waste that opportunity!

Endnotes

1. Frank Konkel, Father of the Internet: “AI Stands for Artificial Idiot,” Nextgov (May 9, 2017), http://www.nextgov.com/emerging-tech/2017/05/father-internet-shows-no-love-ai-connected-devices/137697/.

2. Tom Krazit, Washington’s Sen. Cantwell Prepping Bill Calling for AI Committee, GeekWire (July 10, 2017), https://www.geekwire.com/2017/washingtons-sen-cantwell-reportedly-prepping-bill-calling-ai-committee/.

3. Nev. Rev. Stat. § 482A.020.

4. Ethics Commission Creates World’s First Initial Guidelines for Autonomous Vehicles, Germany.info (June 21, 2017), http://www.germany.info/Vertretung/usa/en/__pr/P__Wash/2017/06/21-AutonomousVehicles.html.

5. Council Regulation 2016/679, 2016 O.J. (L 119) 1 (effective May 25, 2018) [hereinafter GDPR].

6. Julia Angwin et al., Machine Bias, ProPublica (May 23, 2016), https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing.

7. Adam Liptak, Sent to Prison by a Software Program’s Secret Algorithms, N.Y. Times, May 1, 2017.

8. State v. Loomis, 881 N.W.2d 749 (Wis. 2016).

9. Liptak, supra note 7.

10. See The IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems, IEEE Standards Ass’n, https://standards.ieee.org/develop/indconn/ec/autonomous_systems.html (last visited Oct. 17, 2017).

11. The World’s Most Valuable Resource Is No Longer Oil, but Data, Economist (May 6, 2017), https://www.economist.com/news/leaders/21721656-data-economy-demands-new-approach-antitrust-rules-worlds-most-valuable-resource.

12. Eliott C. McLaughlin, Suspect OKs Amazon to Hand Over Echo Recordings in Murder Case, CNN (Apr. 26, 2017), http://www.cnn.com/2017/03/07/tech/amazon-echo-alexa-bentonville-arkansas-murder-case/index.html.

13. Convention on the Rights of the Child art. 16, Nov. 20, 1989, 1577 U.N.T.S. 3.

14. Rachel Thompson, 22,000 People Accidentally Signed Up to Clean Toilets Because People Don’t Read Wi-Fi Terms, Mashable (July 13, 2017), http://mashable.com/2017/07/13/wifi-terms-conditions-toilets/#kireRB9KmiqJ.

15. Ethics Commission Creates World’s First Initial Guidelines for Autonomous Vehicles, supra note 4 (guideline 15).

16. Man Who Tried to Import Childlike Sex Doll to UK Is Jailed, Guardian, June 23, 2017.

17. Noel Sharkey et al., Found. for Responsible Robotics, Our Sexual Future with Robots (2017).

18. Transparency and Trust in the Cognitive Era, IBM THINK Blog (Jan. 17, 2017), https://www.ibm.com/blogs/think/2017/01/ibm-cognitive-principles/.

19. See AI Austin, https://www.ai-austin.org/ (last visited Oct. 17, 2017); Asilomar AI Principles, Future of Life, https://futureoflife.org/ai-principles/ (last visited Oct. 17, 2017).

20. Joel Z. Leibo et al., Multi-Agent Reinforcement Learning in Sequential Social Dilemmas, in Proceedings of the 16th International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2017) (S. Das et al. eds., 2017), https://storage.googleapis.com/deepmind-media/papers/multi-agent-rl-in-ssd.pdf.

21. Kristin Lee, Artificial Intelligence, Automation, and the Economy, Obama White House (Dec. 20, 2016), https://obamawhitehouse.archives.gov/blog/2016/12/20/artificial-intelligence-automation-and-economy.

22. Press Release, Congressman John Delaney, Delaney Launches Bipartisan Artificial Intelligence (AI) Caucus for 115th Congress (May 24, 2017), https://delaney.house.gov/news/press-releases/delaney-launches-bipartisan-artificial-intelligence-ai-caucus-for-115th-congress.

23. Computers Using Digital Footprints Are Better Judges of Personality than Friends and Family, Univ. of Cambridge (Jan. 12, 2015), http://www.cam.ac.uk/research/news/computers-using-digital-footprints-are-better-judges-of-personality-than-friends-and-family.

24. Meg Miller, Watch This Video of Obama—It’s the Future of Fake News, Co.Design (July 18, 2017), https://www.fastcodesign.com/90133566/watch-this-video-an-ai-created-of-obama-its-the-future-of-fake-news.

25. Anish Athalye, Robust Adversarial Examples, OpenAI (July 17, 2017), https://blog.openai.com/robust-adversarial-inputs/.

26. James Vincent, Twitter Taught Microsoft’s AI Chatbot to Be a Racist Asshole in Less than a Day, Verge (Mar. 24, 2016), https://www.theverge.com/2016/3/24/11297050/tay-microsoft-chatbot-racist.


Kay Firth-Butterfield, LLM, MA, FRSA (kay.firth-butterfield@weforum.org) is a barrister in the United Kingdom and the Project Head of AI and ML at the World Economic Forum. She is an associate fellow of the Centre for the Future of Intelligence at the University of Cambridge and a senior fellow and distinguished scholar at the Robert S. Strauss Center for International Security and Law, University of Texas, Austin; cofounder of the Consortium on Law and Ethics of A.I. and Robotics, University of Texas, Austin; and vice-chair of the IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems. She has taught courses on law and policy of emerging technologies and AI at the University of Texas Law School in Austin.