January 17, 2020 Feature

Automated Personal Assistants with Multiple Principals: Whose Agent Is It?

By David K. A. Mordecai, PhD

The term automated personal assistant (i.e., virtual assistant) commonly refers to mobile software agents that perform tasks, or services, on behalf of an individual (i.e., the device user or application user) based on a combination of user input, location awareness, and the ability to access information from a variety of online sources (e.g., weather conditions, traffic congestion, news, stock prices, user schedules, retail prices, etc.).

In the computer science technical literature, the term agent refers to a broad range of technologies and a corresponding research domain within the field of artificial intelligence (AI) primarily focused on autonomous information processing programs. In the AI field, agents are defined as software applications that act on behalf of a user to meet certain objectives or complete tasks with de minimis direct input or supervision from the user. Other definitions of agents describe computational systems that sense and act autonomously in some complex dynamic environment to realize a set of goals or tasks.1
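To make these definitions concrete, the following minimal sketch (in Python, using a hypothetical thermostat-style environment and goal rather than any particular product's interface) illustrates the sense-decide-act cycle by which such an agent pursues a standing objective with de minimis direct input from the user:

    # Illustrative sense-decide-act loop for a goal-directed software agent.
    # The "environment" is a toy thermostat scenario; all names are hypothetical.

    TARGET_TEMP = 21.0  # the user's standing goal, specified once in advance

    def sense(environment):
        # Observe the current state of the environment (e.g., via a sensor reading).
        return environment["temperature"]

    def decide(observed_temp):
        # Choose an action that moves the environment toward the goal.
        if observed_temp < TARGET_TEMP - 0.5:
            return "heat"
        if observed_temp > TARGET_TEMP + 0.5:
            return "cool"
        return "idle"

    def act(environment, action):
        # Carry out the chosen action without further user input.
        if action == "heat":
            environment["temperature"] += 0.5
        elif action == "cool":
            environment["temperature"] -= 0.5

    environment = {"temperature": 18.0}
    for _ in range(10):  # the agent runs autonomously, without user supervision
        act(environment, decide(sense(environment)))
    print(environment["temperature"])  # converges toward the user's goal

The point of the sketch is only that the program, once configured, selects and executes actions on the user's behalf; the degree of autonomy grows as the decide step becomes more sophisticated and less supervised.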

Essentially, software agents are purposive, although their degree of autonomy and sophistication varies, and accordingly so does their degree of dependency upon active supervision from either the user or one or more vendors technically supporting the service bundle underlying the agent (i.e., the operator or custodian). These vendors often have incentives independent of, or conflicting with, those of the device user. This article highlights a few open questions and foundational principles relevant to the contract and tort liability implications of software agency in this context.

Intelligent Automated Assistants vs. Smart Personal Agents

Conceptually, automated personal assistants tend to be classified into two general types: (1) intelligent automated assistants (e.g., Alexa, Cortana, and Siri), which perform concierge-type tasks (e.g., making dinner reservations, purchasing event tickets, and making travel arrangements) or provide information based on voice input or commands; and (2) “smart” personal agents, which automatically perform management or data-handling tasks based on online information and events often without user initiation or interaction. These services are commonly delivered via mobile devices, Bluetooth-connected accessories, or other voice-controlled Wi-Fi-connected “smart” devices, for example, speakers (like the Amazon Echo series), watches, wristbands, augmented reality glasses, thermostats, security cameras, televisions, kitchen appliances, etc.2

Both types of automated personal assistant technology are enabled by the combination of mobile computing devices, application programming interfaces (APIs), and the proliferation of mobile applications. However, intelligent automated assistants are designed to perform specific, one-off tasks specified by user voice instructions, while smart personal agents perform ongoing tasks (e.g., schedule management) autonomously. In both cases, the automated personal assistant can be considered to be enacting purposive agency on behalf of one (or more) user(s).

Examples of tasks that may be performed by a smart personal agent type of automated personal assistant include the following: schedule management (e.g., sending an alert that the user is running late to a business dinner due to traffic conditions, updating schedules for both parties, and changing the restaurant reservation time) and personal health management (e.g., monitoring caloric intake, heart rate, and exercise regimen, then making recommendations for healthy choices).3

Common Agency and the Multiple Principal Problem

Vendors providing personal assistants tend to have commercial incentives that may be indifferent to, unaligned with, or even contrary to those of the user. For example, precision marketing firms may have a commercial incentive to extract wealth from the user as a consumer, by engaging in targeted advertising with price discrimination in order to maximize vendor sales and profitability.4 Since automated personal assistants tend to be composed of software-as-a-service (SaaS) bundles provided by collections of vendors and delivered via APIs and code bases assembled into toolkits commonly referred to as software development kits (SDKs), certain fundamental questions arise regarding the conflicting objectives of the user and the SaaS vendors supporting the service bundles that comprise the agency of the personal assistant. Interdisciplinary legal and economic scholarship has already been articulating the legal standing of artificial agents, as well as the status of the commercial transactions they execute, but is only just beginning to explore the cyberphysical effects of the actions by personal assistants and other autonomous software agents.

One still emergent and somewhat underdeveloped direction of inquiry concerns specifying the principal-agent relationships between automated personal assistants and their multiple principals. In particular, whether or not one or more of those vendors supporting the personal assistant conditionally or circumstantially assume the role of de facto agent for the user—such that the software agent is deemed to be an extension of the vendor as a principal whose interests compete with those of the user—may be situation and fact specific (e.g., dependent upon the software architecture and the nature of the services and corresponding transactions).

Common agency and the multiple principal problem—also referred to as the multiple accountabilities problem, i.e., serving n masters where n ≥ 2—is an extension of the principal-agent problem, in which the agent acts in the interest of multiple persons, sometimes with inconsistent, orthogonal, or conflicting interests.5 Tradeoffs or tensions between the individual (private or proprietary) interests of the parties and their joint or social interests can result in externalities or other inefficiencies.
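As a stylized illustration of this tension (a sketch only, with entirely hypothetical utility functions and weights rather than any vendor's actual objective), consider an agent that selects a recommendation by maximizing a blended objective across two principals; as the weight placed on the vendor's interest grows, the selected action drifts away from the user's optimum:

    # Toy model of common agency: one agent, two principals with conflicting interests.
    # Options, utility functions, and weights are hypothetical, for illustration only.

    options = [("budget", 10.0), ("midrange", 25.0), ("premium", 60.0)]  # (item, price)

    def user_utility(price):
        return -price          # the user prefers the cheaper option

    def vendor_utility(price):
        return 0.3 * price     # the vendor prefers the higher-margin sale

    def agent_choice(vendor_weight):
        # The agent maximizes a weighted combination of its principals' interests.
        return max(options,
                   key=lambda o: (1 - vendor_weight) * user_utility(o[1])
                                 + vendor_weight * vendor_utility(o[1]))

    print(agent_choice(0.1))   # low vendor influence  -> ('budget', 10.0)
    print(agent_choice(0.9))   # high vendor influence -> ('premium', 60.0)

The weighting parameter stands in for the degree of control each principal exercises over the agent's objective, which is precisely what is contested in the multiple principal setting.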

In circumstances in which these multiple principals have conflicting (or countervailing) interests, information asymmetry and moral hazard (i.e., any inherent incentive to shift risk) among or between the parties may become the prevailing principles for apportionment of contractual or tort liability.6 For example, apportionment of liability might be contingent upon relative attributions of knowledge across the principals, i.e., asymmetric access to information. With increased sophistication in accessing user data, the vendors deploying the agents may be attributed with knowledge of a user’s otherwise private personal information regarding that user’s preferences and interests, which is likely to give rise to legal liability. This differential access by a vendor to the private information of a user may be further compounded by the typical user’s expectation of reliance upon the software agent, given that user’s more limited technical knowledge regarding the agent’s configuration and function, and thereby regarding the degree of control over the software agent exercised by the vendor or custodian.

Since software agents may exhibit varying degrees of autonomy under the control of their principals (whether owners, users, operators, or custodians), duties of care for software agents whose agency involves cyberphysical risks might be drawn from sources ranging from case law related to the prudential use and security of dangerous instrumentalities and machines (e.g., industrial robots) to strict liability and negligence theories.

Software Agency

The law of agency is an area of commercial law dealing with a set of contractual, quasi-contractual, and noncontractual fiduciary relationships in which one person, the agent, is authorized to act on behalf of another person, the principal, in order to create legal relations with a third party.7 Although legal reasoning is often analogical, legal theory is still wrestling with whether AI robots are most analogous to human adults, children, animals (either domesticated or wild), corporations, tools, or some other category.8

In the fundamental principal-agent relationship, the principal either expressly or implicitly authorizes the agent to act under the control and on behalf of the principal, and the agent is thereby authorized to execute contractual arrangements with third parties on behalf of the principal. This branch of law separates and regulates the relationships between: (1) agents and principals (internal relationship), known as the principal-agent relationship; (2) agents and the third parties with whom the agents interact on behalf of the principals (external relationship); and (3) principals and those third parties with which the agents contract.9 The two key conditions worth noting for a principal to grant authority to an agent are “under the control of” and “on behalf of” that principal.

While there are different types of software agents and architectures, and the law may treat them differently depending upon a range of facts and circumstances, the following principles generally apply. The reciprocal rights and liabilities between a principal and an agent reflect commercial and legal realities, in which the principal relies on the agent. The principal is bound by the contract entered into by the agent, so long as the agent performs within the scope of the agency. A third party may rely in good faith on the representation by a person who identifies himself or herself as an agent for another. It is not always reasonably feasible to verify whether an agent who is represented as having the authority to act for another actually has such authority. In general, if it is subsequently discovered that the alleged agent was acting without the necessary authority, the agent will be held liable. However, in such circumstances, if the agent is software, then who is liable? If there are multiple principals, which principal(s) may be liable and to what degree? How is liability to be apportioned?

The type of authority software agents are deemed to possess (as conferred respectively by a particular principal) may also play a role. When an agent is acting within the scope of authority granted by a principal, the agent’s actions can bind the principal to those obligations with third parties. In some cases, the scope of agency and the degree of authority attributed to automated personal assistants may be conditional upon the expectations and perceptions of third parties. Customarily, software agents have tended to be limited to conducting a specified transaction or series of transactions and thus appear most similar to special agents.

However, with the proliferation of distributed software architectures that transact through APIs across diverse code bases in which multiple service providers also participate as stakeholders, the degree of control migrates from the user to vendors with divergent and often competing interests, and the alignment of agency, authority, and accountability across these parties is likely to become increasingly conflated. An overriding open question with software agents involves the degree to which expectations and representations indicated by either users or vendors (whether directly or indirectly) might dictate apparent authority to obligate each respective principal in a particular circumstance.10

This suggests that for service providers supporting these automated personal assistants, the degree to which the personal assistant may be deemed to be an extension of their independent or competing interests (and perhaps in conflict with the interests of the user) may be subject to the state of the system—i.e., state dependent as described in related economic, engineering, and mathematical terms—and highly fact and case specific, according to prevailing contextual conditions (either objectively situational or subjectively circumstantial). Depending upon the facts and circumstances, perhaps path dependence—i.e., the sequence of actions by the automated personal assistant as a dynamic system of overlapping interests—may be relevant to the apportionment of liability.

Consider how third-party expectations may be affected by the state dependence—further compounded by the informational asymmetry, complexity, and opacity—of personal assistants as distributed systems of bundled services. Customarily, notions of information asymmetry, ownership, control (with corresponding accountability), and authority implicate both contractual and tort liability of the agent and its principal(s) to a third party.

Where the principal cannot be bound because the agent has no actual or apparent authority, the purported agent typically becomes liable to the third party for breach of the implied warranty of authority. Yet, in those circumstances in which the agent does have either actual or apparent authority, typically the agent may not be deemed liable for acts performed within the scope of such authority, provided that both the relationship of the agency and the identity of the principal have been disclosed.

However, when the agency is undisclosed or partially disclosed, both the agent and the principal may be liable. For reasons articulated above, questions of apportionment and attribution may prevail, i.e., “which principal?” With regard to the relative attribution of accountability and the corresponding apportionment of liability of the principal(s) to the agent, if the agent has acted within the scope of the actual authority given, typically the principal must indemnify the agent for payments made during the course of the relationship, whether the expenditure was expressly authorized or merely necessary in promoting the principal’s business. Once again, the question remains as to which principal and to what degree. Since, as a foundational principle of both law and economics, even bilateral contracts are inherently state dependent and incomplete, it seems reasonable by extension that a terms-of-use agreement, as a bilateral agreement, shares those characteristics. Whether either actual or apparent authority on behalf of a user exists for automated personal assistants, which are multilateral and often ad hoc arrangements, remains a matter of inquiry.

The apportionment of liability of agent to (one or more) principal(s) may arise in those instances in which the agent has acted without actual authority, but the principal is nevertheless bound because the agent had apparent authority, in which case the agent is liable to indemnify the principal for any resulting loss or damage. However, the question arises once again as to the relative attribution of accountability and corresponding apportionment of liability across the principals, depending upon the conditions under which the agent is acting as an extension of a particular principal.

Contractual remedies may typically entail specific performance or other resolutions related to the risk-allocation consequences of the respective solutions, depending upon whether the principal is a user of the agent or an operator (i.e., a service vendor) of the agent, as well as upon the type of error precipitating the potential loss associated with the obligation.

In comparison to contract liability, depending upon the facts and circumstances, tort liability may include doctrines and principles from a diverse array of liability schemes and recovery theories related to both physical harm and economic loss, e.g., negligence, product liability, malpractice liability, etc. In addition to conventional theories of supplier liability and operator vs. user liability, the theory of tort liability may be based on liability doctrines developed for wild and domestic animals, unpredictable actors under supervision, or extremely hazardous activities.

A cause of action in tort law based on the doctrine of negligent entrustment arises in circumstances in which one party (the entrustor) is held liable for negligence because he or she negligently provided another party (the entrustee) with a dangerous instrumentality, and the entrusted party caused injury to a third party with that instrumentality. Customarily, the cause of action most frequently arises where one party allegedly allows another party, incompetent or otherwise impaired, to operate his or her automobile. Given the growing evidence of the fragility of machine learning (ML) algorithms, an open question might be less about the analogous question of whether an owner might entrust a minor to drive a vehicle and more about whether an owner should entrust a minor to decide who may drive the vehicle.11

Using Reliability Engineering to Mitigate Risk

Given the functional mechanisms underlying these bundled systems of distributed services, the training and control of the agent by the user and/or operator—subject to relative degrees of control and information asymmetry—is relevant to both foreseeability and causation attribution, as well as to the apportionment of but-for damages. Furthermore, the state-of-the-art defense is limited by the principles of due care as dictated by generally accepted practices, principles, and standards for edge-case testing, in conjunction with those associated with the discipline of reliability engineering, a subdiscipline of systems engineering that emphasizes dependability in the life cycle management of a product.12
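For a sense of what edge-case testing entails in practice, the following sketch (the scheduling routine and its stated parameter limits are hypothetical) exercises a component at the extremes of its supported operating range, the kind of boundary behavior that observational validation alone tends to miss:

    import unittest

    def schedule_reminder(minutes_ahead, max_minutes=7 * 24 * 60):
        # Hypothetical scheduling routine with stated operating limits.
        if not 0 < minutes_ahead <= max_minutes:
            raise ValueError("minutes_ahead outside supported range")
        return {"fires_in": minutes_ahead}

    class EdgeCaseTests(unittest.TestCase):
        def test_minimum_boundary(self):
            self.assertEqual(schedule_reminder(1)["fires_in"], 1)

        def test_maximum_boundary(self):
            self.assertEqual(schedule_reminder(7 * 24 * 60)["fires_in"], 7 * 24 * 60)

        def test_below_minimum_rejected(self):
            with self.assertRaises(ValueError):
                schedule_reminder(0)

        def test_above_maximum_rejected(self):
            with self.assertRaises(ValueError):
                schedule_reminder(7 * 24 * 60 + 1)

    if __name__ == "__main__":
        unittest.main()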

The primary focus of reliability engineering is dependability (i.e., reliability), which describes the capability of a system or component to function under stated conditions for a specified period of time.13 Reliability engineering is also closely interrelated with safety engineering and system safety, each of which employs common methods for analysis and tends to be interdependent.14 In practice, the objectives of reliability engineering (in decreasing order of priority) are: (1) to apply engineering knowledge and specialist techniques to prevent or to reduce the likelihood or frequency of failures; (2) to identify and correct the causes of failures that do occur despite the efforts to prevent them, and to determine ways of coping with failures that do occur, if their causes have not been corrected; and (3) to apply methods for estimating the likely reliability of new designs, and for analyzing reliability data. This ordering of priorities is somewhat dictated by an economic notion of commercial reasonability, in terms of balancing cost minimization against the production of reliable products, which entails reasonable foreseeability of possible causes of failures (e.g., fault analysis) and reasonable measures of prevention and mitigation (e.g., redundancies), as well as suitably proficient application of methods for analyzing designs and data.
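By way of a simple worked example (with assumed, hypothetical failure rates rather than measured ones), the dependability of a bundled service can be approximated from the reliability of the components it depends upon; under a constant-failure-rate model, a chain of serially dependent services is only as reliable as the product of its parts, and redundancy on the weakest link raises that figure:

    import math

    # Hypothetical constant failure rates (failures per hour) for three SaaS
    # components that an automated personal assistant depends upon in series.
    failure_rates = [1e-4, 5e-5, 2e-4]
    hours = 720  # roughly one month of continuous operation

    # Component reliability under an exponential model: R_i(t) = exp(-lambda_i * t).
    component_reliability = [math.exp(-lam * hours) for lam in failure_rates]

    # Series system: the bundle works only if every component works.
    series_reliability = math.prod(component_reliability)

    # Mitigation by redundancy: duplicate the weakest component in parallel.
    weakest = min(component_reliability)
    with_redundancy = series_reliability / weakest * (1 - (1 - weakest) ** 2)

    print(f"series reliability over {hours} h: {series_reliability:.3f}")
    print(f"with redundancy on weakest link:   {with_redundancy:.3f}")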

Within the context of both contractual and tort liability, with regard to principles of due care and apportionment of liability—associated with the notions of reasonable foreseeability, causation, and but-for damages—there is growing technical evidence regarding the fragility of ML and AI algorithms associated with three primary contributing factors: (1) morphing of the objective function and fitness functions by the algorithm(s), resulting in drift and/or misspecification of the algorithm; (2) challenges with ambiguity of context or settings, lack of specificity, and amorphous or nonstationary situation parameters; and (3) high dimensionality of the data, leading to a tendency for minor feature artifacts to produce large misclassifications or systematic errors.15
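A compact numerical sketch (using synthetic data and a deliberately simple classifier, not any production system) illustrates the kind of fragility at issue: a model that performs well under the conditions it was fit to can degrade sharply once the operating context drifts:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 500

    # Synthetic training data: two classes drawn from 2-D Gaussians.
    class0 = rng.normal(loc=[0.0, 0.0], scale=1.0, size=(n, 2))
    class1 = rng.normal(loc=[3.0, 3.0], scale=1.0, size=(n, 2))

    # A deliberately simple nearest-centroid classifier.
    centroids = np.vstack([class0.mean(axis=0), class1.mean(axis=0)])

    def predict(points):
        distances = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
        return distances.argmin(axis=1)

    def accuracy(points0, points1):
        preds = predict(np.vstack([points0, points1]))
        labels = np.array([0] * len(points0) + [1] * len(points1))
        return (preds == labels).mean()

    # In-distribution test data: performance looks excellent.
    test0 = rng.normal(loc=[0.0, 0.0], scale=1.0, size=(n, 2))
    test1 = rng.normal(loc=[3.0, 3.0], scale=1.0, size=(n, 2))
    print("in-distribution accuracy:", accuracy(test0, test1))

    # Drifted operating conditions: the same classifier degrades sharply.
    shifted0 = rng.normal(loc=[2.0, 2.0], scale=1.5, size=(n, 2))
    shifted1 = rng.normal(loc=[3.5, 3.5], scale=1.5, size=(n, 2))
    print("after distributional drift:", accuracy(shifted0, shifted1))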

As an example of practice principles relevant to due care and entrustment, as well as to foreseeability, causation, and but-for damages, a 2017 blog article summarizing a research paper on accidents in ML systems highlights five different failure modes for ML, relevant technical research, and directions for protecting against failures:

The authors of the paper define accidents as:

“unintended and harmful behavior that may emerge from machine learning systems when we specify the wrong objective function, are not careful about the learning process, or commit other machine learning related errors. . . . As AI capabilities advance and as AI systems take on increasingly important societal functions, we expect the fundamental challenges discussed in this paper to become increasingly important.” . . . As agents become more complex, and we start to deploy them in more complex environments both the opportunity for and the consequences of side effects increase. At the same time, agents are being given increasing autonomy. What could possibly go wrong? The authors explore five different failure modes in machine learning . . . 1. Negative side effects . . . 2. Reward hacking . . . 3. Insufficient oversight . . . 4. (Un)safe exploration . . . [and] 5. Fragility in the face of distributional shift . . . .16
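One of these failure modes, reward hacking, can be made concrete with a minimal hypothetical sketch: an agent that greedily optimizes the stated (proxy) reward may select behavior that scores well on the metric while failing the outcome the designer actually intended.

    # Toy illustration of reward hacking: the proxy reward diverges from the
    # designer's true objective. Behaviors and scores are hypothetical.

    behaviors = {
        # behavior: (proxy_reward, true_objective)
        "tidy the room thoroughly":    (8.0, 10.0),
        "shove clutter under the bed": (9.5, 1.0),   # games the visible-mess metric
        "do nothing":                  (0.0, 0.0),
    }

    chosen_by_agent = max(behaviors, key=lambda b: behaviors[b][0])       # optimizes the metric
    intended_by_designer = max(behaviors, key=lambda b: behaviors[b][1])  # the actual goal

    print("agent selects:    ", chosen_by_agent)       # 'shove clutter under the bed'
    print("designer intended:", intended_by_designer)  # 'tidy the room thoroughly'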

Another technical research paper cited by the blog examines the potential for accidents and possible prevention mechanisms in AI and ML systems, based on principles typically associated with due care, negligence, and liability attribution for handling invasive species, animal breeding, or infectious agent experimentation in epidemiology:

Selection gone wild examples explore the divergence between what an experimenter is asking of evolution, and what they think they are asking. Experimenters often overestimate how accurately their measure (as used by the fitness function) reflects the underlying outcome success they have in mind. “This mistake is known as confusing the map with the territory (e.g., the metric is the map, whereas what the experimenter intends is the actual territory).” “[I]t is often functionally simpler for evolution to exploit loopholes in the quantitative measure than it is to achieve the actual desired outcome . . . digital evolution often acts to fulfill the letter of the law (i.e., the fitness function) while ignoring its spirit.” . . . “Many researchers [in the nascent field of AI safety] are concerned with the potential for perverse outcomes from optimizing reward functions that appear sensible on their surface. The list compiled here provides additional concrete examples of how difficult it is to anticipate the optimal behavior created and encouraged by a particular incentive scheme. Additionally, the narratives from practitioners highlight the iterative refinement of fitness functions often necessary to produce desired results instead of surprising, unintended behaviors.” . . . [W]e may encounter similar unintended consequences of incentive systems when designing token schemes.17

Open Questions

Whether a mobile assistant attempts to balance competing or even conflicting objectives of users and (one or more) vendors in making purchases via a smart appliance, directing an autonomous vehicle, or regulating environmental conditions in a home or office, the apportionment of agency and its consequences will be both a technical inquiry and an economic analysis, pertaining to risk-adjusted welfare allocation dependent upon the economics of information and uncertainty.

How the scope of authority for software agents will be determined remains to be seen. The most straightforward instances will likely be those governed by a written agreement (e.g., the terms of use). However, in the absence of an explicit written clause governing a particular instance, the scope of authority for a software agent in a given situation is as yet unclear. The expectations of those engaging with software agents, which would contribute to determining the degree of authority, whether implied, actual, or ostensible, are likely not yet established. The open question remains whether, and to what degree, the expectations and representations of users and vendors will dictate apparent authority to obligate a respective principal in a particular instance.

That said, with respect to reasonable due care, foreseeability, causation, and damages, reliability engineering as a risk mitigating discipline is likely to play a central role in the underlying fact pattern.

Endnotes

1. See Hyacinth S. Nwana, Software Agents: An Overview, 11 Knowledge Engineering Rev. 205 (1996); see also Stan Franklin & Arthur C. Graesser, Is It an Agent, or Just a Program?: A Taxonomy for Autonomous Agents, in Intelligent Agents III: Agent Theories, Architectures, and Languages (Jörg P. Müller et al. eds., 1996); Michael Wooldridge & Nicholas R. Jennings, Intelligent Agents: Theory and Practice, 10 Knowledge Engineering Rev. 115 (1995).

2. By way of illustration, a smart speaker commonly refers to an internet-enabled (i.e., connected) speaker controlled by spoken commands and capable of streaming audio content and relaying information, as well as communicating with other devices. The Internet of Things (IoT) application of networked and distributed sensors throughout the smart (i.e., automated) home involves distributed or networked active online measurement and control to automate heating, ventilation, air conditioning, and security, but can also involve appliances such as refrigerators, stoves, washers, dryers, and many other items. For additional background that also highlights cybersecurity vulnerabilities of smart appliances, see Ken Hanly, Costs, Advantages and Disadvantages of Smart Homes, Digital J. (July 18, 2017), http://www.digitaljournal.com/tech-and-science/technology/costs-advantages-and-disadvantages-of-smart-homes/article/497912.

3. Rip Empson, Three Companies Chi-Hua Chien of Kleiner Perkins Would Love to Invest In, TechCrunch (July 29, 2011), https://techcrunch.com/2011/07/29/three-companies-chi-hua-chien-of-kleiner-perkins-would-love-to-invest-in.

4. See Martin Rugfelt, Artificial Intelligence’s Impact on Marketing, DMN (May 1, 2014), https://www.dmnews.com/customer-experience/article/13036746/artificial-intelligences-impact-on-marketing.

5. See Charles Jolley, Personal Assistant Bots like Siri and Cortana Have a Serious Problem, VentureBeat (July 17, 2016), https://venturebeat.com/2016/07/17/personal-assistant-bots-like-siri-and-cortana-have-a-serious-problem; see also B. Douglas Bernheim & Michael D. Whinston, Common Agency, 54 Econometrica 923 (1986); Michael Peters, Common Agency and the Revelation Principle, 69 Econometrica 1349 (2000).

6. See Mark V. Pauly, The Economics of Moral Hazard: Comment, 58 Am. Econ. Rev. 531 (1968); see also Eva I. Hoppe, Observability of Information Acquisition in Agency Models, 119 Econ. Letters 104 (2013); David Rowell & Luke B. Connelly, A History of the Term “Moral Hazard,” 79 J. Risk & Ins. 1051 (2012).

7. Restatement (Second) of Agency § 1 (Am. Law Inst. 1958) (“(1) Agency is the fiduciary relation which results from the manifestation of consent by one person to another that the other shall act on his behalf and subject to his control, and consent by the other so to act. (2) The one for whom action is to be taken is the principal. (3) The one who is to act is the agent.”).

8. Ryan Calo, Robotics and the Lessons of Cyberlaw, 103 Cal. L. Rev. 513, 516 (2015) (“Courts that struggled for the proper metaphor to apply to the Internet will struggle anew with robotics.”); Ignacio N. Cofone, Servers and Waiters: What Matters in the Law of A.I., 21 Stan. Tech. L. Rev. 167, 174 (2018) (“The biggest challenge of emerging A.I. law is finding the appropriate legal category for A.I. agents.”); see also Scott Brewer, Exemplary Reasoning: Semantics, Pragmatics, and the Rational Force of Legal Argument by Analogy, 109 Harv. L. Rev. 923, 938–63 (1996); Emily Sherwin, A Defense of Analogical Reasoning in Law, 66 U. Chi. L. Rev. 1179, 1179 (1999); Cass R. Sunstein, On Analogical Reasoning, 106 Harv. L. Rev. 741, 741 (1993).

9. See Jean-Jacques Laffont & David Martimort, The Theory of Incentives: The Principal-Agent Model (2002); Eric A. Posner, Agency Models in Law and Economics (Univ. of Chi. John M. Olin Law & Econ., Working Paper No. 92, 2000).

10. Three broad classes of agency are as follows: (1) universal agents possess broad authority to act on behalf of the principal, e.g., a power of attorney or a professional relationship; (2) general agents possess more limited authority to conduct a series of transactions over a specified period of time; and (3) special agents are authorized to conduct either a single transaction or a specified series of discrete transactions over a specified period of time. Authority refers to the binding scope of agency granted by the principal, of which three types are recognized by law: actual authority (express or implied), apparent authority, and ratified authority. Express actual authority means an agent has been expressly notified that it may act on behalf of a principal. Implied actual authority (i.e., usual authority) is deemed reasonably necessary for an agent to carry out its express authority, which might be inferred by the role of an agent. As a general principle, an agent is only entitled to indemnity from the principal if acting within the scope of its actual authority; otherwise, it may be liable to a third party for breach of the implied warranty of authority. In contrast, apparent authority (i.e., ostensible authority) arises when a principal’s words or conduct would lead a reasonable third party to rely on the authority for an agent’s actions, regardless of whether the principal and agent expressed the scope of action. For example, where one person appoints another to a position with agency, knowledge of such appointment may entitle others to assume apparent authority ordinarily entrusted to such a position. If a principal creates the impression that an agent is authorized but there is no actual authority, third parties are protected to the extent they act reasonably. Sometimes termed agency by estoppel or the doctrine of holding out, the principal will be estopped from denying agent authority if third parties acted to their own detriment in relying on the agent’s representations. Ratified authority occurs when an agent acts without authority, but the principal later approves the act.

11. See Michael L. Rustad & Thomas H. Koenig, The Tort of Negligent Enablement of Cybercrime, 20 Berkeley Tech. L.J. 1553, 1578–80 (2005).

12. See Inst. of Elec. & Elecs. Eng’rs, IEEE Standard Computer Dictionary: A Compilation of IEEE Standard Computer Glossaries (1990); John Moubray, Reliability-Centered Maintenance 250–60 (2d ed. 2008). Reliability testing of functionality, performance interoperability, and cybersecurity includes such practices as observational validation analysis, fuzzing, edge case, and stress testing. Edge-case testing extends observational analysis and validation against ground truth in lab experimentation and field trials to problems that occur only at an extreme (maximum or minimum) operating parameter, i.e., an edge case. See Josh Zimmerman, Unit Testing, in Principles of Imperative Computation (2012), https://www.cs.cmu.edu/~rjsimmon/15122-s13/rec/07.pdf.

13. In practice, reliability engineering deals with the estimation, prevention, and management of high levels of “lifetime” engineering uncertainty and risks of failure, in which stochastic variables define and affect reliability, although reliability is not (solely) a matter of mathematical and statistical analysis. See Juran’s Quality Control Handbook 24 (J.M. Juran & Frank M. Gryna eds., 4th ed. 1998); Patrick D.T. O’Connor, Practical Reliability Engineering (5th ed. 2011); J.H. Saleh & Ken Marais, Highlights from the Early (and Pre-) History of Reliability Engineering, 91 Reliability Engineering & Sys. Safety 249 (2006); Albertyn Barnard, Lambda Consulting, Why You Cannot Predict Electronic Product Reliability, Presentation at the International Applied Reliability Symposium (Mar. 29, 2012), http://www.lambdaconsulting.co.za/2012ARS_EU_T1S5_Barnard.pdf.

14. Reliability engineering focuses on costs of failure caused by system downtime and costs of redundancy, repair and replacement, support, and warranty claims. Safety engineering normally focuses more on preserving life and nature than on cost, and therefore deals only with particularly dangerous system-failure modes. High reliability (safety factor) levels also result from good engineering and from attention to detail, and almost never from only reactive failure management (using reliability accounting and statistics).

15. See Gary Marcus & Ernest Davis, Rebooting AI: Building Artificial Intelligence We Can Trust (2019); Devin Coldewey, This Clever AI Hid Data from Its Creators to Cheat at Its Appointed Task, TechCrunch (Dec. 31, 2018), https://techcrunch.com/2018/12/31/this-clever-ai-hid-data-from-its-creators-to-cheat-at-its-appointed-task; Andrew Ng, What Artificial Intelligence Can and Can’t Do Right Now, Harv. Bus. Rev. (Nov. 8, 2016), https://hbr.org/2016/11/what-artificial-intelligence-can-and-cant-do-right-now; Jason Pontin, Greedy, Brittle, Opaque, and Shallow: The Downsides to Deep Learning, Wired (Feb. 2, 2018), https://www.wired.com/story/greedy-brittle-opaque-and-shallow-the-downsides-to-deep-learning; Tiernan Ray, Google Ponders the Shortcomings of Machine Learning, ZDNet (Oct. 20, 2018), https://www.zdnet.com/article/google-ponders-the-shortcomings-of-machine-learning; Michael A. Alcorn et al., Strike (with) a Pose: Neural Networks Are Easily Fooled by Strange Poses of Familiar Objects, Paper Presented at the 2019 Conference on Computer Vision and Pattern Recognition (Nov. 28, 2018), https://arxiv.org/pdf/1811.11553.pdf.

16. Adrian Colyer, Concrete Problems in AI Safety, Morning Paper (Nov. 29, 2017), https://blog.acolyer.org/2017/11/29/concrete-problems-in-ai-safety (citing Dario Amodei et al., Concrete Problems in AI Safety (June 21, 2016), https://arxiv.org/pdf/1606.06565.pdf).

17. Adrian Colyer, The Surprising Creativity of Digital Evolution, Morning Paper (Mar. 30, 2018), https://blog.acolyer.org/2018/03/30/the-surprising-creativity-of-digital-evolution (citing Joel Lehman et al., The Surprising Creativity of Digital Evolution: A Collection of Anecdotes from the Evolutionary Computation and Artificial Life Research Communities (Mar. 9, 2018), https://arxiv.org/pdf/1803.03453.pdf). For a similar discussion applied to blockchain applications, see Trent McConaghy, Can Blockchains Go Rogue?, Ocean Protocol (Feb. 27, 2018), https://blog.oceanprotocol.com/can-blockchains-go-rogue-5134300ce790.

David K.A. Mordecai, PhD, lead investigator/principal scientist at RiskEcon® Lab for Decision Metrics and visiting scholar at NYU’s Courant Institute of Mathematical Sciences, is a vice chair of the ABA SciTech AI & Robotics and Nanotechnology Committees. The author acknowledges helpful comments from Matthew Henshon, Nicholas Joseph, and Samantha Kappagoda.