September 25, 2020 Feature

Chatbots Meet COVID-19: Why Lawyers Should Pay Attention

By Karen Silverman and Heather Deixler
Smiling robot interacting with humans in friendly manner for customer service purposes online or on the telephone.

filo/DigitalVision Vectors via Getty Images

As the COVID-19 pandemic has unfolded, so has the widespread use of artificial intelligence (AI) in battling the disease.1 From the early days of detecting the pandemic, to preventing the spread of the virus, to responding to and treating those who have become ill, AI technologies have been introduced at each stage. The Organization for Economic Co-operation and Development (OECD), an international organization focused on establishing evidence-based international standards and finding solutions to social, economic, and environmental challenges, developed a policy for using AI to help combat COVID-19, noting that “AI technologies and tools play a key role in every aspect of the COVID-19 crisis response.”2 The OECD policy highlights the importance of ensuring that AI systems are “trustworthy and aligned” with OECD AI principles, which means that they are “transparent, explainable, robust, secure and safe” and that actors involved in the development and use of such systems remain accountable.3 When AI is deployed in healthcare, these tenets are especially crucial: humans must remain accountable for AI systems, and users must understand how their data are being collected and the ways in which they will be used and shared. To illustrate the challenges, this article provides an overview of some of the ways AI has been used in combating the COVID-19 pandemic and then works through a realistic but hypothetical scenario that raises a number of legal and ethical questions.


Detecting the Early Signs of a Spreading Infection Using AI

Well before “social distancing” and COVID-19 became universal themes for 2020, certain AI companies had already been alerting their clients to unusual spikes in flu-like symptoms. BlueDot, a Canadian company, is reported to have been among the first in the world to identify the emerging COVID-19 risk in Hubei Province and to sound the alarm to its clients through its “Insights” platform.4 BlueDot uses machine learning to power its “global early warning system,” which is designed to locate potential infectious disease threats and provide alerts to its clients based on its collected threat intelligence.5 BlueDot reports that its intelligence is based upon over 40 pathogen-specific datasets that reflect disease mobility and outbreak potential.6
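To give a concrete, if greatly simplified, sense of what an early warning system of this kind does, the following sketch (in Python) flags days on which flu-like case counts spike well above a trailing baseline. It is a toy illustration under our own assumptions; BlueDot’s actual models are proprietary and far richer, and the function name, window, and threshold here are hypothetical.

from statistics import mean, stdev

def spike_days(daily_counts, window=14, z_threshold=3.0):
    """Flag days whose count sits more than z_threshold standard deviations
    above the trailing `window`-day baseline."""
    alerts = []
    for i in range(window, len(daily_counts)):
        baseline = daily_counts[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and (daily_counts[i] - mu) / sigma > z_threshold:
            alerts.append(i)
    return alerts

# Two weeks of ordinary counts followed by a sharp spike on the last day (index 14).
counts = [20, 22, 19, 21, 23, 20, 18, 22, 21, 19, 20, 23, 22, 21, 75]
print(spike_days(counts))  # [14]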

Preventing the Spread of COVID-19

As the reality of the COVID-19 pandemic set in, multiple AI technologies were quickly employed for a range of preventative measures. Chief among them are contact-tracing applications, which have proliferated to monitor and track the spread of COVID-19 in real time. In April, Apple and Google announced a collaboration on COVID-19 contact tracing technology, which includes application programming interfaces (APIs) and operating system–level technology to assist in the implementation of contact tracing apps.7 A number of governments have launched their own contact-tracing apps, including Singapore’s TraceTogether app, which launched in March 2020 and was reported to be the first nationwide deployment of a Bluetooth contact tracing solution.8 However, the use of contact-tracing apps has raised a number of privacy concerns around the use of data collected through these apps and the extent to which individuals are willing to subject themselves to government surveillance via mobile apps.9 Alongside the privacy issues is the concern that there will be an overreliance on the technology and that humans may not be sufficiently advised, or required, to remain engaged in design, monitoring, or decision-making.10
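The privacy debate is easier to follow with a rough picture of how these Bluetooth-based apps typically work: phones broadcast short-lived random tokens, log the tokens they hear, and later compare those logs against tokens published after a positive diagnosis. The sketch below illustrates that general pattern only; it is not the Apple/Google or TraceTogether implementation, and the class and function names are hypothetical.

import secrets
import time
from dataclasses import dataclass, field

@dataclass
class Device:
    """A phone that broadcasts short-lived random tokens and logs tokens it hears."""
    seen: dict = field(default_factory=dict)        # token -> time observed
    broadcast: list = field(default_factory=list)   # tokens this device has sent

    def current_token(self) -> str:
        # A fresh random token per interval, so observers cannot track the device over time.
        token = secrets.token_hex(16)
        self.broadcast.append(token)
        return token

    def observe(self, token: str) -> None:
        # Nearby tokens are logged locally; nothing leaves the device at this point.
        self.seen[token] = time.time()

def exposed(device: Device, infected_tokens: set) -> bool:
    """After a positive diagnosis, only the patient's broadcast tokens are published;
    every other device checks locally whether it ever observed one of them."""
    return any(token in infected_tokens for token in device.seen)

# Two devices exchange tokens during a nearby encounter.
alice, bob = Device(), Device()
bob.observe(alice.current_token())
alice.observe(bob.current_token())

# Alice later tests positive and uploads her broadcast tokens.
print(exposed(bob, set(alice.broadcast)))       # True: Bob was in proximity
print(exposed(Device(), set(alice.broadcast)))  # False: this device never saw Alice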

Responding to the COVID-19 Pandemic

AI has been deployed across the board, from drug discovery to population health, in an effort to respond to the COVID-19 pandemic. As some health systems scramble to respond to increased call volumes and overwhelmed emergency departments, they are turning to chatbot solutions to alleviate the administrative burdens of intake and triage.11 A chatbot is a software application that uses AI and natural language processing to simulate and process human conversation.12 By many accounts, these chatbot digital assistants are speeding up initial triage and have increased the efficient use of constrained medical resources during the pandemic.
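For readers unfamiliar with how such a digital assistant is structured, the sketch below shows the basic intake-and-triage pattern: the bot collects structured answers and produces only a routing suggestion for a human clinician, never a diagnosis. It is a minimal, hypothetical illustration; real deployments rely on trained natural language models and clinically validated protocols, and the questions and rules here are our own.

from dataclasses import dataclass

QUESTIONS = {
    "fever": "Have you had a fever of 100.4°F (38°C) or higher?",
    "breathing": "Are you having trouble breathing?",
    "exposure": "Have you been in close contact with a confirmed COVID-19 case?",
}

@dataclass
class IntakeResult:
    answers: dict
    triage_level: str  # a routing hint for a clinician, never a diagnosis

def triage(answers: dict) -> str:
    """Apply simple escalation rules to the collected answers."""
    if answers.get("breathing"):
        return "urgent: route to clinician immediately"
    if answers.get("fever") or answers.get("exposure"):
        return "standard: schedule telehealth assessment"
    return "low: provide self-care guidance and monitoring instructions"

def run_intake(get_answer) -> IntakeResult:
    """get_answer can be any yes/no source: a chat widget, SMS exchange, or voice transcript."""
    answers = {key: get_answer(prompt) for key, prompt in QUESTIONS.items()}
    return IntakeResult(answers=answers, triage_level=triage(answers))

# Scripted answers stand in for a live patient conversation.
scripted = {
    "Have you had a fever of 100.4°F (38°C) or higher?": True,
    "Are you having trouble breathing?": False,
    "Have you been in close contact with a confirmed COVID-19 case?": True,
}
result = run_intake(lambda prompt: scripted[prompt])
print(result.triage_level)  # "standard: schedule telehealth assessment"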

When deployed in the context of patient medical care, AI is typically used as “augmented intelligence” to supplement and enhance professional clinical judgment rather than replace or override it.13 As innovative AI solutions emerge in the marketplace to detect, prevent, and respond to the COVID-19 pandemic, current laws, regulations, and policies raise as many questions as they answer. To illustrate some of the issues related to the use of chatbot systems that may arise, let us consider the following hypothetical chatbot scenario:

An application has been developed that utilizes natural language processing to perform initial patient intake and triage for COVID-19 symptoms and treatment (the App). At this point, the App does not yet have the ability to take images of the patient to analyze for sentiment and/or health characteristics. The App in its current form is used by a team of primary care providers (the Practice) who license the App from its third-party developers and receive the initial “assessment” from the App. The Practice has only one physical office in California but has plans to expand to other U.S. states in the next year. The App is also available for download in both the United States and Europe. The Practice reviews the App’s assessment and uses the information to provide clinical care, as needed. The App is also licensed to other medical facilities in California that use the recommendations gained from the App in their practices. The Practice has been approached by a number of third-party payors, research institutions, and pharmaceutical companies seeking to utilize the information obtained from the App. One of the Practice’s patients has heard from a friend, and then read on the Internet, that his particular combination of symptoms is likely to be alleviated by an otherwise toxic combination of over-the-counter medications. Another patient has been assessed via the App but failed to report his allergy to shrimp. In the meantime, the Practice has asked you to think through some of the potential legal and ethical issues it may face as it implements the App.

Where Is the App Sourcing Its Data From? Are the Data to Be Trusted?

AI is inherently dependent upon the data used to train its algorithms and to feed the working model, and the sourcing of those data becomes even more complicated in the healthcare context. The nature, quality, and robustness of the data will almost certainly affect the quality of the algorithmic results. Concerns have already been raised about potential biases and deficiencies in the reliability of the data used to train algorithms on electronic health record (EHR) information.14 For example, some of the patients whose data were used to train the App may have had limited access to healthcare, fewer diagnostic tests, and inadequate treatment for chronic diseases, leaving their EHRs with too little information to be reliable for training purposes.15
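One modest technical safeguard is to audit training records for completeness before any model is built, so that under-documented records (and the patient groups they cluster in) are surfaced rather than silently learned from. The sketch below is a hypothetical illustration of that idea only; the field names and threshold are our own, and real EHR audits are far more involved.

REQUIRED_FIELDS = ["age", "symptoms", "diagnostic_tests", "chronic_conditions", "medications"]

def completeness(record: dict) -> float:
    """Fraction of required fields actually populated in one patient record."""
    present = sum(1 for f in REQUIRED_FIELDS if record.get(f) not in (None, "", []))
    return present / len(REQUIRED_FIELDS)

def audit(records: list, threshold: float = 0.8) -> dict:
    """Split records into usable and under-documented sets before any training run."""
    usable = [r for r in records if completeness(r) >= threshold]
    sparse = [r for r in records if completeness(r) < threshold]
    return {"usable": usable, "sparse": sparse,
            "sparse_rate": len(sparse) / max(len(records), 1)}

# A high sparse_rate, concentrated in particular patient groups, would signal the
# access-to-care bias described above.
sample = [
    {"age": 41, "symptoms": ["cough"], "diagnostic_tests": ["PCR"],
     "chronic_conditions": ["asthma"], "medications": ["albuterol"]},
    {"age": 67, "symptoms": ["fever"], "diagnostic_tests": [],
     "chronic_conditions": None, "medications": []},
]
print(audit(sample)["sparse_rate"])  # 0.5 in this toy example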

What Can the Practice Do with the App’s Data?

Understanding the Complex Legal Landscape

NB: Because the hypothetical scenario set forth above involves a California physician practice collecting information from California residents, the analysis below is based primarily on applicable federal and California state laws and regulations. We note, however, that other states may have similar laws now or may adopt them in the future.

In order to understand what rights the Practice has to use and disclose the data obtained from the App, we need to understand where the Practice’s patients are located to apply the appropriate legal regime. As noted above, the Practice currently has only one office in California, but it has plans to expand to other states in the next year. As a healthcare provider that collects personal information from California residents, including individually identifiable health information, the Practice needs to consider the patchwork of potentially applicable data privacy and security laws, including:

  1. the federal Health Insurance Portability and Accountability Act of 1996, as amended (HIPAA),16
  2. the California Confidentiality of Medical Information Act (CMIA),17
  3. the California Online Privacy Protection Act of 2003 (CalOPPA),18
  4. the California Consumer Privacy Act of 2018 (CCPA),19
  5. California’s new Bot Disclosure Law, and
  6. the European Union General Data Protection Regulation (the GDPR).

The interaction of all of these requirements is not always obvious or intuitive, and in any event requires careful analysis. For example, the CCPA has certain exemptions for “protected health information” (PHI) collected by covered entities and business associates regulated by HIPAA, and “medical information” governed by the CMIA, as well as wholesale exemptions for providers of healthcare regulated by the CMIA and covered entities regulated by HIPAA. However, the entity exemptions only apply to information other than PHI or medical information to the extent the provider or covered entity maintains patient information as required under the CMIA and HIPAA. The CCPA would therefore still apply to personal information collected by the Practice, including through the Practice’s website or otherwise, that may not constitute PHI, medical information, or “patient information.”

To the extent that the Practice is a covered entity under HIPAA, the Practice would be permitted to use and disclose any PHI obtained through the App for treatment, payment, and healthcare operations purposes without patient authorization. “Healthcare operations” is broadly defined under HIPAA to include quality assessment and improvement activities, including outcomes evaluation and development of clinical guidelines, provided that “the obtaining of generalizable knowledge is not the primary purpose of any studies resulting from such activities.”20

Also, under the California Bot Disclosure Law, the Practice would need to inform patients interacting with the App that the chatbot is a bot to the extent the chatbot is considered to be interacting with patients “to incentivize a purchase or sale of goods or services in a commercial transaction.”21 This statute makes it unlawful for any person to use a bot to communicate or interact online with another person in California “with the intent to mislead the other person about its artificial identity for the purpose of knowingly deceiving the person about the content of the communication in order to incentivize a purchase or sale of goods or services in a commercial transaction.”22 Even if the Practice determined that the law did not apply to the App’s interactions with patients, it would be wise for the Practice to be transparent with its patients and disclose that the chatbot is not a human.

Because the App is also available for download and use in Europe, the Practice needs to consider whether the information it receives from the App constitutes “personal data” subject to regulation under the GDPR. The GDPR defines personal data as “any information relating to an identified or identifiable natural person.”23 The GDPR applies to the processing of personal data, which is broadly defined to include the “collection, recording, organization, structuring, storage, adaptation or alteration, retrieval, consultation, use, disclosure by transmission, dissemination or otherwise making available, alignment or combination, restriction, erasure or destruction” of such data.24 Any business or person that processes the personal data of individuals within the European Economic Area (EEA) and is either established in the EEA (e.g., has an office or employees located within the EEA) or otherwise offers goods or services to, or monitors the behavior of, individuals within the EEA is covered by the GDPR. Because the Practice is only treating patients in the United States, and is not otherwise marketing its services to individuals located within the EEA or monitoring the behavior of individuals within the EEA, its activities would not appear to be captured by the GDPR, even though the App may be available to individuals located in Europe. The App’s developer, however, likely would be considered to be offering goods or services to, and monitoring the behavior of, individuals located within the EEA and would therefore be subject to the GDPR.

What Data Use Has the Patient Consented to?

The Practice needs to carefully consider the terms of its patient consent and ensure that it is transparent regarding the manner in which patients’ information will be collected, used, and disclosed. As a “covered entity” under HIPAA, the Practice will need to provide its patients with a Notice of Privacy Practices that informs patients how their PHI will be used and disclosed, and it will want to include language regarding any secondary uses of such PHI. As a business operating online in California, the Practice needs under CalOPPA to post an online privacy policy on the App and its website, and it will need to carefully analyze whether it is collecting personal information under the CCPA that is not otherwise exempt and therefore must be disclosed. To the extent the Practice wants to enter into data-sharing agreements with third-party payors, research institutions, and pharmaceutical companies, the Practice may need to revisit its patient consent, online privacy policy, and Notice of Privacy Practices to ensure that patients have been provided with proper notice and the ability to opt in or opt out, where necessary.

Can the Practice Share Its Data?

In the hypothetical, the Practice has been approached by a number of third-party payors, research institutions, and pharmaceutical companies seeking to utilize the information obtained from the App. Because the Practice is regulated under HIPAA, the CMIA, and the CCPA, the Practice will need to take into account the rules for sharing patient information under each of these regimes.

For instance, HIPAA prohibits the “sale” of PHI, which is generally defined as a disclosure of PHI by a covered entity or business associate, where the “covered entity or business associate directly or indirectly receives remuneration from or on behalf of the recipient of the protected health information in exchange for the protected health information.”25 Once PHI has been de-identified per HIPAA standards, the de-identified data are no longer considered to be PHI and therefore no longer subject to HIPAA.26

Thus, if the Practice wants to enter into a data-sharing agreement with a pharmaceutical company, any PHI would generally need to be de-identified in accordance with the stringent HIPAA standards. There are two methods for de-identification under HIPAA:

  1. the Safe Harbor Method, which requires (i) the removal of eighteen specific identifiers and (ii) the absence of actual knowledge by the covered entity or business associate that the remaining information could be used alone or in combination with the other information to identify the individual, and
  2. the Expert Determination Method, which is a formal determination by a qualified “expert” as defined in 45 C.F.R. § 164.514(b).27

To the extent that the shared dataset also contains personal information subject to the CCPA that is not protected health information or otherwise “patient information,” such as website visitor information, the Practice also needs to consider the standards for de-identification under the CCPA, which differ from the HIPAA standards.28 Information that has been de-identified under HIPAA may remain “personal information” under the CCPA (e.g., when HIPAA de-identified data include certain provider-level identifiers about California residents), and neither the Expert Determination Method nor the Safe Harbor Method under HIPAA necessarily results in data that meet the CCPA de-identification standards.
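To make the Safe Harbor Method more concrete, the sketch below strips direct identifier fields from a record and generalizes dates to the year. It is illustrative only: the field names are hypothetical, a real scrub must address all eighteen identifier categories (including the ZIP code and age rules) across the full dataset, and passing this sketch would not by itself satisfy HIPAA, much less the CCPA standards discussed above.

SAFE_HARBOR_FIELDS = {
    "name", "street_address", "city", "zip_code", "phone", "fax", "email",
    "ssn", "medical_record_number", "health_plan_id", "account_number",
    "license_number", "vehicle_id", "device_serial", "url", "ip_address",
    "biometric_id", "photo",
}
DATE_FIELDS = ("date_of_birth", "admission_date", "discharge_date")

def scrub_dates(record: dict) -> dict:
    """Safe Harbor generally permits the year but not full dates."""
    out = dict(record)
    for name in DATE_FIELDS:
        if out.get(name):
            out[name] = str(out[name])[:4]  # keep the year only
    return out

def safe_harbor_scrub(record: dict) -> dict:
    """Drop identifier fields outright, then generalize the remaining dates."""
    stripped = {k: v for k, v in record.items() if k not in SAFE_HARBOR_FIELDS}
    return scrub_dates(stripped)

patient = {"name": "Jane Doe", "zip_code": "94105", "date_of_birth": "1961-03-09",
           "symptoms": ["fever", "cough"], "medical_record_number": "MRN-2231"}
print(safe_harbor_scrub(patient))
# {'date_of_birth': '1961', 'symptoms': ['fever', 'cough']}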

When Does the Chatbot Risk Becoming a Practitioner of Medicine?

When do the chatbot’s actions cross over into the practice of medicine? In general, the chatbot should not cross the line into diagnosing patients or clinical decision-making; that must be done by a licensed medical professional. California (other states may differ) prohibits the practice of medicine by any unlicensed person under California Business & Professions Code section 2052(a).29 A key factor here is the extent to which the App is conducting an assessment of the patient’s symptoms in order to diagnose and/or treat the patient. Is the App performing a technical assessment of the patient as a “medical assistant”?30 Or is the App conducting an assessment of patient symptoms that could constitute medical decision-making? What if the patient misinterprets the chatbot’s output and proceeds down an errant course of action? As chatbots’ sophistication and ability to perform intake and triage expand, the scope of practice for such chatbots must be carefully considered in order to determine the extent to which these tools are authorized to “practice” or, in the alternative, whether the tools themselves require regulation and licensure.31

What If the App Adds Facial Recognition, Biometric, and/or Sentiment Analysis Features?

Collection, analysis, and use of biometric data (essentially, measurements and other data derived from the body), including facial recognition services, raise a host of new issues. The future promise of AI in healthcare lies in its ability to analyze vast combinations of physical features to aid in diagnosis and treatment, assess mental and emotional health, improve customer service, improve adherence to treatment protocols, and so forth. The risks of AI include (at least) inaccuracy and bias in each of these areas, so before adding such features, additional diligence on quality and functionality will be required. Efforts are already underway to regulate facial recognition services, and the resulting rules will cover both privacy and nondiscrimination requirements in unique ways; progress, however, is halting at best. In practice, it may also be useful for both the App developer and the Practice to consider the likely direction oversight will take (e.g., required third-party audits, human oversight of decision-making, and regular assessments), as well as any special features of medical use, and to draft their agreements to incorporate some of these features.32

Who Is Liable for Outcomes Resulting from Use of the Chatbot?

While beyond the scope of this article to resolve, the use of chatbots as intake or triage assistants raises a multitude of liability questions. Traditional product liability rules distinguish between products that are defective and/or dangerous and those that are used negligently or recklessly.33 Duties of care, of course, can shift depending upon an actor’s relationship, training, and status.34 Products liability concepts may help here, but they will not answer every possible question. For instance, under what circumstances (if any) does the use of AI to augment a physician’s judgment shift responsibility for the accuracy of that judgment between the App and the Practice? How was the App trained, and by whom? Does the App explain its decisions to the physician or to the patient? Is the App learning from other patient inputs? Is the App making triage recommendations or actionable decisions? How rigorously has the Practice conducted diligence on the quality of the App before deploying it, and by what standards? How has it trained its medical staff to interpret the App? What if the App “interviews” a patient who neglects to identify a known allergy? What about an unknown allergy? Does it matter if the Practice is using a paid version of the App or a free one? What if the App recommends one treatment and the Practice overrides the recommendation, to the detriment of a patient? How will the efficiencies and improved outcomes be measured, and how will they be counted to offset occasional errors? We have ways to deal with much of this in a nondigital world, but not all of it. What will need to change to accommodate the introduction of ever-more-autonomous agents?

At this point, it is difficult to say whether litigated outcomes will be the best source of guidance or resolution on these and other difficult questions. Current policymaking on appropriate (and inappropriate) uses of AI-driven tools is still fraught with problems, and fixes seem far off. Thus, businesses, professionals, and governments alike are left to navigate murky waters based upon their best interpretations of best practices. There are resources available for developing responsible AI and data policies, but determining how to make those policies work in practice and in context will demand a very individualized effort. These efforts will require senior leadership in business (and their professional advisors, including the legal community) to engage with and prioritize these questions and to assign management attention to them, well before claims arise.

*****

Today, we are at the very beginning of addressing the myriad issues related to the world of AI discussed above. It is inevitable that AI will change the way every aspect of healthcare is delivered, will challenge how we apply existing legal rules and standards, and will force us to decide where we need to develop new ones. The absence of specific regulations, and the vagaries and inconsistencies among eventual regulations, will leave large gaps for businesses as well as the legal community to navigate, and for a long time to come.

Endnotes

1. For purposes of this article, we use the terms artificial intelligence (AI) and machine learning (ML) interchangeably. In fact, ML is one technique for achieving AI, and the most prevalent in use today.

2. OECD, Using Artificial Intelligence to Help Combat COVID-19 (Apr. 23, 2020), https://read.oecd-ilibrary.org/view/?ref=130_130771-3jtyra9uoh&title=Using-artificial-intelligence-to-help-combat-COVID-19.

3. Id.

4. See BlueDot, https://bluedot.global/.

5. See id.

6. Id.

7. See Press Release, Apple Newsroom, Apple and Google Partner on COVID-19 Contact Tracing Technology (Apr. 10, 2020), https://www.apple.com/newsroom/2020/04/apple-and-google-partner-on-covid-19-contact-tracing-technology/.

8. TraceTogether, https://support.tracetogether.gov.sg/hc/en-sg.

9. J. Bay et al., Gov’t Tech. Agency, Singapore, BlueTrace: A Privacy-Preserving Protocol for Community-Driven Contact Tracing Across Borders (2020), https://go.nature.com/2wwcwg4.

10. M. Zastrow, Coronavirus Contact-Tracing Apps: Can They Slow the Spread of COVID-19, Nature (May 19, 2020), https://www.nature.com/articles/d41586-020-01514-2.

11. K. Ebong, Beyond Screening: How Covid-19 Chatbots Support Patient Navigation and Health Checks, MedCityNews (Apr. 8, 2020), https://medcitynews.com/2020/04/beyond-screening-how-covid-19-chatbots-support-patient-navigation-and-health-checks/?rf=1.

12. What Is a Chatbot, Oracle Digital Assistant, https://www.oracle.com/solutions/chatbots/what-is-a-chatbot/.

13. E. Crigger & C. Khoury, Making Policy on Augmented Intelligence in Health Care, 21 AMA J. Ethics E188 (Feb. 2019), https://journalofethics.ama-assn.org/article/making-policy-augmented-intelligence-health-care/2019-02.

14. M. Gianfrancesco et al., Potential Biases in Machine Learning Algorithms Using Electronic Health Record Data, 178 JAMA Internal Med. 1544 (Nov. 1, 2018), https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6347576/.

15. Id. (citing N.C. Arpey et al., How Socioeconomic Status Affects Patient Perceptions of Health Care, 8 J. Primary Care & Cmty. Health 169 (July 8, 2017), https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5932696/).

16. Health Insurance Portability and Accountability Act of 1996, Pub. L. No. 104-191, 110 Stat. 1936, as amended by the Health Information Technology for Economic and Clinical Health Act, Pub. L. No. 111-005, 42 U.S.C. § 17921 et seq., and their implementing regulations at 45 C.F.R. pts. 160, 162 & 164.

17. Cal. Civ. Code § 56 et seq.

18. Cal. Bus. & Prof. Code §§ 22575–22579.

19. Cal. Civ. Code § 1798.100–.199 (effective Jan. 1, 2020).

20. See 45 C.F.R. § 164.501 (defining “health care operations”).

21. Cal. Bus. & Prof. Code § 17941.

22. Id. §§ 17940–17943.

23. Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (Article 4(1)), 2016 O.J. (L 119) 1.

24. Id.

25. 45 C.F.R. § 164.502(a)(5)(ii).

26. Id. § 164.502(d).

27. Id. § 164.514(b).

28. See Cal. Civ. Code § 1798.140(h).

29. Cal. Bus. & Prof. Code § 2052(a).

30. See Medical Assistants, Med. Bd. of Cal., https://www.mbc.ca.gov/Licensees/Physicians_and_Surgeons/Medical_Assistants.

31. See N. Terry, Of Regulating Healthcare AI and Robots, 18 Yale J. Health Pol’y, L. & Ethics 133, 155 (2019), https://yjolt.org/sites/default/files/21_yale_j.l._tech._special_issue_133.pdf.

32. K. Silverman & A. Ortega, Emerging Legislation on Commercial Uses of Facial Recognition Shows the Work Ahead, World Econ. F. (June 25, 2020), https://www.weforum.org/agenda/2020/06/emerging-legislation-on-commercial-uses-of-facial-recognition-shows-the-work-ahead.

33. Compare Restatement (Third) of Torts: Products Liability § 1 (Liability of Commercial Seller or Distributor for Harm Caused by Defective Products) (May 20, 1997), with Restatement (Third) of Torts: Products Liability § 8 (Liability of Commercial Seller or Distributor of Defective Used Products) (May 20, 1997).

34. See Jud. Council of Cal., Civil Jury Instructions, Series 1200 (2016 ed.), https://www.justia.com/documents/trials-litigation-caci.pdf (providing and explaining the tests under various theories of liability for product-related harms).


By Karen Silverman and Heather Deixler

Karen Silverman is the CEO and founder of The Cantellus Group and a retired partner at Latham & Watkins, LLP. The Cantellus Group and Cantellus Legal, P.C., advise leaders in business and government on how to harness the benefits of AI and other frontier technologies and mitigate their risks. Heather Deixler is counsel in the San Francisco and Silicon Valley offices of Latham & Watkins LLP, where she advises companies operating in the healthcare industry on data privacy and security matters.