Despite the significant benefits that AI systems can offer users and society in general, any organization deploying an AI system must protect core values such as fairness, transparency, and privacy by design. In operation, many AI systems exhibit inherent bias, which can lead to discrimination against certain individuals or groups. Unexplainable AI system decisions also raise fundamental questions of accountability, not only with respect to privacy and data protection law but also with respect to liability in the event of errors and harm to individuals. Given ongoing concerns about the possible malicious use of AI and the related risks to privacy and data protection, prospective purchasers should, prior to acquiring any AI system, consider critical guiding principles including accountability, transparency, and intelligibility.
AI systems should be designed responsibly from the very start, applying the principles of privacy by design and privacy by default (or, if you will, "ethics by design"). Practically, this includes implementing adequate technical and organizational measures and procedures, proportionate to the type of system being designed or implemented, to ensure that data subjects' privacy and personal information are respected. Developers should assess and document expected and potential impacts on individuals and society at large throughout an AI project's entire life cycle and identify specific requirements for fair and ethical use. Moreover, while the use of AI is to be encouraged, it should not occur at the expense of human or individual rights. This includes respecting data protection and privacy rights, including the right of access, the right to object to processing, and the right to erasure, and guaranteeing, if applicable, an individual's right not to be subject to a decision based solely on automated processing if the decision significantly impacts them. Regardless, individuals should always have the right to challenge AI system decisions.
Given the foregoing, where should an organization that wishes to acquire and use an AI system begin, and what should it review before implementing that system? The following checklist may be of assistance and is intended to serve as a starting point, not an exhaustive compendium, for some of the legal and ethical considerations involved. Expanding on the European Commission's existing Ethics Guidelines for Trustworthy AI, this checklist considers emerging AI applications and their concomitant legal issues.1 Although the checklist is largely based on European and Canadian privacy law requirements, the considerations it raises are applicable in any jurisdiction, and the checklist itself remains a work in progress.
1. European Commission, “Ethics Guidelines for Trustworthy AI - Independent High-Level Expert Group on Artificial Intelligence” (Apr. 8, 2019), https://ec.europa.eu/digital-single-market/en/high-level-expert-group-artificial-intelligence.