AI systems should be designed responsibly from the start, applying the principles of privacy by design and privacy by default, or, if you will, “ethics by design.” In practice, this means implementing adequate technical and organizational measures and procedures, proportionate to the type of system being designed or implemented, to ensure that data subjects’ privacy and personal information are respected. Developers should assess and document expected and potential impacts on individuals and society at large throughout an AI project’s life cycle, and should identify specific requirements for fair and ethical use. Moreover, while the use of AI is to be encouraged, it should not come at the expense of human or individual rights. This includes respecting data protection and privacy rights, among them the right of access, the right to object to processing, and the right to erasure, and guaranteeing, where applicable, an individual’s right not to be subject to a decision based solely on automated processing where that decision significantly affects them. In any event, individuals should always have the right to challenge decisions made by AI systems.
Given the foregoing, where should an organization that wishes to acquire and use an AI system begin, and what should it review before implementing that system? The following checklist is intended as a starting point, not an exhaustive compendium, for some of the legal and ethical considerations involved. Expanding on the European Commission’s existing Ethics Guidelines for Trustworthy AI, it takes into account emerging AI applications and the legal issues they raise. Although the checklist is based largely on European and Canadian privacy law requirements, the considerations it covers are applicable in any jurisdiction; it remains a work in progress.