Immigration Legislation and AI
In 2017, the Immigration and Refugee Protection Act was amended to include a section on Electronic Administration. The provisions related to AI systems are as follows:
Decision, determination, or examination by automated system
186.1 (5) For greater certainty, an electronic system, including an automated system, may be used by the Minister to make a decision or determination under this Act, or by an officer to make a decision or determination or to proceed with an examination under this Act if the system is made available to the officer by the Minister.
Requirement to use electronic means
186.3 (2) The regulations may require a foreign national or another individual who, or entity that, makes an application, request or claim, submits any document or provides information under this Act to do so using electronic means, including an electronic system. The regulations may also include provisions respecting those means, including that system, respecting the circumstances in which that application, request or claim may be made, the document may be submitted, or the information may be provided by other means and respecting those other means.
Together, these subsections authorize decision-making by automated systems and allow regulations to require that applicants use electronic means once Immigration, Refugees and Citizenship Canada (IRCC) implements them.
Automated Decision Systems in Canada
The Government of Canada disclosed its intention to use AI in its report Responsible Artificial Intelligence in the Government of Canada – White Paper Series, which sets out the objective of using AI technologies to improve administrative decision-making processes. The purpose of an automated decision system is to either assist or replace personnel. IRCC is increasing the automation of its services in response to the growing volume of temporary resident applications, which include Study Permits, Work Permits, and Temporary Resident Visas for visitors. The goal of automating these tasks is to increase efficiency and reduce the processing time of applications.
A 2018 IRCC pilot program used an automated decision-making system for temporary and permanent residence applications from China and India. Under this program, low-risk applications were approved without review by immigration officers. The AI system made positive eligibility decisions using rules derived from past officer decisions. IRCC found that low-risk applications from China were processed 87% faster using advanced analytics. These results are promising because an increase in efficiency allows for quicker service. However, IRCC acknowledged that contextual reasoning and fraud detection remain tasks best suited to immigration officers.
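IRCC has not published the rules used in the pilot, but a rules-based triage of this kind can be sketched in a few lines. In the sketch below, every criterion, field name, and threshold is hypothetical; the point is the structure, in which the system can only approve, and any file that fails a rule is routed to an officer rather than refused.

```python
# A minimal sketch of rules-based triage for eligibility decisions.
# All criteria and thresholds are hypothetical; IRCC's rules are not public.
from dataclasses import dataclass

@dataclass
class Application:
    valid_passport: bool
    prior_refusals: int
    funds_shown: float
    funds_required: float

def triage(app: Application) -> str:
    """Apply rules distilled from past officer decisions: grant a positive
    eligibility decision only when every low-risk criterion is met;
    otherwise route the file to an officer for full review."""
    if (app.valid_passport
            and app.prior_refusals == 0
            and app.funds_shown >= app.funds_required):
        return "positive eligibility decision"
    return "refer to officer"  # the system never refuses on its own

print(triage(Application(True, 0, 12000.0, 10000.0)))  # auto-approved
print(triage(Application(True, 1, 12000.0, 10000.0)))  # officer review
```

This one-way design is consistent with the pilot as described: the system grants only positive eligibility decisions, leaving refusals and everything else to human officers.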
On April 1, 2020, the Treasury Board implemented the Directive on Automated Decision-Making. This policy responded to regulatory and ethical concerns. Its objective is to ensure that automated decision systems are deployed in a manner that reduces risk and leads to efficient, accurate, consistent, and interpretable decisions under the law. The Algorithmic Impact Assessment is a mandatory risk assessment tool for AI designers that prescribes a course of action based on their answers to two sets of questions, one addressing risk and the other mitigation. The Directive requires that an assessment be completed at the beginning of the design phase of an automated decision system project. There are four “impact assessment levels”:
- Level 1: The decision will likely have little to no impact; decisions will often lead to impacts that are reversible and brief.
- Level 2: The decision will likely have moderate impacts; decisions will often lead to impacts that are likely reversible and short-term.
- Level 3: The decision will likely have high impacts; decisions will often lead to impacts that can be difficult to reverse and are ongoing.
- Level 4: The decision will likely have very high impacts; decisions will often lead to impacts that are irreversible and perpetual.
These levels indicate the likelihood and degree of impact that the system is expected to have on the rights of individuals or communities, the health or well-being of individuals or communities, the economic interests of individuals, entities, or communities, and the ongoing sustainability of an ecosystem. After an impact level is determined, AI designers must follow the level-specific requirements that are assigned to their project. There are impact level requirements prescribed for peer review, notice, human involvement in the decision-making process, result explanations, training, contingency planning, and approval for system operations. Projects with higher impact levels have more onerous requirements.
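The Treasury Board publishes the Algorithmic Impact Assessment's actual scoring scheme; purely to illustrate the mechanics of questionnaire-driven levelling, the sketch below maps scores to an impact level using invented thresholds and an invented mitigation discount. It should be read as a hypothetical, not as the AIA's formula.

```python
# A minimal sketch of mapping Algorithmic Impact Assessment answers to an
# impact level. Thresholds and the mitigation discount are hypothetical;
# the actual AIA uses its own published scoring scheme.
def impact_level(risk_score: int, mitigation_score: int) -> int:
    """Combine raw risk and mitigation scores into a level from 1 to 4.
    Here, strong mitigation discounts up to half of the raw risk."""
    adjusted = risk_score - min(mitigation_score, risk_score // 2)
    if adjusted < 25:
        return 1
    if adjusted < 50:
        return 2
    if adjusted < 75:
        return 3
    return 4

# A project with substantial raw risk but strong mitigation measures may
# land at a lower level, and therefore face lighter requirements.
print(impact_level(risk_score=60, mitigation_score=20))  # -> 2
```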
Ethical Considerations for Automated Decision Systems
There are concerns that the government’s use of automated decision-making systems will infringe on individual constitutional rights. Human rights violations may occur when public institutions rely on AI for law enforcement and administrative decision-making. The Canadian Charter of Rights and Freedoms (the Charter) guarantees that:
Fundamental Freedoms
2. Everyone has the following fundamental freedoms:
(a) freedom of conscience and religion;
(b) freedom of thought, belief, opinion and expression, including freedom of the press and other media of communication;
(c) freedom of peaceful assembly; and
(d) freedom of association.
Life, liberty and security of person
7. Everyone has the right to life, liberty and security of the person and the right not to be deprived thereof except in accordance with the principles of fundamental justice.
Search or Seizure
8. Everyone has the right to be secure against unreasonable search or seizure.
Equality Rights
15. (1) Every individual is equal before and under the law and has the right to the equal protection and equal benefit of the law without discrimination and, in particular, without discrimination based on race, national or ethnic origin, colour, religion, sex, age or mental or physical disability.
Facial recognition AI uses images to create a biometric profile known as a “feature vector”. These systems rely on databases containing large numbers of feature vectors, often gathered from the internet. A search involves uploading an image, from which a feature vector is created and compared against the feature vectors in the database. A sufficiently high “similarity score” between vectors forms the basis for a match. An American study found that facial recognition AI in the United States produced false positive matches at a higher rate for Asian and African American persons, with Black women experiencing the highest rate of false positives. These errors can arise from poor image quality, aging, or similar features amongst individuals.
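The matching step can be made concrete with a short sketch. Below, random 128-dimensional vectors stand in for real biometric embeddings, and the identity names and threshold are hypothetical; the point is that a single similarity threshold decides what counts as a match.

```python
# A minimal sketch of feature-vector matching. Random vectors stand in
# for embeddings extracted from images; the threshold is hypothetical.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two feature vectors, in [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def find_match(probe, database, threshold=0.8):
    """Return the identity whose stored vector is most similar to the
    probe, but only if the similarity score clears the threshold."""
    best_id, best_score = None, -1.0
    for identity, vector in database.items():
        score = cosine_similarity(probe, vector)
        if score > best_score:
            best_id, best_score = identity, score
    return best_id if best_score >= threshold else None

rng = np.random.default_rng(0)
db = {"person_a": rng.normal(size=128), "person_b": rng.normal(size=128)}
probe = db["person_a"] + rng.normal(scale=0.1, size=128)  # noisy re-capture
print(find_match(probe, db))  # "person_a"
```

The threshold directly trades false negatives against false positives: lowering it returns more matches, including the kind of demographically skewed false positives documented in the study above.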
A police officer who acts on a lead from a false positive may violate constitutional protections. In Canada, the search or detention of an individual based on algorithmic bias would likely breach their Charter rights. There are severe immigration consequences for people who are flagged by decision-making software because of mistaken identity, including rejected immigration applications, false allegations, and detention. Inherent biases in AI decision-making can extend beyond race, ethnicity, and sex to marginalize protected groups when other factors are used in determinations.
AI Tools for Lawyers
AI tools marketed to lawyers can be grouped into six categories: document management, document analytics and generation, e-discovery, expertise automation, legal research, and predictive analytics. Document management AI reviews documents in seconds while avoiding the inaccuracies that arise from human error. Similarly, document analytics and generation tools assist with drafting contracts and litigation documents; they use machine learning to analyze contracts, review due diligence, and abstract clauses from agreements.
E-discovery software analyzes large numbers of documents according to search criteria, identifying relevant documents far faster than manual searches. Expertise automation commoditizes legal knowledge and answers questions that would normally require meetings between clients and their lawyers. Legal research tools are being developed by publishers to provide lawyers with answers to questions of law. Predictive analytics AI estimates likely outcomes, such as the result of a hearing, based on information from a databank of prior decisions.
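As an illustration of the predictive analytics category, the sketch below fits a toy classifier to hypothetical features of past hearings; the features, data, and model choice are invented for the example and do not describe any commercial product.

```python
# A minimal sketch of outcome prediction from prior decisions.
# Features, data, and labels are hypothetical and purely illustrative.
from sklearn.linear_model import LogisticRegression

# Each row encodes a past hearing: [years_of_delay, prior_refusals,
# has_counsel (0/1)]; the label records whether the appeal succeeded.
X = [
    [1, 0, 1],
    [4, 2, 0],
    [2, 1, 1],
    [5, 3, 0],
    [1, 1, 1],
    [3, 2, 0],
]
y = [1, 0, 1, 0, 1, 0]  # 1 = appeal allowed, 0 = appeal dismissed

model = LogisticRegression().fit(X, y)

# Estimated probability that a new matter (2 years' delay, 1 prior
# refusal, represented by counsel) results in a successful appeal.
new_case = [[2, 1, 1]]
print(model.predict_proba(new_case)[0][1])
```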
Lawyers’ Professional Responsibility
Ethical dilemmas may arise from the use of AI in legal practice. AI technologies can affect a lawyer’s duties and obligations, particularly those concerning the preservation of the solicitor-client relationship. The Law Society of Ontario requires that legal professionals adhere to the Rules of Professional Conduct. When deciding whether to use AI tools and services, lawyers must consider their duties of professional competence and confidentiality:
Competence
3.1-1 In this rule, "competent lawyer" means a lawyer who has and applies relevant knowledge, skills and attributes in a manner appropriate to each matter undertaken on behalf of a client including…
(b) investigating facts, identifying issues, ascertaining client objectives, considering possible options, and developing and advising the client on appropriate courses of action, …
(e) performing all functions conscientiously, diligently, and in a timely and cost-effective manner.
3.1-2 A lawyer shall perform any legal services undertaken on a client's behalf to the standard of a competent lawyer.
Confidential Information
3.3-1 A lawyer at all times shall hold in strict confidence all information concerning the business and affairs of the client acquired in the course of the professional relationship and shall not divulge any such information unless
(a) expressly or impliedly authorized by the client;
(b) required by law or by order of a tribunal of competent jurisdiction to do so;
(c) required to provide the information to the Law Society; or
(d) otherwise permitted by rules 3.3-2 to 3.3-6.
When an AI system lacks appropriate security measures, confidentiality can be compromised. Client data may be targeted by cyber criminals in an “AI attack”. These attacks involve manipulating an AI system to change its behavior and breach data security. Attackers can alter, damage, or steal information by exploiting inherent vulnerabilities in the underlying algorithms. Safeguards that can mitigate this risk include assessing the likelihood of attack, IT reforms that reduce system vulnerability, and incident response plans.
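The mechanics of such an attack can be illustrated on a deliberately simple model. In the sketch below, the classifier, its weights, and the input are all invented; it shows the evasion pattern in which a small, targeted perturbation of the input flips the model's decision.

```python
# A minimal sketch of an evasion-style "AI attack" on a toy linear
# classifier. Weights and inputs are illustrative, not a real system.
import numpy as np

w = np.array([1.0, -2.0, 0.5])   # classifier weights
b = 0.25                          # bias term
x = np.array([0.4, 0.3, 0.8])    # a legitimate input, classified positive

def classify(v: np.ndarray) -> int:
    """Linear classifier: returns 1 if w.v + b > 0, else 0."""
    return int(np.dot(w, v) + b > 0)

# Fast-gradient-style perturbation: for a linear model, the gradient of
# the score with respect to the input is simply w, so nudging the input
# against sign(w) pushes the score across the decision boundary.
eps = 0.4
x_adv = x - eps * np.sign(w)

print(classify(x))      # prints 1: the original input is accepted
print(classify(x_adv))  # prints 0: a small, targeted change flips it
```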
With respect to professional competence, the requirement to perform functions in “a timely and cost-effective manner” may encourage the use of AI as an element of this duty. The Rules direct lawyers to consider possible options, and in some cases an AI tool may be the best option for advancing a client’s objective.
The Brazilian Experience
Brazil is developing AI to ease the burden on its court system. VICTOR is a tool for the Supreme Federal Court that reads extraordinary appeals and identifies whether they raise themes of “general repercussion”, a threshold requirement for the court to hear an appeal. The AI uses data from digitized documents to make its determinations. The court’s goal is to automate the textual analysis of case law. VICTOR completes in five seconds tasks that normally take half an hour. SOCRATES is an AI system for Brazil’s Superior Court of Justice that groups new cases raising similar issues so they can be judged in blocks. It also screens unrelated cases to bar their entry to the court. SOCRATES 2 is under development; it will provide judges with the elements necessary to adjudicate a case, including a description of the parties and precedent on the subject matter. The success of these initiatives may encourage other nations to follow Brazil’s lead, but concerns about algorithmic bias remain.
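Neither court has published its methods in detail, but grouping similar cases for block judgment can be illustrated with a standard text-similarity sketch; the case summaries and threshold below are hypothetical and do not represent VICTOR's or SOCRATES's actual approach.

```python
# A minimal sketch of grouping similar filings by text similarity.
# Case summaries and the threshold are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

cases = [
    "appeal concerning pension indexation for retired workers",
    "dispute over indexation of retirement pension benefits",
    "criminal appeal on admissibility of wiretap evidence",
]

# Vectorize the summaries and compare every pair by cosine similarity.
tfidf = TfidfVectorizer().fit_transform(cases)
similarity = cosine_similarity(tfidf)

# Group cases whose pairwise similarity clears a (hypothetical) threshold.
threshold = 0.3
for i in range(len(cases)):
    for j in range(i + 1, len(cases)):
        if similarity[i, j] >= threshold:
            print(f"cases {i} and {j} may belong to the same block")
```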
AI Regulation
The crux of ethical concerns about AI is that people may be subject to biased decisions, technical errors, and data theft. Transparency is a preliminary issue for the use of AI. While human decision-makers can explain their decisions, the decisions of AI systems cannot be interpreted in the same way. This communication gap makes it difficult to challenge the decisions of automated systems and could lead to violations of procedural fairness.
The Personal Information Protection and Electronic Documents Act (PIPEDA) is the foundation of privacy protection at the federal level in Canada. It came into force in 2001, well before the emergence of modern AI technologies, and requires legislative amendment to sufficiently address developments in AI. Experts suggest, among other reforms, that the Office of the Privacy Commissioner of Canada be granted the authority to issue financial penalties and binding orders.
The Law Commission of Ontario published the report Regulating AI: Critical Issues and Choices, which offers suggestions for overcoming the legal and ethical concerns that arise from the use of sensitive data. The Commission advocates proactive law reform to regulate AI. It recognized that the Directive on Automated Decision-Making is a good start, but that without a provincial regulatory framework there is a risk of under-regulation. The report suggests guidelines to structure AI regulation, including:
- Baseline requirements for all government AI, irrespective of risk.
- Strong protections for AI transparency, including disclosure of both the existence of a system and a broad range of data, tools and processes used by a system.
- Mandatory “AI Registers.”
- Mandatory, detailed, and transparent AI or algorithmic impact assessments.
- Explicit compliance with the Charter and appropriate human rights legislation.
- Data standards.
- Access to meaningful remedies.
- Mandatory auditing and evaluation requirements.
- Independent oversight of both individual systems and government use of AI and administrative decision systems generally.
There are currently no procedural fairness protections governing AI systems used by public institutions outside the federal jurisdiction. A framework that incorporates the report’s suggestions would provide a foundation for provincial regulation. The Commission advocates broad engagement between AI program designers and other groups, including policymakers, legal professionals, and affected communities. Open communication creates the opportunity for equal access to information and participation in AI decision-making.
Conclusion
The pursuit of efficiency and precision drives the demand for AI services. While the potential benefits of AI are vast, the automation of administrative decision-making processes raises procedural fairness concerns. There are also professional responsibility considerations in the use of AI in legal practice. Structured regulation can preserve rights and mitigate the tension between a lawyer’s duties to their client and the use of efficient AI systems. Regardless of a lawyer’s decision to use AI, an understanding of these technologies has become essential to remain professionally competent. In the context of immigration law, it is imperative that applicants have recourse to human decision-makers to review negative decisions that can affect their lives permanently.