The role of AI in the immigration space

This article was authored by Randolph Hahn, partner with Garson Immigration Law in Toronto.

In recent months much attention has been paid to the proliferation of new iterations of artificial intelligence (“AI”) and to how it will change our world.

According to the bestselling historian Yuval Harari, co-author of a recent op-ed in the New York Times, “We have summoned an alien intelligence. We don’t know much about it, except that it is extremely powerful and offers us bedazzling gifts but could also hack the foundations of our civilization.”

One of the pioneers of AI (a Torontonian), sometimes called “the Godfather of AI,” recently quit his job at Google so he could speak out about the risks of AI.

Recently a group of industry leaders (including top executives at AI companies) signed a statement saying that “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

But notwithstanding the concerns and uncertainties (to put it mildly), the adoption of AI into our daily lives continues apace. This includes its use in the legal world. For example, a recent submission to the House of Commons Standing Committee on Industry and Technology mentions “robot judges” in Estonia for small claims matters.

Not surprisingly many have begun to wonder how AI may or may not affect their own circumstances, including how they earn their livelihoods.

My CILA colleague Nicolas Simard-Lafontaine recently had an interview with ChatGPT, a conversation he (Nicolas) described as “one of the most fascinating and insightful conversations I’ve had this year.”

So, what will all this mean for Canadian immigration law?

For Canadian immigration applications there are three broad touchpoints: the applicant, the counsel who assists with the application, and the decision maker who adjudicates the application.

As for AI and applicants (or what to do when your client is a robot) – well that is a discussion for another day.

As for counsel, many Canadian immigration lawyers are exploring how to use AI in their practices. Of course, they need to be careful. One lawyer unhappily discovered that when AI is relied on for legal research and drafting, it can sometimes simply make stuff up (including providing assurances it is not making stuff up).

So that leaves how AI is or is not being used by a decision maker.

Immigration, Refugees and Citizenship Canada (“IRCC”) is using a system, Chinook, which it describes as a Microsoft Excel-based tool developed by IRCC to increase efficiency in the processing of some applications. IRCC asserts that Chinook relies on neither AI nor advanced analytics for decision making, and that officers are the ones who make the final decisions. IRCC assures us that Chinook “…does not fundamentally alter the decision-making process.”

More recently, IRCC has acknowledged that some approval decisions for some electronic Travel Authorizations (eTAs) and routine passport renewal applications are fully automated.

The courts are now beginning to address the issues related to AI in immigration decision making, though it is early days.

Last year, in Ocran v. The Minister of Citizenship and Immigration, the Federal Court considered a case challenging the refusal of a study permit. The visa officer had refused the application based on travel history, family ties, the purpose of the visit, and personal assets and financial position.

The Court dismissed the application for judicial review and decided that under the applicable law the decision of the visa officer could stand.

At the end of the decision, however, the Court acknowledged an issue that the Respondent, IRCC, had asked it to resolve: whether the Global Case Management System (“GCMS”) notes (provided as part of the reasons) were deficient because spreadsheets were not retained and not included in the Certified Tribunal Record. The Court declined to resolve the issue because, as it explained, the issue was not in direct dispute between the parties.

It is true that the applicant was not taking issue with the measures and models that the visa officer adopted in the review of the application.

But the Respondent IRCC had indicated in a memorandum that “An issue…unrelated to the substantive merits, is also raised in this matter. The Officer used a software-based tool to assist in processing the application. This tool creates a spreadsheet report to enable an officer, through the streamlining of administrative processes, to more efficiently view and process multiple applications. The spreadsheet is not retained once processing is completed and is, therefore, not produced in the Certified Tribunal Record (“CTR”). However, the information from the spreadsheet has been retained and presented in the CTR in another format. As such, the CTR is not deficient.”

In a supplementary memorandum the Respondent referenced its duty of candor to the Court and explained how Chinook was used to assist in processing the Applicant’s application. It explained that there are risk factors and word flags “that are presented to decision-makers if applicable.”

But in response to the Respondent’s affidavit, the Applicant did argue that “The Chinook Model used to assess the Applicant’s application smacks of a tool that was hurriedly put in place to principally reduce the time within which applications are processed without a commensurate need to ensure that such applications are assessed on their merit.”

However, given IRCC’s assurances that officers are the ones making the final decision and that Chinook does not fundamentally alter the decision-making process, it might be expected that IRCC would simply say: nothing to see here folks, let’s move along so we can continue to process more applications, more efficiently, in less time.

Well maybe we should hold on for a moment.

As outlined in a presentation last year by Mario Bellissimo to the House of Commons Standing Committee on Citizenship and Immigration, IRCC has been using AI to triage the intake of temporary visa applications based on eligibility. The triaging is based on the complexity of the applications; there are “rules” developed by senior officers that determine the complexity, though the rules are confidential and not shared with the deciding officers or the public. There is a concern that bias (even if only unintentional) could be embedded in the tools developed.

A more recent presentation by Mario Bellissimo to the House of Commons Standing Committee on Industry and Technology references reports acknowledging that the technology does sort applications by eligibility.

All of which brings us to the recent case of Haghshenas v Canada (Citizenship and Immigration). The case involved judicial review of a decision by an officer at the visa post in Ankara, who had refused a work permit application because the officer was not satisfied that the Applicant would leave Canada at the end of his authorized stay. According to the decision, the Applicant submitted that the decision was based on artificial intelligence generated by Microsoft in the form of “Chinook” software.

Brown, J. explained that:

[24] As to artificial intelligence, the Applicant submits the Decision is based on artificial intelligence generated by Microsoft in the form of “Chinook” software. However, the evidence is that the Decision was made by a Visa Officer and not by software. I agree the Decision had input assembled by artificial intelligence, but it seems to me the Court on judicial review is to look at the record and the Decision and determine its reasonableness in accordance with Vavilov. Whether a decision is reasonable or unreasonable will determine if it is upheld or set aside, whether or not artificial intelligence was used. To hold otherwise would elevate process over substance.

[…]

[28] Regarding the use of the “Chinook” software, the Applicant suggests that there are questions about its reliability and efficacy. In this way, the Applicant suggests that a decision rendered using Chinook cannot be termed reasonable until it is elaborated to all stakeholders how machine learning has replaced human input and how it affects application outcomes. I have already dealt with this argument under procedural fairness, and found the use of artificial intelligence is irrelevant given that (a) an Officer made the Decision in question, and that (b) judicial review deals with the procedural fairness and or reasonableness of the Decision as required by Vavilov.

So, according to Brown, J., since an officer made the decision, whether or not artificial intelligence was used is neither here nor there, as holding otherwise “would elevate process over substance.”

It is not clear why Brown, J. would be so dismissive of “elevating process over substance.” Even a cursory review of the jurisprudence in immigration law reveals that process is as fundamental to decision making as substance.

In any event, though we may still be in the early days of understanding and responding to how AI is used in immigration decision making, it is folly to presume that AI (or Chinook, even if one accepts IRCC’s assertion that it is not technically AI) is simply some neutral tool entirely controlled by a human decision maker. That would be a triumph of hope over experience.

There will be more cases.

Immigration lawyers will need to think long and hard about whether and how to challenge decisions in court when AI is relied upon.

And courts will need to give serious attention to grappling with the new realities that are now a part of immigration decision making.
