

My primary research interest is in artificial intelligence (AI) and law specifically, and in law and technology generally. My research sheds light on the intersection of AI and law, from the legal issues arising from the adoption of new and emerging technologies in the justice system to issues of racial bias in AI systems.

Mitigating Race, Gender and Privacy Issues in AI Facial Recognition Technology

Recent years have witnessed the increasing use of Artificial Intelligence Facial Recognition Technology (AI-FRT) in both the private and public sectors. The use of AI-FRT has been plagued by issues of privacy, as well as racial and gender bias, particularly relating to Black people and people of color.


The privacy issues arise, among other things, from the collection and use of big data in the development and deployment of AI-FRT. The racial and gender issues stem from training data in which racial and gender demographics are disproportionately represented, which has resulted in the technology misidentifying, or failing to identify, individuals from particular gender and racial groups. Research studies have shown that AI-FRT is up to 99% accurate in identifying White male faces, while accuracy for people of color, especially women of color, can be as low as 65%. The race, gender, and privacy impacts of AI-FRT could affect the Charter rights of Canadians, e.g., the right to personal liberty, as well as the right to freedom from discrimination on the basis of race or gender.

This research project focuses on identifying and examining the race, gender, and privacy issues primarily associated with the development of AI-FRT, and its utilization by both private and public sectors in Canada. Additionally, the research aims to develop a framework and guidelines to address the impacts on race, gender, and privacy resulting from the development and deployment of AI-FRT by private sector developers in Canada. Another key objective is to explore potential reforms to the Personal Information Protection and Electronic Documents Act (PIPEDA) to legislatively address the race, gender, and privacy impacts arising from the private sector’s development and deployment of AI-FRT.

The research is funded by a $50,000 grant from the Office of the Privacy Commissioner of Canada.

AI in the Criminal Justice System

Recidivism risk assessment is a crucial component of the criminal justice system. Algorithmic tools based on artificial intelligence technology are now increasingly used at various stages of the criminal justice process, including pre-trial, sentencing, and post-sentencing.


These tools play a significant role in determining an accused or offender’s eligibility for bail, the appropriate criminal sentence, parole eligibility, and security classification while incarcerated. By relying on information about an individual’s background and utilizing big data, this technology aims to predict the risks posed by someone with the offender’s background, as well as their likelihood of reoffending.

The use of algorithmic risk assessment technology in the criminal justice system has sparked significant concerns. Firstly, the methodologies employed by these technologies to assess the likelihood of recidivism are often considered proprietary trade secrets and are not disclosed for scrutiny by the court, the accused, or the prosecution. Secondly, these technologies partly base their assessments on general factors that are similar to the accused’s background but not necessarily specific to the individual accused. Thirdly, research studies, including those examining systems like COMPAS in the United States, have revealed biases in these technologies. For instance, COMPAS was found to be nearly twice as likely to incorrectly flag Black defendants as high-risk for future criminal activity as it was White defendants, while it was also more likely to incorrectly label White defendants as low-risk. Consequently, there is substantial criticism regarding the technology’s tendency to perpetuate existing inequalities and stereotypes.

Grounded in critical race theory, this research aims to critically analyze the propensity of algorithmic risk assessment technologies to perpetuate both implicit and explicit biases, particularly against minority groups such as Black and Indigenous offenders. These groups are disproportionately represented in the Canadian criminal justice system.

The study will delve into the complexities and challenges posed by the utilization of algorithmic risk assessment within this system. Furthermore, the research intends to extend beyond mere analysis, proposing the development of a comprehensive legal framework tailored for the application of artificial intelligence technology in recidivism risk assessments in Canada.

Generative AI in Legal Practice

Advancements in artificial intelligence (AI) are revolutionizing the legal profession. AI is increasingly being utilized in various aspects of legal practice, including legal research, eDiscovery document review, and case law analysis.


One of the most notable transformations in AI usage is the emergence of generative AI, which has been popularized by systems like ChatGPT. Generative AI represents a branch of AI technology focused on creating systems capable of generating a wide range of new and original content, spanning texts, images, music, videos, and more. These generative AI systems are trained on extensive datasets, enabling them to generate novel content that closely resembles the content found in their training data.

One of the most transformative impacts of AI in legal practice is its ability to automate many routine tasks traditionally performed by lawyers. Along with these benefits, however, this transformation brings a host of ethical challenges previously unseen in the legal profession. These challenges have become increasingly apparent in recent legal matters, where lawyers have used generative AI to prepare documents for legal proceedings. In some cases, these documents included fabricated cases generated by AI, underscoring the urgent need to identify and address the ethical implications that arise from the use of generative AI in law practice.

This research project aims to investigate the ethical challenges posed by Generative AI in the legal domain. These challenges include concerns about uploading confidential information into open-source AI technologies, the potential of Generative AI to create bogus cases, as well as fake audio and video documents that could be submitted in court proceedings. Additionally, it examines the risk of unauthorized practice of law through the use of chatbots for providing legal advice, among other issues. It is crucial for the profession to develop guidelines ensuring the responsible and ethical use of this technology. This research intends to develop such ethical guidelines, which will contribute to the establishment of best practices. These guidelines will enhance transparency, fairness, and accountability in the application of Generative AI within the legal profession.

This research is funded by the Ontario Bar Association Foundation Chief Justice of Ontario Fellowship in Research Award.

Environmental Impact of AI Technologies

Cloud computing, commonly known as ‘the cloud,’ involves the use of interconnected network servers via the internet to store, process, access, and manage electronic data. The emergence of cloud computing technologies has led to a significant increase in the number of data centers.


These data centers host extraordinarily large amounts of data for technology-based companies such as Amazon, Facebook, Google, Netflix, YouTube, and other businesses.

Data centers are the backbone of cloud computing, and their energy footprint is particularly staggering. These centers consume a tremendous amount of electrical energy, making their operation heavily dependent on electricity generation. The daily energy consumption of a single cloud computing data center can be equivalent to that of about 65,000 homes. In 2013, data centers in the United States alone consumed approximately 91 billion kilowatt-hours of electricity, according to the NRDC (2015). This accounted for about 2 percent of the country’s total energy consumption for that year. In Canada, data centers represent approximately 1% of national electricity consumption, as reported by DCD Intelligence (2013) and Natural Resources Canada (2016).

Beyond energy consumption, cloud computing data centers also have other environmental impacts. These include significant water usage, pollution from backup generators, and pollution resulting from the exploitation of natural resources used in the production of cloud computing hardware. Furthermore, there are environmental concerns associated with the disposal of this hardware at the end of its life. The low cost of energy and the cold climate in Canada continue to attract large cloud computing data centers, bringing along their environmental burdens.

The environmental impacts of cloud computing technologies are substantial, yet they have surprisingly not been the subject of extensive research in the existing literature. This research aims to thoroughly examine the various environmental impacts associated with cloud computing technologies. It seeks to answer critical questions such as: What is the actual environmental cost of cloud computing technologies? How do the operations of data centers impact electricity generation and distribution, as well as water laws and policies, both in Canada and globally? What policy framework needs to be developed to ensure environmentally sound deployment and operation of cloud computing data centers in Canada?

Unauthorized Practice of Law in the Age of Artificial Intelligence

The rapid growth of artificial intelligence technology has led to its application across a wide array of human endeavors, including the legal sector. AI technology is increasingly being utilized to provide various legal services, tasks that were traditionally the exclusive domain of lawyers.


This surge in AI’s use for legal services is challenging the conventional practice of law, which has been predominantly reliant on human practitioners. Consequently, this shift has sparked regulatory concerns related to the unauthorized practice of law. Traditional regulations governing legal practice are now being questioned, as unlicensed individuals and corporations leverage AI technology to provide legal services more effectively and cost-efficiently. This research, which is still in its early developmental stages, aims to examine the regulatory issues emerging from the increasing use of artificial intelligence technology in the provision of automated legal services.

Selected Research Papers

Gideon Christian, “The New Jim Crow: Unmasking Racial Bias in AI Facial Recognition Technology within the Canadian Immigration System” (Forthcoming in McGill Law Journal)

Gideon Christian, “Legal Framework for The Use of Artificial Intelligence (AI) Technology in the Canadian Criminal Justice System.” Keynote Paper, From Inequality to Justice: Law and Ethics of AI & Technology Conference, Schulich School of Law, Dalhousie University, Halifax, Nova Scotia, June 16, 2023. Forthcoming Spring 2024, Canadian Journal of Law and Technology (Special Edition).

Gideon Christian, “Coded Bias: Decoding Racism in Artificial Intelligence Technologies.” Keynote Paper, Coding, Computational Modelling, & Equity in Mathematics Education Symposium, Brock University, St. Catharines, Ontario, April 28, 2023.

Gideon Christian, “A ‘Century’ Overdue – Revisiting the Doctrine of Spoliation in the Age of Electronic Documents” (2022) 59:4 Alberta Law Review 901–918.

Gideon Christian, “Predictive Coding: Adopting and Adapting Artificial Intelligence in Civil Litigation” (2019) 97:3 Canadian Bar Review 1–40.

Gideon Christian, “Ethical and Legal Issues in eDiscovery of Facebook Evidence in Civil Litigation” (2017) 15 Canadian Journal of Law and Technology 335.

Gideon Christian, “A New Approach to Data Security Breaches” (2009) 7:1 Canadian Journal of Law and Technology 149.