Ontario AI Guidance: A Roadmap for Compliance?

In January 2026, the Ontario government took a major step by publishing its Principles for the Responsible Use of Artificial Intelligence (the “Guidance”). The Guidance was jointly developed by the Ontario Privacy Commissioner and the Ontario Human Rights Commission, with the stated intent of clarifying the legal obligations of public sector organizations that use artificial intelligence. The Guidance is also useful for private sector organizations because it reflects the approach that government authorities will likely take when defining private sector obligations regarding AI use. This note summarizes the Guidance and the principles of AI use it sets out.


How Do We Define AI?

Before turning to the principles, it is important to address how the Guidance defines AI and AI systems. Most people use the term AI to refer to large language models like ChatGPT, or to some undefined instance in which a computer system produces an output. According to the Guidance, the term AI should be construed more broadly, so that it also captures kinds of software that are not usually classified as AI. For the purposes of the Guidance, spam filters and automated decision-making systems are AI. This broad definition means that the Guidance’s principles will apply to organizations’ use of many systems that would not ordinarily be considered AI.


Principles of AI Use

The Guidance outlines the following core principles associated with AI use:


  1. Valid and Reliable: Before deployment, an AI system must be tested against independent testing standards. The testing must produce objective evidence that the AI system can fulfill its task accurately and for a specified period of time. The AI system must continue to be tested throughout its life cycle.

  2. Safe: Organizations must ensure that the AI systems they use are safe for human physical health, mental health, economic security, and the environment. Before an AI system is deployed for a new task or context, it must be tested to determine whether it can be used safely in that task or context.

  3. Privacy Protective: The development of an AI system must anticipate and mitigate the risks the system may pose to the privacy of relevant individuals or groups and to any associated personal information. AI systems must be designed to protect personal information from unauthorized access. They must use only the minimum amount of personal information necessary to achieve their intended purpose, and must incorporate measures to mitigate the potential for bias. Individuals must be informed when, how, and why their personal information is being used. Where appropriate, individuals should be given the opportunity to access and correct personal information that is stored or used by an AI system.

  4. Human Rights Affirming: Developers, providers, and users of AI systems must ensure that their AI tools do not infringe the human rights of persons protected under a human rights ground.

  5. Transparent: It must be clear when AI systems are being used, how they work, and why they are being used.

  6. Accountable: There must always be human oversight of AI systems. Organizations involved in developing, providing, or using AI must be prepared to provide plain language documentation that explains how they are ensuring their use of AI is compliant with legal requirements.


Next Steps for Organizations

In light of the principles outlined in the Guidance, organizations seeking to proactively manage the legal risks connected to AI use should consider the following:

  1. Identify each of the tools they use that fall under the definition of AI used by the Guidance;

  2. Review and determine the actual and potential risks posed by the listed AI tools;

  3. Appoint specific people to be responsible for ensuring the organization’s use of AI tools is compliant with the Guidance; and

  4. Establish mechanisms to address questions or concerns raised by employees and clients about organizational AI use.

By proactively establishing mechanisms to comply with the Guidance, organizations can reduce much of the uncertainty connected with their use of AI systems.
