Artificial intelligence (AI) has infiltrated nearly every aspect of our daily lives, from self-driving cars on Ontario roads to AI-driven medical devices in Toronto hospitals. The technology is so embedded that Toronto now ranks among the top 10 North American cities for ChatGPT usage. While AI promises efficiency and progress, it also raises a pressing legal question: who is responsible when AI causes personal injury?
In traditional personal injury cases, liability typically rests with individuals, such as car drivers, employers, landlords, and property owners. But as AI becomes more commonplace and autonomous, assigning responsibility becomes increasingly complicated. If a self-driving car crashes or a medical AI system misdiagnoses a patient, should liability rest with the manufacturer, the programmer, the operator, or the AI itself? What if a person is seriously injured by following cooking instructions generated by ChatGPT? What if an autonomous floor-cleaning device dispenses the wrong amount of floor polish, leaving a slippery surface on which someone suffers a slip and fall injury? Or what if an autonomous smart lawnmower accidentally runs over a user’s foot and cuts off a toe?
This isn’t just hypothetical. Across Canada and around the world, courts are grappling with these issues. And every day, more potential scenarios continue to develop.
The Growth of AI in Everyday Life
AI has exploded from labs into the real world, permeating every part of day-to-day life:
Self-Driving Cars: Since 2016, Ontario has permitted the testing of autonomous vehicles. Magna Canada is currently testing self-driving cars on the streets of downtown Toronto, and Waymo, Tesla, and other self-driving taxi services are expected to expand into various Canadian cities in the near future.
Healthcare AI: Toronto’s leading hospitals, like Sunnybrook, Toronto General Hospital, Princess Margaret Hospital, and Mount Sinai Hospital, now use AI-driven tools in radiology, diagnostics, and patient monitoring.
Workplace AI: Numerous warehouses across the GTA, including those operated by Amazon and Walmart, use AI-controlled robots to transport items, stack shelves, schedule shifts, and interact with employees. These AI-powered robots and systems can perform many tasks more quickly and efficiently than humans.
Consumer AI: Everyday consumer products, like voice assistants, robotic vacuums, delivery drones, and autonomous vehicles, rely on AI to perform tasks automatically, often more effectively than a human could. Examples include iRobot’s Roomba, the Husqvarna Automower, and ChatGPT.
As artificial intelligence systems and AI-enabled products continue to replace roles formerly performed by humans and modify daily job functions, personal injury law will continue to be tested in new ways.
A Brief Overview of Traditional Personal Injury Law
Personal injury cases generally require proving four elements:
Duty of care: the defendant had a responsibility to act with care.
Breach: the defendant failed to meet that responsibility.
Causation: that failure directly caused harm to an individual.
Damages: a measurable loss, like medical costs, lost income, or pain and suffering.
When the bad actor is an AI system, the path to liability becomes more complex and is subject to debate.
Why Does Artificial Intelligence Complicate Liability?
Artificial Intelligence introduces three main challenges that need to be factored into every liability claim.
Autonomy: Unlike simple machines, AI can continue to learn and adapt independently, behaving in ways that go far beyond what its designers initially anticipated.
Shared Responsibility: In the event of an accident caused by AI, multiple parties may be responsible, like the software developer, the manufacturer, the owner, and the operator.
Opacity: The lack of transparency in how artificial intelligence systems arrive at their decisions or predictions further complicates assigning liability. This is especially true for systems powered by deep learning and large language models. With traditional software, humans write the code, and every rule yields a specific result, making it easy to trace the logic step by step and see how outcomes are formed. With modern artificial intelligence systems, decision-making occurs within complex networks comprising countless parameters. These systems continuously learn from data, adjusting their internal connections in ways that are not easily interpretable or predictable, even by their creators, as the sketch after this list illustrates.
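To make the opacity problem concrete, here is a minimal, purely illustrative Python sketch. Every function name, threshold, and weight in it is hypothetical, invented for explanation only; nothing here is drawn from any real vehicle or product. The point is the contrast: the rule-based function can be read and defended line by line, while the learned version reaches its answer through numeric weights that explain nothing on their own.

```python
import math

# Traditional software: the decision logic is explicit and auditable.
# (The 0.5 multiplier is a made-up rule, purely for illustration.)
def rule_based_brake(distance_m: float, speed_kmh: float) -> bool:
    """Brake if the obstacle is closer than a human-chosen safety margin."""
    return distance_m < speed_kmh * 0.5  # a rule a person wrote and can explain in court

# Modern AI: the "logic" is numbers learned from data, not rules anyone wrote.
# A toy two-layer network; real systems have millions or billions of weights.
WEIGHTS_1 = [[0.8, -1.2], [0.3, 0.9]]   # learned values, not designed ones
WEIGHTS_2 = [1.5, -0.7]                 # no single weight "means" anything

def learned_brake(distance_m: float, speed_kmh: float) -> bool:
    x = [distance_m / 100.0, speed_kmh / 100.0]  # normalize the inputs
    hidden = [math.tanh(sum(w * xi for w, xi in zip(row, x))) for row in WEIGHTS_1]
    score = sum(w * h for w, h in zip(WEIGHTS_2, hidden))
    return score > 0.0  # why did the score cross zero? the weights cannot say

print(rule_based_brake(20.0, 60.0))  # True, and the reason is traceable: 20 < 30
print(learned_brake(20.0, 60.0))     # False here, but the "reasoning" is opaque
```

In a dispute, the first function gives a court a clear story about what the designer intended; the second gives only a pile of numbers, which is one reason AI cases tend to demand expert evidence.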
What Are Some Real-World Toronto-Based Scenarios of AI Personal Injury Incidents?
Self-Driving Cars on Toronto Roads
Picture a passenger sitting in the driver’s seat of a self-driving car travelling along the 401. The passenger’s hands are off the wheel and their attention is off the road as the vehicle begins to move from the collector lanes to the express lanes. It makes a miscalculation and slams into another car. Who is responsible for the accident? The passenger? The manufacturer of the car? The maker of the AI software in the vehicle?
Currently, Ontario courts would likely hold the passenger liable, as they had a duty to remain alert and able to take over driving if necessary while the car was in self-driving mode. There may also be an element of product liability if the AI system malfunctioned.
AI in Toronto Healthcare
An individual goes to a Toronto hospital for a health scan, where an AI program fails to detect an early cancer. Because the cancer goes undetected, treatment is delayed and the patient’s condition worsens. Here, responsibility may extend to the manufacturer for a flawed product, or to the radiologists and the hospital if they relied too heavily on the artificial intelligence without adequate expert oversight.
AI in Toronto Workplaces
In a large Barrie storage facility, a fault on the floor causes an AI-powered robot used to move items to malfunction, striking and seriously injuring a worker. Liability could rest with the employer (under workplace safety laws, since the accident occurred in a work environment), the equipment manufacturer, or the third-party service provider responsible for maintaining the robot.
AI in Toronto Homes
A homeowner uses a smart lawnmower to mow their front lawn. While operating within a geo-fenced area of the yard, the mower glitches and stops. When the homeowner investigates, it starts up again without warning, cutting off one of their fingers. Here, liability could extend to the manufacturer or the software developer if a defect caused the mower to restart unexpectedly.
Statistics Show the Risks
While still emerging, data highlights the stakes:
Human error, including distracted driving and other risky behaviour, is cited as a factor in 94% of serious car accidents. In theory, autonomous vehicles that eliminate most human driving decisions could prevent a large share of those crashes, resulting in safer roads, fewer injuries, and fewer personal injury claims. This would mean safer streets for drivers, passengers, and pedestrians throughout Toronto.
RAND Corporation studies found that even if autonomous vehicles are only 10% safer than human drivers, tens of thousands of injuries and thousands of lives could be saved annually. Autonomous cars may also operate more consistently across weather conditions and could eventually communicate with each other, creating safe buffer zones that further enhance safety.
The Canadian Medical Protective Association has warned doctors that relying on AI diagnostic tools without additional follow-up screening may lead to missed or false diagnoses and create new liability risks.
Globally, the World Health Organization notes that roughly 1 in 10 hospital patients suffers harm, and poorly monitored artificial intelligence tools could increase that risk.
Down the Rabbit Hole: AI, Chatbots, and Self-Harm
A newer, more sensitive question involves artificial intelligence systems like ChatGPT, Google Gemini, DeepSeek, and Claude, and whether the companies behind them could be liable for harm caused by their outputs.
Imagine a confused, stressed Toronto teenager using ChatGPT to seek health or mental health advice. As the teen confides more in the chatbot, the conversation and its advice grow increasingly dark. If ChatGPT gives harmful or reckless recommendations that contribute to self-harm, should liability exist? Are the programmers liable? What about the operator of the service, if it failed to safeguard the system with proper flagging?
What if the chatbot advises on the best knot to use in a hanging? Or tells the teenager to “keep things only between each other” rather than to seek in-person help that might ultimately have saved their life?
Some high-profile cases have already raised alarms. Recently, Matt and Maria Raine sued OpenAI, the company behind ChatGPT, alleging that the chatbot contributed to and assisted in the suicide of their son, Adam. In Canada, no lawsuit has yet tested these theories, but questions persist about whether tech companies could be held responsible for negligent design or inadequate safety measures.
With Artificial Intelligence, potential avenues of liability could include:
Negligent Design – If AI developers fail to incorporate reasonable safeguards, like blocking harmful prompts or flagging conversations that head down dangerous paths, they may be liable if a user harms themselves.
Failure to Warn – If users are not adequately warned about the unpredictable nature of AI behaviour and the risks of engaging with AI chatbots, companies may face liability when harm results.
Product Liability – Courts may begin to consider AI chatbots as “products,” with liability for defects that cause foreseeable harm.
What Are the Challenges in Proving Responsibility?
As ChatGPT and other services become increasingly commonplace, new challenges continue to emerge. The most pressing question is how to prove causation between an AI’s output and someone’s self-harm. Did the AI encourage the action, or did it simply fail to provide proper safeguards? And what about freedom of speech protections for tech companies? Where does the fault lie?
The answers remain unclear, but as artificial intelligence becomes more widely used and integrated into all parts of our daily lives, including mental health apps and legal advice, expect courts to be compelled to confront these questions.
Implications for Ontario Victims
For Ontarians, the rise of AI has two key implications:
- New evidence opportunities: Artificial intelligence systems often log data that can serve as robust evidence in lawsuits, such as sensor logs from self-driving cars, chatbot conversation histories, and other usage records.
- Greater complexity: Proving liability becomes more complicated as it may require technical experts in computer science, data ethics, and engineering alongside traditional medical experts.
How Can a Toronto Personal Injury Lawyer Help with Cases Involving AI?
Toronto personal injury lawyers will play an essential role in AI-related cases, advocating and fighting on behalf of anyone seriously injured. They will help by:
- identifying liable parties (whether human or corporate);
- retaining expert witnesses to explain how an artificial intelligence system failed;
- challenging insurers who may argue that AI-related harms were unforeseeable and therefore not compensable;
- ensuring victims receive compensation for medical costs, lost income, pain and suffering, and long-term care.
Bergel Magence is at the forefront of helping seriously injured individuals get the justice and compensation they deserve after an accident. We continue to research artificial intelligence and personal injury, and can help anyone who may have been injured in an accident caused by AI.
What Is Expected with AI and the Future?
Toronto is a hub for AI-related companies and think tanks, including the Vector Institute for Artificial Intelligence and numerous AI startups. With Toronto at the forefront of the AI boom, Ontario courts, regulators, and lawyers will need to strike a balance between innovation and accountability. AI may be complex, but the principle remains the same: if a person is injured due to negligence, whether by a human or through an AI system, they are entitled to compensation.
AI and Accountability – The Need for Toronto to Adapt
Artificial intelligence has immense potential to assist humans in numerous ways, but it also poses a significant risk of injury. From self-driving cars to medical tools to conversational informational systems like ChatGPT and Claude, AI is already influencing health, safety, and daily decisions.
The big question is not whether AI itself should be held accountable, since it is not a legal person and cannot be, but whether the companies and individuals that create and manage AI systems should bear responsibility when negligence or defective design causes harm.
As Ontario courts and lawyers continue to grapple with the implications of AI and personal injury law, more accidents are likely to occur. If you or a loved one has been hurt in an accident caused by artificial intelligence or a situation where AI played a role, contact AI personal injury lawyers at Bergel Magence today. We can figure out where the AI negligence lies and help you and your family get your life back on track. We will fight on your behalf, allowing you to focus on your rehabilitation. With over 50 years of experience representing seriously injured individuals, our award-winning law firm will provide exceptional service and support for you.
Contact us today at 416-665-2000 or visit us online at www.bergellaw.com.