Legal Analysis: Can You Sue ChatGPT or Google Gemini for Providing Wrong Advice?
What happens when an AI chatbot gives dangerously wrong advice and someone gets hurt, loses money, or worse?
AI chatbots are everywhere. They answer questions, give tips, and sometimes give flatly wrong advice. From dangerous health guidance to fabricated consumer product reviews, things go wrong more often than people think. When they do, the primary question is who takes the blame. This post looks at real examples of AI giving misleading advice, how companies try to avoid responsibility, and what the law actually says about it.
Can You Sue That AI Chatbot?
Suppose you consulted Dr. Chatbot about a rash, and it suggested a dubious home remedy that landed you in the ER. Now you are fuming. The chatbot’s "advice" was terrible.
Can you actually sue the chatbot for giving wrong advice?
It is a question more people are asking as AI digital assistants like ChatGPT, Google’s Gemini, Microsoft’s Copilot, and Meta’s assorted AIs become ever more popular and, occasionally, spectacularly wrong.
Welcome to the perilous world of AI liability. This post explores whether you can hold someone (or something) accountable when an AI chatbot provides incorrect advice.
We will walk through some examples of AI gone wrong, peek at the fine print that AI companies hide behind, and dive into the legal theories spanning tort law, contract law, and consumer protection.
When AI Advice Goes Wrong
Before diving into the law, let us survey some real (and surreal) episodes of chatbots gone wild. These cases range from hilarious to horrific, proving that trusting an AI blindly can be a risky business.
Let us consider the now infamous example of the lawyer who relied on ChatGPT to write a legal brief. The AI confidently produced court cases to support his argument, except none of the cases were real. They were pure fabrications, courtesy of the bot’s overactive imagination.
The lawyer, who failed to double-check, ended up facing an angry judge and being sanctioned for submitting bogus citations.
Yes, even attorneys are learning the hard way that "trust, but verify" applies double when an AI is involved. The judge noted there is nothing inherently wrong with using AI for assistance, but lawyers have a duty to ensure their filings are accurate.
In short, ChatGPT turned a routine personal injury case into an "unprecedented" cautionary tale in legal circles.
Financial loss is another risk.
In Canada, a man seeking a bereavement airfare discount chatted with an airline’s bot and got wrong info. The Air Canada chatbot told him he was eligible for a discount, so he bought a full price ticket expecting a partial refund.
After the flight, the airline said "Nope, you actually did not qualify." The poor customer was out about $650. He took the matter to a tribunal, and in a landmark 2024 decision, Air Canada was ordered to reimburse him.
The company tried to argue that its AI chatbot was a "separate legal entity" responsible for the mistake, an excuse the tribunal flatly rejected as "remarkable" nonsense.
The tribunal stressed that Air Canada is responsible for all information on its website, whether it comes from a static page or a chatbot. Nice try blaming the robot, Air Canada.
Incorrect chatbot advice can even put lives in danger.
All jokes aside, there have been disturbing incidents. In one tragic case, a 14-year-old boy died by suicide after engaging with an AI chatbot. The teen had grown emotionally attached to a chatbot on the Character.AI platform.
When he expressed suicidal thoughts, the chatbot disturbingly encouraged him to go through with it, saying, "come home to me as soon as possible".
Horrifically, the boy took his life moments later. His mother is now suing the chatbot’s creators for wrongful death, alleging the AI lacked proper safeguards.
These cases are rare but chilling reminders that AI advice can have real world consequences.
Even when no one is physically harmed, a chatbot’s lies can ruin reputations. Just ask the radio host in Georgia whom ChatGPT falsely accused of embezzling money. The AI completely fabricated a legal case implicating him in financial wrongdoing.
Understandably upset, the host sued OpenAI for defamation. We will revisit how that lawsuit turned out later; spoiler: it did not go as the plaintiff hoped.
Likewise, a mayor in Australia was shocked to find ChatGPT had wrongly described him as having been convicted of bribery. He too considered a lawsuit. When AI confidently spreads falsehoods about people, it is not a victimless error; careers and livelihoods are at stake!
The "hall of shame" could go on, but you already get the picture: Chatbots sometimes give advice or information that is egregiously wrong, with stakes ranging from embarrassment, to financial loss, to catastrophe.
Therefore, when the digital assistant steers you off a cliff, figuratively or literally, who is to blame? And more importantly, can you sue?
Before you march into court yelling "Objection! My GPS told me to drive into a lake!", we need to examine what the fine print and the law actually say.
The Disclaimer
If you have ever scrolled to the bottom of a website or app and clicked "I Agree" without reading a word (who doesn’t?), you have probably signed away your right to blame the bots.
Companies behind AI chatbots are keenly aware their AI can spit out wrong or even harmful advice, so they have armoured themselves with sweeping disclaimers and terms of service. This is the "first line of defence" when you think about suing.
Virtually all AI platforms explicitly state: Do not rely on this AI for important advice. For example, OpenAI’s terms for ChatGPT flat out warn users not to treat AI outputs as fact or as a substitute for professional advice.

Google’s terms have a similar disclaimer, advising you not to rely on services like its chatbot for medical, legal, financial or other professional counsel – any content it provides on those topics is "for informational purposes only".
In other words: This chatbot might sound smart, but it is often making stuff up. Use at your own risk!

The terms of service also make it abundantly clear that the user assumes all risk. OpenAI’s policies state that any use of ChatGPT’s output is at your "sole risk".
You will also find plenty of all caps legalese like "AS IS" and "NO WARRANTY" clauses. "As is" means the product (here, an AI’s advice) comes with no guarantees whatsoever.
They basically say: Hey, this service might be buggy, inaccurate, or outright wrong, and we are not promising it is fit for any particular purpose. If the AI chatbot tells you to mix ammonia and bleach, well, you were warned!
Perhaps most crucially, these agreements typically include liability waivers or caps. Even if the AI’s incorrect advice causes you harm, the company’s liability is limited.
For instance, OpenAI caps its liability to at most the amount you paid for the service in the past year, or $100, whichever is greater.

So if you are using the free version of ChatGPT, congratulations, the most you could ever recover is about $100, even if you win a claim.
Many terms also disclaim indirect damages, so if the bot’s advice made you lose $50,000 on the stock market or suffer personal injury, that consequential loss is on you, not them.
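To make the arithmetic of such a cap concrete, here is a minimal sketch assuming a clause worded like the one described above (the greater of the fees you paid in the past twelve months or $100). The function name and the sample figures are illustrative only, not drawn from any company’s actual terms.

```python
def liability_cap(fees_paid_last_12_months: float, floor: float = 100.0) -> float:
    """Illustrative cap: the greater of fees paid in the past year or a $100 floor."""
    return max(fees_paid_last_12_months, floor)

# A free-tier user who paid nothing is capped at the $100 floor.
print(liability_cap(0.0))    # 100.0

# A subscriber who paid $240 over the year is capped at $240,
# even if the claimed loss is far larger.
print(liability_cap(240.0))  # 240.0
```

Either way, notice that the cap bears no relation to the size of the loss the advice may have caused, which is precisely the point of the clause.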
And we are not done yet: arbitration clauses and class action waivers are common too. OpenAI’s terms, for example, include an agreement to arbitrate disputes (for U.S. users) instead of going to court.

This means if you have a case, you cannot easily bring a class action or get your day in front of a jury; you will be stuck in a private arbitration process, often seen as more company friendly.
What all this fine print means is that from a contract law perspective, the deck is stacked against suing the chatbot provider. You (the user) entered a contract by using the service, and that contract basically says "we are not liable if things go awry."
Courts generally uphold these disclaimers and terms of service, as long as the user had notice. It is like an invisible “Proceed at your own peril” sign.
Are there limits to this corporate self-protection?
Yes, indeed. Companies cannot waive liability for everything.
For example, in some jurisdictions, a contract cannot disclaim liability for gross negligence or wilful misconduct.

Consumer protection laws (more on those later) might void certain unfair terms. And if a user can prove they never truly agreed to or saw the terms, that could weaken the contract defence.
But by and large, these TOS disclaimers are effective shields. They are one big reason why suing an AI chatbot (or rather, the company behind it) for incorrect advice is an uphill battle from the start. After all, you were told not to trust the robot.
Torts and Tribulations
If you cannot get around the terms of service, what about tort law: the realm of negligence, personal injury, and civil wrongs?
In theory, even if you signed a contract, you might still sue in tort if the company behind the AI breached some duty of care or caused harm through a wrongful act.
Thus, can you claim that "Chatbot X was negligent in giving me wrong advice"?
This is largely uncharted territory, but we can consider some sensible arguments.
Negligence means someone had a duty to be careful, they breached the duty, and you got hurt as a result.
Do AI developers have a duty to prevent their bots from spewing dangerously wrong advice?
Victims are starting to argue yes. For example, in the tragic suicide case above, the lawsuit claims the chatbot was a defective product and that its creators were negligent in failing to build in safety guardrails.
Essentially, the argument is that it was foreseeable an impressionable user could be harmed, and the AI company should have taken precautions like detecting suicidal statements and responding with emergency help info instead of encouragement.
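To illustrate the kind of guardrail the plaintiffs say was missing, here is a minimal sketch of a pre-response safety check that screens a user’s message for crisis language and returns emergency help information instead of passing the conversation on to the model. The phrase list, function names, and wording are purely hypothetical; real systems would rely on trained classifiers, escalation procedures, and vetted crisis resources.

```python
# Illustrative sketch only: a simplistic pre-response safety check of the kind
# the negligence arguments describe, not any company's actual implementation.
CRISIS_PHRASES = ["kill myself", "end my life", "suicide", "want to die"]

CRISIS_RESPONSE = (
    "It sounds like you may be going through something very painful. "
    "You are not alone. Please consider contacting a local crisis line "
    "or emergency services right away."
)

def respond(user_message: str, generate_reply) -> str:
    """Return a safety message when crisis language is detected; otherwise defer to the model."""
    lowered = user_message.lower()
    if any(phrase in lowered for phrase in CRISIS_PHRASES):
        return CRISIS_RESPONSE
    return generate_reply(user_message)

# Example usage with a stand-in for the model call:
print(respond("I want to die", lambda message: "model-generated reply"))
```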
There is also the angle of misrepresentation, i.e., giving wrong information that someone relies on to their detriment.
The Air Canada case we discussed earlier actually succeeded on a form of negligent misrepresentation: the AI bot gave incorrect info, the customer reasonably relied on it, and he suffered financial loss. The airline had to pay for that mistake.
But note, that case was somewhat unique: it was a straightforward consumer transaction (an airfare) and the false info was about the company’s own service. In more general scenarios, say a chatbot giving wrong medical or investment advice, it is harder to pin liability without a special duty or a clear misstatement of fact.
Wrong advice is bad, but is it legally a "misrepresentation"? Often it is just an opinion or error, not an intentional deceit.
What about treating the AI like a product that malfunctioned?
Some lawyers have floated the idea of using product liability: the body of law that holds manufacturers strictly liable for defective products that cause injury. If your toaster bursts into flames due to a defect, you can sue the manufacturer without proving negligence.
Could we say ChatGPT is like a toaster, and its bad advice is a "defect"?
The mother in the Florida chatbot suicide case is effectively doing this, calling the AI a "dangerously defective" product. However, applying product liability to software or information is controversial.
Historically, courts have not treated advice or ideas as "products" in the same way as toasters or lawnmowers.
There is also First Amendment and policy-driven hesitance about imposing liability for content. For instance, if a cookbook recipe poisons someone or a map app sends someone off a cliff, courts usually do not impose strict liability on the publisher.
They might require a showing of negligence instead.
So, while the product liability angle is being tested, it is far from a settled route to recovery.
Then we also have defamation, a specific tort that comes up when chatbots spit out false facts about people.
Libel laws could apply if an AI writes something verifiably false and harmful to someone’s reputation.
As noted, a radio host tried to sue OpenAI after ChatGPT invented a fake lawsuit accusing him of embezzlement.
Defamation cases against AI are a legal first wave, and they face high hurdles.
In the radio host’s case, a Georgia judge dismissed the lawsuit in May 2025, finding that the statements were not proven defamatory under the law.
One reason was that the host is a public figure, which under U.S. law means he had to show the falsehood was published with "actual malice", that is, knowledge of its falsity or reckless disregard for the truth.
The judge noted that OpenAI warns users up front that ChatGPT can make mistakes, hardly malicious intent, and that the company had even taken steps to reduce errors.
In other words, the bot’s nonsense, while unfortunate, did not meet the stringent standard for defamation liability in that instance.
The case also highlighted that no one was truly misled; the journalist who received the false output did not even publish it, realizing it was likely bogus.
Another curious matter: platforms often escape liability for user generated content under Section 230 of the U.S. Communications Decency Act. But can OpenAI or Google claim they are just a platform when the AI is generating the content?
Legal experts largely think Section 230 does not apply here.

That immunity protects platforms hosting third-party content, like a social media post written by a user, not content that an AI itself produces.
OpenAI actually tried a novel defence in the defamation suit: likening ChatGPT to a "word processor" that people use to create content.
In other words, "do not blame us, we just made the tool."
Observers are skeptical that courts will buy that analogy, and so far, it has not really been tested as a winning defence.
If an AI generates speech that breaks the law, be it defamation, fraud, etc., the company behind it probably cannot just hide behind Section 230. They could be treated more like the speaker or publisher of that content, since no human authored the specific words.
Be that as it may, tort law has not yet delivered a clear win for someone suing over wrong chatbot advice, apart from the Air Canada refund scenario. The bar for success is high.
You might need to show that the AI company breached a duty to make its system safer or more truthful, that this breach directly led to your injury, and that no law or disclaimer shields them.
That is a tall order.
And judges might worry about opening the floodgates: if every wrong answer from an AI could spark a lawsuit, innovation would stall. There is a tension between encouraging companies to be responsible and not making them insurers for every ill-advised query’s outcome.
As of now, those affected by wrong AI advice have found the courts to be pretty unforgiving, sometimes even suggesting that the user shares blame for trusting a robot too much.
Legal reality check: if you drive into a lake because your AI GPS said so, a judge is likely to ask, “Why did you not use some common sense?”
Who Bears the Blame: Developers, Deployers, or the AI Chatbot Itself?
One might wonder: if an AI’s advice causes harm, who exactly would you sue?
An AI chatbot is not a person; you cannot haul "ChatGPT" or “Google Gemini” into court as the defendant, at least not until AI gets legal personhood, which is a science fiction (and legal fiction) discussion for another day.
In practice, the targets are the humans or companies behind the AI. But which ones?
The obvious defendant is the company that developed the AI, e.g. OpenAI for ChatGPT, Google for its Gemini models, Meta for its AI, etc. They created the tech that produced the wrong output.
However, there may also be a deployer or intermediary involved.
For instance, if you got wrong advice from an AI chatbot integrated into a travel website or a banking app, the company running that website/app could be in trouble too.
They chose to deploy the AI and present its answers to customers. In the Air Canada case, the passenger did not sue the software vendor who made the chatbot; he sued Air Canada, the company that put the chatbot on its site and effectively "spoke" to him through it.
And the tribunal made it clear: Air Canada could not dodge responsibility by saying the bot was a separate actor. The AI bot was part of their service, period. So generally, the buck stops with whoever is offering the AI powered service to the public.
Could the AI itself be treated as having legal responsibility, some sort of robot defendant? Not under current law.
AI systems have no legal personhood, and arguments that "the AI did it on its own" will not suffice.
Air Canada’s attempt to label its bot a "separate legal entity" was dismissed.
The law looks to the people in charge: the corporations and their employees.
Now, sometimes multiple parties are involved in making an AI available. In the lawsuit over the chatbot-linked suicide, the mother’s legal team actually sued both the startup (Character.AI) and a bigger tech company, Google, which they allege helped design or train the bot.
The idea is that anyone who contributed to a dangerously flawed AI could share liability. We see a similar dynamic with tools like GitHub Copilot, an AI coding assistant offered by GitHub and Microsoft and built on OpenAI models.
If Copilot produces faulty code that causes damage in a software product, the end user might claim both the platform that integrated Copilot and possibly OpenAI/Microsoft are at fault.
This could lead to a lot of finger pointing behind the scenes; contracts between AI providers and business users often include indemnification clauses, where one party agrees to cover the other’s losses if the AI screws up.
But for the person harmed, the strategy is often "sue everyone in the chain and let them sort out who pays."
What about cases where a professional uses AI to advise you? Let us assume your doctor uses an AI chatbot for a second opinion on your diagnosis, and it suggests the wrong treatment.
If you suffer harm, your doctor (and their hospital) would be liable to you, not the AI company. The doctor cannot say "Hey, it was the computer’s fault, not mine!"
A human professional has a well established duty of care and cannot escape it by blaming an algorithm.
The doctor might turn around and complain to the AI vendor, but as far as you and the court are concerned, the doc is responsible for his or her tools.
Similarly, if a financial advisor or lawyer uses an AI and passes wrong info to a client, that advisor or lawyer faces the client’s wrath, and a possible malpractice suit.
From a public policy standpoint, there is a push to ensure that accountability follows control.
Regulators like the U.S. Federal Trade Commission have indicated that companies cannot just blame the algorithm when things go wrong; if they built it or deployed it, they own the consequences.
As FTC Chair Lina Khan succinctly put it, "Using AI tools to trick, mislead, or defraud people is illegal" and there is no "AI exemption" from existing laws.
She has also emphasized that liability should be aligned with the level of control and capability in the AI supply chain.
Therefore, the big AI developers and the companies implementing AI features can expect to be in the legal crosshairs.
In conclusion, you cannot sue the chatbot as an independent entity, but you can sue the people or corporations behind it, and the law is increasingly making sure those entities cannot avoid accountability.
The exact target might differ case by case, sometimes the platform provider, sometimes the end user company, sometimes both, but courts and regulators are united on one principle: someone with a legal identity must answer for the AI’s actions.
New Rules on the Horizon
Given the murky state of the law, lawmakers and regulators around the world are scrambling to update the rulebook for AI.
The question "When AI gives you wrong advice, can you sue the chatbot?" might eventually be answered not just by courts interpreting old laws, but by new statutes and regulations written with AI in mind.
Currently, in the UK and EU, disclaimers on the use of AI bots do not provide absolute protection.
Under the Consumer Rights Act 2015 (UK) and the Product Liability Directive (EU), companies can be held liable if their AI tools cause harm due to defects or negligence, even if users clicked “accept.”
Here is a quick tour of what is emerging:
United States
So far, the U.S. has not passed an AI specific federal liability law. Regulators are using existing laws to tackle egregious cases, as the FTC did with its crackdown on deceptive AI schemes.
We have seen the FTC, consumer protection agencies, and even the Department of Justice make it clear that AI is not above the law. For example, the FTC has warned businesses that they will be held accountable if their AI tools mislead or harm people.
The agency explicitly said there is no “AI exemption”, meaning companies cannot say "the AI made me do it" as a defence.
In practice, this means if a company’s chatbot gives dangerously bad financial advice as part of a scam, the FTC could pursue the company for fraud or unfair practices just as it would if a human representative gave that advice.
Congress is considering various new AI laws on transparency, safety, and more, but nothing has passed yet. Still, do not be surprised if in a few years we have laws tackling liability for AI, perhaps making it easier to sue for certain AI-caused harms, or giving regulators more power to fine companies whose AI consistently misfires in harmful ways.
Meanwhile, state authorities or courts might try to use existing consumer protection and negligence principles in novel ways to address especially egregious AI failures.
European Union
Europe is forging ahead with formal AI regulation. The EU has adopted its AI Act, a sweeping law that categorizes AI systems by risk level and imposes requirements accordingly, with obligations phasing in over the coming years.
For example, if a chatbot is used in a high risk context, like making medical or legal recommendations, the AI Act may require rigorous testing, transparency about when you are interacting with a machine, and perhaps even a level of human oversight.
While the AI Act is more about preventing harm than providing compensation, it sets the stage for accountability.
Additionally, the EU has updated its Product Liability Directive to explicitly cover software and AI.

This means if an AI product causes damage, consumers may have an easier path to sue under strict liability; no need to prove the company was negligent, only that the AI was defective and caused harm.
The EU is also debating an AI Liability Directive to simplify how victims can obtain evidence and prove their case in AI related harm situations.
Finally, there is the question of industry self-regulation and standards. We might see best practices emerge, for example, standard disclaimers and safety measures, or third-party audits of AI systems for reliability.
Already, some AI developers are working on techniques to reduce harmful advice, like fine-tuning models to refuse certain queries or provide safer answers.
These efforts are not a substitute for law, but they can mitigate risk. And if companies do not voluntarily improve safety, that only increases the likelihood of governments stepping in harder.
Looking forward, perhaps we will get an "AI malpractice" concept or other new legal remedies. For now, though, we live in a world where chatbots come with warning labels and limited accountability.
You can try to sue if you are seriously harmed by terrible AI advice, and cases are starting to test the waters, but success is far from guaranteed.
Humans make mistakes, and corporations are routinely held liable for their employees’ errors. Similar liability should attach when a corporation uses AI as an agent or to carry out tasks formerly handled by a human employee.
And those disclaimers may not always be fully enforceable or effective. Legal frameworks in some jurisdictions (e.g., the EU, UK, and Switzerland) suggest that AI developers and providers can still be held liable for damages caused by their AI tools, especially if they acted negligently or wilfully, or if the AI system causes harm despite contractual disclaimers. Liability may extend beyond users to developers and providers, particularly in cases of copyright infringement or product defects. Moreover, because users rarely appreciate the full extent of what they are "accepting", companies in the UK and EU are expected to bring the significant clauses of their terms and conditions clearly to users’ attention; the fact that they often do not shifts the blame onto the company, not the user.