9 Comments
Katherine Argent:

We’re aware that humans can make mistakes and yet corporations can be held liable for employee error. Similar liability should attach when a corporation uses AI as an agent or to carry out tasks formerly handled by a human employee.

Tech Law Standard:

If a company delegates tasks to AI tools in place of human employees, the company is ultimately responsible for the outcomes. The law of agency will, to some extent, apply here (note: ChatGPT and Gemini are not AI agents). The law on AI agents is being reformed in the EU; see one of our posts: https://www.technologylaw.ai/p/european-law-institute-digital-assistants

Just as employers are held accountable for the actions of their staff, using AI does not remove that responsibility. The AI tool is part of the company’s operations. If it gives incorrect information or causes harm, the company remains the decision-maker and should bear the consequences, unless of course it took "reasonable" steps to prevent the harm; the degree of liability will also turn on whether the harm was "reasonably foreseeable".

The law on AI agents is still grey and underdeveloped. Accountability should follow function, not form, regardless of whether a task is done by a person or a bot. The duty to the public remains the same.

Digital-Mark:

Such disclaimers may not always be fully enforceable or effective. Legal frameworks in some jurisdictions (e.g., the EU, UK, and Switzerland) suggest that AI developers and providers can still be held liable for damages caused by their AI tools, especially if they acted negligently or wilfully, or if the AI system causes harm despite contractual disclaimers. Liability may extend beyond users to developers and providers, particularly in cases of copyright infringement or product defects.

Moreover, users rarely know the full extent of what they "accept" in the T&Cs. In the UK and EU, a company is expected to bring all the clauses of its T&Cs to the user's attention in full; the fact that companies do not shifts the blame onto them and not the users.

Tech Law Standard:

You are right to note that in jurisdictions like the UK and EU, disclaimers do not offer blanket protection. We indicated that in the post, at the conclusion, and we also shared an excerpt from Google (see Image 5): Google recognises that these "consumer" rights cannot be overridden by its AI services. Under the UK Consumer Rights Act and the EU Product Liability Directive (85/374/EEC), companies can still be liable for harm caused by their defective AI systems. In fact, courts may view a failure to clearly communicate terms as misleading, weakening the disclaimers in practice.

Nonetheless, the main problem remains enforcement. Courts still interpret liability narrowly, especially if the harm resulting from use of the AI bot is indirect or hard to quantify. AI providers also often operate across jurisdictions, which makes it harder to hold them accountable under any one country’s justice system. As a result of this lack of harmonisation, protection is patchy and often inconsistent.

Digital-Mark:

Maybe the US can adopt the same legal stance as the UK and the EU. Very good article, and indeed legislation needs to be harmonised and upheld above all financial interests.

Tech Law Standard:

The U.S. could certainly benefit from adopting stronger legal standards like those in the UK and EU. The future of AI regulation lies in harmonisation, which will be shaped by private international law.

Actually, the U.S. House of Representatives has recently passed a Bill, the "One Big Beautiful Bill Act", which includes a 10-year ban on U.S. states regulating AI. We are currently working on an article about it and will publish the legal implications of this Bill in the next 1-2 weeks, including how it will impact AI legal harmonisation.

Digital-Mark:

They are going to ban AI regulations for 10 years? 😬

Tech Law Standard:

They want to ban states from passing laws regulating AI; the federal government wants sole power to regulate AI.

Digital-Mark:

I don't know if it's good or bad, but I can imagine how many battles would be fought in the corridors of power in D.C.
