June 18, 2025 Technology

The legal risks of AI speaking for your business

Does the idea of AI support tools entering contracts, making enforceable promises, or sharing false or misleading information with consumers cause you to think twice as a business owner?

According to Brendan Bernicker, co-founder of Bernicker Law PLLC, a boutique law firm that advises technology startups and software developers in the U.S. and U.K., when it comes to AI speaking on behalf of businesses, these are things, among others, that business owners should be thinking about or be "a little worried about," because all of them could create legal liability for the company.

Bernicker, who spoke at the ABA-sponsored webinar "We Said What?! Companies' Liability for Statements by (and to) Their AI Customer Service Agents," said businesses that use chatbots or other AI tools that interact with customers should learn how to weigh the legal consequences of "allowing AI to interact with your customers." There are risks, he said.

Bernicker said it’s good to look at cases from other countries as well because "AI is inherently international. In the U.S. we have a traditional allergy to looking to the judgments of foreign courts, but it would be a real disservice in the AI area not to try to benefit from these sorts of reasoned judgments from legal colleagues in other countries."

Moffatt v. Air Canada, a 2024 case decided by the British Columbia Civil Resolution Tribunal, found Air Canada liable for negligent misrepresentation by its AI chatbot. After the plaintiff Moffatt’s grandmother passed away, he asked the chatbot on Air Canada’s site about its bereavement policies and was told he could book his flight, request a refund within 90 days, and receive a bereavement discount afterwards.

The chatbot’s response included a link to Air Canada’s actual policy, which stated that the bereavement discount had to be approved in advance. So when he traveled and asked for a refund, he was denied. He sued in small claims court. "That sort of teed up the first real decision on the question about when businesses are liable for statements by their AI agents," Bernicker said. The tribunal held that Air Canada was liable and therefore had to honor the discount.

Bernicker said Air Canada could have set the AI model up differently so that it didn’t provide information that it wasn’t prepared to honor. It could have just linked to the policy and not commented on it. “You could just do retrieval, or you could just share the results, you don’t have to let the model write some content,” he said.
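The retrieval-only approach Bernicker describes can be sketched in a few lines. This is a minimal, hypothetical illustration (the policy store, URL, and function names are all assumptions, not Air Canada's actual system): the bot quotes the official policy text verbatim and links to the source, rather than letting a generative model paraphrase terms the business may not be prepared to honor.

```python
# Hypothetical sketch of a retrieval-only support bot: it returns verbatim
# policy text plus a link, and never asks a model to generate new content.

POLICIES = {
    "bereavement": {
        "url": "https://example.com/policies/bereavement",  # placeholder URL
        "text": "Bereavement fares must be approved before travel.",
    },
}

def answer(topic: str) -> str:
    """Return the official policy text verbatim, never generated prose."""
    policy = POLICIES.get(topic)
    if policy is None:
        # Fall back to a human rather than letting a model improvise.
        return "Please contact a human agent for help with that question."
    # Quote the policy as written and point to the authoritative source,
    # so the bot cannot promise terms the business won't honor.
    return f'Our policy states: "{policy["text"]}" See {policy["url"]}'

print(answer("bereavement"))
```

The design choice is the point: whatever the bot says is, by construction, exactly what the published policy says.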

Bernicker, who teaches a course on AI law at Penn State University, said he often gets the question, "When did Congress or my state make an AI law?" Most of what he covers in the course and advises clients on is how existing law applies to new technologies. "This is not an area where there are going to be lots of new rules for liability for chatbots. Traditional rules serve pretty well, like the doctrine of apparent authority."

Bernicker discussed hypotheticals and various scenarios involving chatbots acting on behalf of the principal. For instance, he said, AI could cause challenges when businesses receive notices, such as copyright or cease-and-desist notices.

When representing businesses in disputes, oftentimes the issue in the case could revolve around “what the business knew and when they knew it,” Bernicker said.

Even though AI will be considered an agent for the business in most cases, Bernicker said businesses could limit the AI tools’ apparent authority through prominent, clear and tailored disclaimers. He said that "effective disclaimers" and "having records of what your customers’ interactions are with your platforms" are "the number one and number two best ways" for clients to limit their liability.
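The record-keeping half of that advice is straightforward to implement. Below is a minimal sketch (all names and the disclaimer text are hypothetical, not from the webinar) of logging every chatbot exchange with a timestamp and the disclaimer that was displayed, so a business can later show exactly what its bot said and when.

```python
# Hypothetical sketch: keep a timestamped record of each chatbot exchange,
# including the disclaimer shown to the customer at the time.

import json
from datetime import datetime, timezone

DISCLAIMER = "This assistant provides general information only."  # example text

def log_exchange(log: list, user_msg: str, bot_msg: str) -> dict:
    """Append one timestamped record of a chatbot exchange and return it."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "disclaimer_shown": DISCLAIMER,
        "user": user_msg,
        "bot": bot_msg,
    }
    log.append(record)
    return record

log: list = []
log_exchange(log, "What is your refund policy?", "See our policy page.")
print(json.dumps(log[0], indent=2))
```

In practice these records would go to durable, tamper-evident storage rather than an in-memory list, but the principle is the same: the log answers "what did the business know, and when did it know it."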

Other topics included data privacy, third-party vendors, AI policies and terms of service.
