In February 2024, Air Canada’s chatbot told a grieving passenger he qualified for a bereavement discount that didn’t exist. The passenger booked tickets based on this advice. When the promised refund never came, he sued. Air Canada argued the chatbot was a separate entity responsible for its own actions. A Canadian tribunal rejected this defense, ruling the airline must honor the nonexistent policy and pay damages.
For marketing teams deploying AI agents, this creates a problem most platforms ignore. You get tools to build agents, then you’re left to figure out liability, compliance, and governance on your own. The Smyth Runtime Environment (SRE) addresses this by embedding accountability into the infrastructure itself, treating governance as a core feature rather than an afterthought.
When AI Goes Wrong, You Pay
The Air Canada case wasn’t isolated. McDonald’s shut down its AI drive-thru partnership after viral videos showed the system adding 260 Chicken McNuggets to orders while customers pleaded with it to stop. Google’s Bard chatbot cost Alphabet $100 billion in market value after providing false information during a public demonstration.
Here’s what makes AI liability different from regular software bugs. AI agents operate with meaningful autonomy, making decisions and taking actions with minimal human oversight. They don’t just execute commands. They reason, adapt, and sometimes do things their creators never anticipated.
Because AI agents lack intentions in the way humans do, the law holds them to objective standards and ascribes responsibility to the human beings who deploy them. Stanford Law School researchers explain that under the Uniform Electronic Transactions Act, AI agents can form contracts on behalf of their users, with principal-agent law operating in the background.
Translation: you can’t blame the bot. If your AI agent makes a contract on your behalf, you can’t claim ignorance of what it did.
The Marketing Compliance Minefield
Marketing teams face particularly dangerous exposure. Your AI writes compelling copy, but does it make promises you can’t keep? Forrester Research found that 61% of businesses using AI in marketing faced a compliance-related issue in 2024.
The regulatory environment got significantly more complex in 2025. California’s Privacy Protection Agency approved regulations requiring pre-use notices for automated decision-making technology, with penalties up to $7,500 per intentional violation. Florida’s AI legislation requires businesses to disclose AI-generated content in marketing materials. The Future of Privacy Forum tracked 210 bills across 42 states that could affect private-sector AI development, with 20 bills enacted.
A California court allowed a discrimination case against Workday to proceed, treating the AI vendor as an agent of the employer since the employer delegated traditional hiring functions to algorithmic decision-making tools.
Why Most Platforms Leave You Exposed
Here’s the uncomfortable gap: 47% of organizations have an AI risk management framework, yet 70% lack ongoing monitoring and controls. Most companies deploy AI faster than they build guardrails.
Typical agent frameworks give you the tools to build, then leave compliance as your problem. You’re supposed to manually handle audit logs, access controls, and governance. Organizations using manual compliance processes experience 3.2 times more violations than those with automation.
SmythOS SRE solves this by treating governance as infrastructure. Built-in audit logs capture every critical operation automatically. Role-based access control enforces permissions in real time. Vault-based secret management ensures credentials never get exposed in agent actions.
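To make the pattern concrete, here is a minimal sketch of what “governance as infrastructure” can look like in practice: every agent action passes through a permission check, pulls credentials from a vault at call time, and writes an append-only audit entry whether the action succeeds or is denied. All names here (`governed`, `ROLES`, `VAULT`, `send_campaign`) are illustrative assumptions for this sketch, not the actual SmythOS SRE API.

```python
# Hypothetical sketch: RBAC enforcement, vaulted secrets, and audit
# logging wrapped around every agent action. Illustrative only.
import functools
import json
import time

AUDIT_LOG = []                                    # stand-in for an append-only audit store
ROLES = {"alice": {"campaign.write"}, "bot-intern": set()}
VAULT = {"MAILCHIMP_KEY": "s3cr3t"}               # secrets live here, never in logs

def governed(permission):
    """Wrap an agent action with a permission check and audit logging."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(actor, *args, **kwargs):
            allowed = permission in ROLES.get(actor, set())
            AUDIT_LOG.append(json.dumps({
                "ts": time.time(),
                "actor": actor,
                "action": fn.__name__,
                "permission": permission,
                "allowed": allowed,
                # log argument *types*, never raw values or secrets
                "args": [type(a).__name__ for a in args],
            }))
            if not allowed:
                raise PermissionError(f"{actor} lacks {permission}")
            return fn(actor, *args, **kwargs)
        return wrapper
    return decorator

@governed("campaign.write")
def send_campaign(actor, audience):
    api_key = VAULT["MAILCHIMP_KEY"]              # fetched at call time, never logged
    return f"sent to {audience} ({len(api_key)}-char key used)"

print(send_campaign("alice", "newsletter"))       # allowed, and audited
try:
    send_campaign("bot-intern", "newsletter")     # denied, but still audited
except PermissionError as e:
    print(e)
print(len(AUDIT_LOG), "audit entries")
```

The key design choice is that the denied attempt is recorded too: an audit trail that only captures successful actions cannot answer “who tried to do what, and who had authority” when a regulator or tribunal asks.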
This matters because AI agents operating 24/7 at scale increase the potential for unintended consequences and make it challenging to detect misalignment with company goals. The SEC and DOJ are now signaling continued focus on how companies describe and govern their AI systems, expecting documented governance and periodic monitoring.
When something goes wrong (and in production systems, things will), you need to know exactly what happened, why it happened, and who had authority. SmythOS SRE delivers those kernel-level guarantees that frameworks simply don’t provide.
The liability question isn’t theoretical. It’s playing out in courtrooms right now. Companies treating AI agents like magic black boxes will learn expensive lessons. But there’s a better path: build governance, monitoring, and clear responsibility chains into your deployment strategy from day one. Start with infrastructure designed for accountability. Star our GitHub repo to see how we’re solving the production backbone problem. Join our Discord to connect with others navigating the responsible deployment of AI. Because when your AI makes a mistake, the liability lands on you.
