The Marketing Concern: Truth-in-Advertising Laws
For companies planning to market their AI skills and tools, one immediate consideration isn’t unique to AI: It’s those darn truth-in-advertising laws. In February the FTC reminded businesses that laws around product claims apply to today’s advanced tech just as they do to traditional goods and services. If an advertiser claims that its product is “AI-enabled,” then it needs to be; merely using AI in the product development process won’t cut it. Similarly, advertisers must substantiate performance claims and comparative claims about automated technologies, and the claims must accurately reflect the technologies’ limitations. The FTC is coupling this guidance with action. In August the agency sued Automators AI for falsely and misleadingly promising consumers financial gains from using AI.
Companies selling assets made by AI also need to consider whether the AI-generated nature of their content warrants additional disclosures. The FTC has been clear that deceptively peddling AI-generated lookalikes and sound-alikes as the work of real artists or musicians violates the law. At the state level, California’s Bolstering Online Transparency (BOT) Act similarly prohibits using a bot to deceive people in a sales- or election-related context. And the FTC has even suggested that companies offering generative AI products may need to disclose the extent to which their training data includes copyrighted or protected materials.
Despite the FTC’s flurry of new guidance on the topic, the agency acknowledges that “artificial intelligence” remains a nebulous phrase. And the varied and vague meanings of the term may still allow many companies to claim—lawfully—that they’re indeed AI-driven.
The Bigger Worry: A Growing Web of Laws and Regulations
With guidance from their lawyers, companies should be able to ensure that their marketing claims about AI are truthful and evidence-based. But false-advertising laws shouldn’t be the only consideration for organizations deciding whether to venture into this new territory. Businesses rushing toward AI’s gold-laden hills should realize that lawmakers, regulators, and others are heading to the same place.
For these other parties, the definitional fuzziness of the term “AI” isn’t a marketing opportunity; it’s a broad target. Take federal agencies, for example. In April, weeks after Bill Gates proclaimed that “[t]he Age of AI has begun,” the FTC and three other agencies—the Consumer Financial Protection Bureau, the Justice Department’s Civil Rights Division, and the Equal Employment Opportunity Commission—jointly pledged to “vigorously use our collective authorities” to monitor the emerging tech. The four agencies see their authority as extending not just to the already-expansive category of AI but to all “automated systems”—a term they use “broadly” to encompass any “software and algorithmic processes . . . used to automate workflows and help people complete tasks or make decisions.” Bottom line: If you’re doing anything involving AI or adjacent to AI, you’re on these regulators’ radar.
President Biden’s October Executive Order (EO) on artificial intelligence will only heighten the regulatory buzz. The lengthy EO directs several agencies to propose regulations and provide guidance on AI. For instance, the Secretary of Commerce must issue guidelines and best practices for developing “safe, secure, and trustworthy AI systems”—guidance that will likely affect how private industry designs and deploys AI.
Legislators, too, have big plans for the tech. Indeed, some laws regulating AI and similar technology are already on the books. For instance, the California Privacy Rights Act of 2020 (CPRA) charges the California Privacy Protection Agency with issuing regulations on how businesses employ automated decision-making technology. The California Age-Appropriate Design Code Act, passed in 2022, likewise limits how businesses use algorithms in services and products likely to be accessed by minors. East of the Golden State, Colorado adopted a regulation, effective in November, that seeks to prevent life insurers from using algorithms and predictive models to discriminate on the basis of race. And across the Atlantic, the European Union’s General Data Protection Regulation (GDPR) has for years let individuals opt out of certain automated decision-making.
Other legislative changes are coming. On Capitol Hill, a bipartisan group of lawmakers is reportedly developing a “sweeping framework” to regulate AI, including licensing and auditing requirements, rules around data safety and transparency, and liability provisions. State legislatures are a step ahead of Congress, with several AI-focused laws proposed or already passed. And the EU has drafted what it calls “the world’s first rules on AI.” Its AI Act would impose comprehensive requirements—involving security, training, data governance, and transparency—on any company using, developing, or marketing “AI systems” in the EU. While the European Parliament acknowledges industry groups’ concerns that the term “AI systems” is too far-reaching, the EU still intends to adopt a “broad definition” to cover both current and future developments in AI technology.
In response to these changes, a new category of “AI governance” professionals has emerged to shepherd companies through the coming lawmaking and regulation. Such roles may soon become must-haves for companies looking to commercialize AI.
Whatever one’s views on the need for state intervention into this new technology—and expectations for how effective it will be—this governmental pencil-sharpening should make any prudent company pause before slapping an “AI” sign on its storefront. Just as a complex and ever-growing regulatory regime governs privacy and personal data, a thicket of statutes and regulations has been sprouting around AI and anything like it. Stepping into that thicket shouldn’t be a hurried marketing move; it should be a calculated decision.
To Brand as “AI” or Not to Brand as “AI”
Many commentators foresee AI as the next internet or mobile phone—a revolutionary technology destined to be the bedrock of any modern business. And it may well be. Yet those predictions are made in a temporary Eden of minimal regulation. They fail to consider the horde of governmental actors poised to shake up the landscape.
For some companies, the costs of operating within a complex regulatory regime—and the risks of noncompliance—may outweigh any potential benefits from repositioning their business around AI. These companies may deliberately choose not to get into the AI game.
With the AI frenzy still in full swing, that may sound far-fetched. But it shouldn’t. Regulatory avoidance—that is, structuring one’s business to lawfully stay outside the reach of particular laws and regulations—is common today. Many organizations choose not to operate in certain jurisdictions, for instance, or decline to process sensitive data like biometrics or minors’ personal information, so that they can limit their legal obligations and business exposure. While that may mean forgoing some commercial opportunities, these organizations don’t see the value as justifying the additional regulatory burden.
Conclusion
The AI wave may be inevitable, with every business forced to swim along or else sink. But companies are naive if they think the “AI” tag attracts nothing but customers—governmental bodies around the world see it, too. Before rebranding, then, smart organizations should consider not just the immediate marketing benefits of being “powered by AI” but also its long-term costs and risks.