Introduction
The rapid advancement of artificial intelligence (AI) presents new challenges and opportunities for protecting Indigenous cultural heritage. While cultural appropriation has long been a legal and ethical issue, AI systems exacerbate these concerns by harvesting Indigenous data, replicating traditional designs, and generating cultural expressions without consent. This raises critical questions about intellectual property (IP) protections, ethical AI use, and regulatory oversight.
This article explores the legal frameworks governing Indigenous cultural appropriation, the impact of AI on Indigenous intellectual property rights, and actionable legal strategies for protecting Indigenous traditions in the digital era.
Key Legal Cases
Navajo Nation v. Urban Outfitters (2012)
The legal battle between Urban Outfitters and the Navajo Nation stands as one of the most significant examples of a tribe asserting control over its brand and identity in the face of cultural exploitation.
In Navajo Nation v. Urban Outfitters, Inc., 935 F. Supp. 2d 1147 (D.N.M. 2013), the Navajo Nation sued the retail giant for selling "Navajo"-branded clothing, jewelry, and even flasks, arguing that:
- The products infringed the Navajo Nation’s registered trademarks under the Lanham Act, 15 U.S.C. § 1114.
- Urban Outfitters engaged in false advertising and unfair competition under 15 U.S.C. § 1125(a).
- The company violated the Indian Arts and Crafts Act (IACA), 25 U.S.C. § 305e, which prohibits falsely marketing products as Native American-made.
Urban Outfitters claimed that "Navajo" was a generic term, but the court rejected this defense, affirming that Indigenous groups can control how their names and designs are used in commerce. The case ended in a settlement, with Urban Outfitters agreeing to a licensing deal—a bittersweet victory, as legal battles of this magnitude require immense financial resources that many Indigenous communities simply don’t have.
Washington Football Team and the “Redskins” Trademark
Another high-profile dispute over Indigenous representation occurred in Blackhorse v. Pro-Football, Inc., 111 U.S.P.Q.2d 1080 (T.T.A.B. 2014), where the U.S. Patent and Trademark Office (USPTO) canceled six trademarks associated with the Washington Football Team’s former name, "Redskins," on the grounds that they were disparaging under the Lanham Act’s anti-disparagement clause.
However, in Matal v. Tam, 582 U.S. 218 (2017), the Supreme Court struck down the anti-disparagement clause, ruling that it violated the First Amendment. This meant that offensive trademarks could not be denied or canceled on moral grounds alone.
Despite winning in court, the team ultimately changed its name due to overwhelming public and corporate pressure—a reminder that cultural and financial consequences often outweigh legal ones.
The Zia Symbol and Cultural Misappropriation
The Zia symbol, which appears on New Mexico’s state flag, was adopted without the Zia Pueblo’s consent in 1925. Since then, it has been widely commercialized without benefiting the tribe.
In 2014, the Zia Pueblo issued a resolution requesting that businesses seek permission via email before using the symbol. However, enforcement is difficult because trademark law offers limited protection for widely used cultural symbols.
To address such issues, the USPTO maintains the Database of Native American Tribal Insignia, which helps trademark examiners identify and refuse applications for marks that falsely suggest a connection to a tribe. Indigenous communities are encouraged to record their official insignia in this database; while inclusion does not itself confer trademark rights, it strengthens the grounds for rejecting unauthorized applications.
Artificial Intelligence and Indigenous Cultural Appropriation
AI doesn’t just copy Indigenous art; it consumes it, training on massive datasets scraped from the internet, including Indigenous symbols, languages, and traditional patterns. These datasets are then used to generate art, music, and even language models that mimic Indigenous cultures, often without credit, consent, or compensation.
One real-world example is Lionbridge, a tech company that recruited Indigenous speakers to help train AI language models, but with unclear agreements on data ownership. Who owns the language once it’s fed into AI? If an AI model generates speech in a Native tongue, does it belong to the AI company, or the people who have spoken it for generations?