
Business Law Today

September 2024

The Price of Emotion: Privacy, Manipulation, and Bias in Emotional AI

Lena Kempe

Summary

  • Emotional AI, a subset of AI that measures and reacts to human emotions, is rapidly expanding in use, with significant data privacy, manipulation, and bias risks.
  • Misuse of Emotional AI may lead to legal issues under US and EU laws, resulting in potential government fines, investigations, and class action lawsuits.
  • This article explores relevant legal frameworks for each of Emotional AI’s key areas of risk and highlights potential strategies for risk mitigation and compliance.

Imagine shopping for Christmas gifts online without knowing that AI is tracking your facial expressions and eye movements in real time and steering you toward more expensive purchases by prioritizing the display of similar high-priced items. Now picture a job candidate whose quiet demeanor is misinterpreted by an AI recruiter, costing him his dream job. Emotional AI, a subset of AI that “measures, understands, simulates, and reacts to human emotions,” is rapidly spreading. Used by at least 25 percent of Fortune 500 companies as of 2019, with the market size projected to reach $13.8 billion by 2032, this technology is turning our emotions into data points.

This article examines the data privacy, manipulation, and bias risks of Emotional AI, analyzes relevant United States (“US”) and European Union (“EU”) legal frameworks, and proposes compliance strategies for companies.

Emotional AI, if not operated and supervised properly, can cause severe harm to individuals and subject companies to substantial legal risks. It collects and processes highly sensitive personal data related to an individual’s intimate emotions and has the potential to manipulate and influence consumer decision-making processes. Additionally, Emotional AI may introduce or perpetuate bias. Consequently, the misuse of Emotional AI may result in violations of applicable EU or US laws, exposing companies to potential government fines, investigations, and class action lawsuits.

1. Emotional AI Defined

Emotional AI techniques can include analyzing vocal intonations to recognize stress or anger and processing facial images to capture subtle micro-expressions. As it develops, Emotional AI has the potential to revolutionize how we interact with technology by making those interactions more relatable and emotionally responsive. Already, Emotional AI personalizes experiences across industries: it helps call center agents tune into customer emotions, lets instructors tailor learning, powers healthcare chatbots that offer support, and guides the editing of ads for emotional impact. In trucking, it detects driver drowsiness to improve safety; in games, it personalizes the player experience.

2. Data Privacy Concerns

Emotional AI relies on vast amounts of personal data to infer emotions (output data), raising privacy concerns. It may use the following input data:

  1. Textual data: social media posts and emojis.
  2. Visual data: images and videos, including facial expressions, body language, and eye movements.
  3. Audio data: voice recordings, including tone, pitch, and pace.
  4. Physiological data: biometric data (e.g., heart rate) and brain activity via wearables.
  5. Behavioral data: gestures and body movements.
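
To make these categories concrete, the sketch below models the kinds of input and output records an Emotional AI system might handle. It is purely illustrative: the EmotionalDataRecord and EmotionInference types and their field names are assumptions made for this article, not drawn from any particular product or statute.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class EmotionalDataRecord:
    """Illustrative container for the input data an Emotional AI system might collect."""
    subject_id: str                                      # links the record to a person, making it Personal Data
    text: Optional[str] = None                           # 1. textual data: posts, messages, emojis
    facial_frames: list = field(default_factory=list)    # 2. visual data: image/video frames
    voice_clip: Optional[bytes] = None                    # 3. audio data: tone, pitch, pace
    heart_rate_bpm: Optional[float] = None                # 4. physiological data from wearables
    gestures: list = field(default_factory=list)          # 5. behavioral data: gestures, body movements

@dataclass
class EmotionInference:
    """Output data: the inferred emotional state is itself Personal Data about the subject."""
    subject_id: str
    label: str          # e.g., "frustrated", "calm", "stressed"
    confidence: float   # model confidence in the inference
```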

Because emotions are among the most intimate aspects of a person’s life, people are naturally more worried about the privacy of data revealing their emotions than about other kinds of personal data. Imagine a loan officer using AI-based emotional analysis to collect and analyze loan applicants’ gestures and voices at interviews. Applicants may be concerned about how their data will be used, how they can control such uses, and the potential consequences of a data breach.

A. Legal Framework

The input and output data of Emotional AI (“Emotional Data”), if they directly identify, relate to, or are reasonably linkable to an individual, fall under the broad definition of “Personal Data” and are thus protected under various US state data privacy laws and the European Union’s General Data Protection Regulation (“GDPR”), which serves as the baseline for data privacy laws in EU countries. For example, gestures and body movements, voice recordings, and physiological responses—all of which can be processed by Emotional AI—can be directly linked to specific individuals and therefore constitute Personal Data. Comprehensive data privacy laws in many jurisdictions require the disclosure of data collection, processing, sharing, and storage practices to consumers. They grant consumers the rights to access, correct, and delete Personal Data; require security measures to protect Personal Data from unauthorized access, use, and disclosure; and stipulate that data controllers may only collect and process Personal Data for specified and legitimate purposes. Additionally, some laws require limiting the duration of data storage and minimizing the Personal Data collected to what is necessary to achieve the stated purposes of processing.

Furthermore, if the Personal Data have the potential to reveal certain characteristics such as race or ethnicity, political opinions, religious or philosophical beliefs, genetic data, biometric data (for identification purposes), health data, or sex life and sexual orientation, they will be considered sensitive Personal Data (“SPD”). For instance, an Emotional AI system that analyzes voice tone, word choice, or physiological signals to infer emotional states could reveal SPD such as an individual’s political opinions, mental health status, or religious beliefs, for example by analyzing a person’s speech patterns and stress levels during discussions of certain topics. Both the GDPR and several US state privacy laws provide strong protections for SPD. The GDPR requires organizations to obtain a data subject’s explicit consent to process SPD, with certain exceptions. It also mandates a data protection impact assessment when automated decision-making with profiling significantly impacts individuals or involves processing large amounts of sensitive data. Similarly, several US state laws require a controller to perform a data protection assessment and obtain valid opt-in consent. California grants consumers the right to limit the use and disclosure of their SPD to what is necessary to deliver the services or goods. The processing of SPD may also be subject to other laws, such as laws on genetic data, biometric data, and personal health data. Depending on the context where Emotional AI is utilized, certain sector-specific privacy laws may apply, such as the Gramm-Leach-Bliley Act (“GLBA”) for financial information, the Health Insurance Portability and Accountability Act (“HIPAA”) for health information, and the Children’s Online Privacy Protection Act (“COPPA”) for children’s information.

Emotional AI relies heavily on biometric data, such as facial expressions, voice tones, and heart rate. One of the most comprehensive and most litigated biometric privacy laws is Illinois’s Biometric Information Privacy Act (“BIPA”). Under the BIPA, “Biometric information” includes any information based on biometric identifiers that identify a specific person. “Biometric identifiers” include “a retina or iris scan, fingerprint, voiceprint, or scan of hand or face geometry.” The BIPA imposes the following key requirements on private entities that collect, use, and store Illinois residents’ biometric identifiers and information:

  1. Develop and make accessible to the public a written policy that outlines the schedules for retaining biometric data and procedures for its permanent destruction.
  2. Safeguard biometric data with a level of care that meets industry standards or is equivalent to the protection afforded to other sensitive data.
  3. Inform individuals about the specific purposes for which their biometric data is being collected, stored, or used, and the duration for which it will be retained.
  4. Secure informed written consent from individuals before collecting or disclosing biometric data.

The adoption of biometric privacy laws is a growing trend across the country. Several states and cities, including Texas, Washington, New York City, and Portland, have also passed biometric privacy laws.

Current data privacy laws help address the data privacy concerns related to Emotional AI. However, Emotional AI presents unique challenges in complying with data minimization requirements. AI systems often rely on collecting and analyzing extensive datasets to draw accurate conclusions. For example, Emotional AI might use heart rate to assess emotions. However, a person’s heart rate can be influenced by factors beyond emotions, such as room temperature or physical exertion. Data minimization mandates collecting only relevant physiological data, but AI systems might need to capture a wide range of data to account for potential external influences and improve the accuracy of emotional state inferences. As a result, data beyond the core emotional-state indicators ends up being collected, and which data is truly necessary may be contested.
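
One way to operationalize data minimization in this setting is to whitelist, for each declared purpose, only the signals a documented necessity analysis supports, and to discard everything else before storage. The sketch below is a minimal illustration under those assumptions; the purpose-to-field mapping and the field names are hypothetical.

```python
# Hypothetical mapping from a declared processing purpose to the fields a necessity
# analysis has found relevant; anything not listed is dropped before storage.
NECESSARY_FIELDS_BY_PURPOSE = {
    "driver_drowsiness_detection": {"eye_movement", "head_pose"},
    "call_prioritization": {"voice_tone", "speech_rate"},
}

def minimize(record: dict, purpose: str) -> dict:
    """Keep only the fields deemed necessary for the stated processing purpose."""
    allowed = NECESSARY_FIELDS_BY_PURPOSE.get(purpose, set())
    return {key: value for key, value in record.items() if key in allowed}

raw = {"voice_tone": 0.72, "speech_rate": 1.4, "heart_rate_bpm": 88, "location": "Chicago"}
print(minimize(raw, "call_prioritization"))  # {'voice_tone': 0.72, 'speech_rate': 1.4}
```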

In addition, Emotional AI development may encounter difficulties in defining the intended purposes for data processing due to the inherently unpredictable nature of algorithmic learning and subsequent data utilization. In other words, the AI might discover unforeseen connections within a dataset, potentially leading to its use for purposes that were not defined and conveyed to consumers. For example, a customer service application could use Emotional AI to analyze customer voices during calls to identify frustrated or angry customers for priority handling. Over time, the AI could identify a correlation between specific speech patterns and a higher likelihood of customers canceling the service, a purpose not included in the privacy policy.
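
A lightweight guardrail against this kind of purpose drift is to keep a registry of the processing purposes actually disclosed to consumers and to block, or escalate for legal review, any newly proposed use that is not on it. The sketch below is hypothetical; the disclosed purpose and the churn-prediction use simply mirror the customer service example above.

```python
# Processing purposes disclosed in the privacy notice (assumed for illustration).
DISCLOSED_PURPOSES = {"prioritize_frustrated_callers"}

def check_purpose(proposed_use: str) -> None:
    """Raise if a proposed use of Emotional Data was never disclosed to consumers."""
    if proposed_use not in DISCLOSED_PURPOSES:
        # An undisclosed, incompatible use should trigger review: update the notice,
        # obtain new consent, de-identify the data, or find another legal basis first.
        raise PermissionError(f"'{proposed_use}' is not a disclosed processing purpose")

check_purpose("prioritize_frustrated_callers")   # disclosed use: proceeds silently

try:
    check_purpose("predict_cancellation_risk")   # the newly discovered churn correlation
except PermissionError as err:
    print(f"Blocked pending review: {err}")
```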

B. Legal Strategies

To effectively comply with the complex array of data privacy laws and overcome the unique challenges presented by Emotional AI, organizations developing and using Emotional AI should consider adopting the following key strategies:

  1. Develop a comprehensive privacy notice that clearly outlines the types of Emotional Data collected, the purposes for processing that data, how the data will be processed, and the duration for which the data will be stored.
  2. To address data minimization concerns, plan in advance the scope of Emotional Data necessary for and relevant to developing a successful Emotional AI, adopt anonymization or aggregation techniques whenever possible to remove personal data components (see the sketch following this list), and enforce appropriate data retention policies and schedules.
  3. To tackle the issue of purpose specification, regularly review data practices to assess whether Emotional Data in AI is used for the same or compatible purposes as stated in relevant privacy notices. If the new processing is incompatible with the original purpose, update the privacy notices to reflect the new processing purpose, and de-identify the Emotional Data, obtain new consent, or identify another legal basis for the processing.
  4. If the Emotional Data collected can be considered sensitive Personal Data, implement an opt-in consent mechanism and conduct a privacy risk assessment.
  5. Implement robust data security measures to protect Emotional Data from unauthorized access, use, disclosure, or alteration.
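
As a minimal illustration of the aggregation technique mentioned in item 2, the sketch below replaces individual emotion scores with cohort-level averages and suppresses cohorts small enough to allow re-identification. The field names and the ten-record threshold are assumptions for illustration, not legally defined values.

```python
from collections import defaultdict
from statistics import mean

MIN_COHORT_SIZE = 10  # assumed suppression threshold; not a number any statute prescribes

def aggregate_frustration(records: list) -> dict:
    """Report average frustration per region, never per person; identifiers are never copied forward."""
    cohorts = defaultdict(list)
    for record in records:
        cohorts[record["store_region"]].append(record["frustration_score"])
    return {
        region: round(mean(scores), 2)
        for region, scores in cohorts.items()
        if len(scores) >= MIN_COHORT_SIZE  # drop groups small enough to single someone out
    }
```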

3. Risks of Emotion Manipulation

Emotional AI carries significant risks of being used for manipulation. In three experiments, AI systems learned from participants’ responses to identify vulnerabilities in their decision-making and steer them toward desired actions. Imagine an online social media platform using Emotional AI to detect and reinforce gamblers’ addictions in order to promote ads for its casino clients.

A. Legal Framework

I. EU Law

The EU recently enacted the Artificial Intelligence Act (the “EU AI Act”), which addresses Emotional AI abuse by prohibiting two key categories of AI systems:

  1. AI systems that use subliminal methods or manipulative tactics to significantly alter behavior, hindering informed choices and causing or likely causing significant harm.
  2. Emotion recognition AI in educational and workplace settings except for healthcare or safety needs.

If an Emotional AI system is not prohibited under the EU AI Act, such as when it does not cause significant harm, it is deemed a “high-risk AI system,” subjecting its providers and deployers to various requirements, including:

  1. Providers must ensure transparency for deployers by providing clear information about the AI system, including its capabilities, limitations, and intended use cases. They must also implement data governance, promptly address any violation of the EU AI Act and notify relevant parties, implement risk and quality management systems, perform conformity assessments to demonstrate that the AI system meets the requirements of the EU AI Act, and establish human oversight mechanisms.
  2. Deployers must inform consumers of significant decisions, conduct impact assessments, report incidents, ensure human oversight, maintain data quality, and monitor systems.

II. US Law

There is no specific US law that addresses Emotional AI. However, section 5 of the Federal Trade Commission (“FTC”) Act prohibits unfair or deceptive acts or practices. FTC attorney Michael Atleson stated in a 2023 consumer alert that the agency is targeting deceptive practices in AI tools, particularly chatbots designed to manipulate users’ beliefs and emotions. Within the FTC’s focus on AI tools, one concern is the possibility of companies’ exploiting “automation bias,” the tendency of people to trust AI outputs perceived as neutral or impartial. Another area of concern is anthropomorphism: individuals may trust chatbots more when the bots are designed to use personal pronouns and emojis or otherwise resemble a human. The FTC is particularly vigilant about AI steering people unfairly or deceptively into harmful decisions in critical areas such as finance, health, education, housing, and employment. It assesses whether AI-driven practices might mislead consumers into actions contrary to their intended goals and thus constitute deceptive or unfair behavior under the FTC Act. Importantly, these practices can be deemed unlawful even if not all consumers are harmed or if the affected group does not fall under protected classes in antidiscrimination laws. Companies must ensure transparency about the use of AI for targeted ads or commercial purposes and inform users whether they are interacting with a machine and whether commercial interests are influencing AI responses. The FTC warns against cutting AI ethics staff and emphasizes the importance of risk assessment, staff training, and ongoing monitoring.

B. Legal Strategies

To avoid regulatory scrutiny and potential claims of emotional manipulation, companies developing or deploying Emotional AI should consider taking the following measures:

  1. Ensure transparency by clearly informing users when they are interacting with an Emotional AI and explaining in a privacy policy how the AI analyzes user data to infer emotion and how output data is used, including any potential commercial influences on AI responses.
  2. Refrain from using subliminal messaging or manipulative tactics to influence user behavior. Conduct ongoing monitoring and periodic risk assessments to identify and address emotional manipulation risks.
  3. If operating in the EU, evaluate the Emotional AI’s potential for causing significant harm and determine if it falls under the “prohibited” or “high-risk” category. For high-risk AI systems, comply with the applicable obligations under the EU AI Act.
  4. Train staff on best practices for developing and deploying Emotional AI.

4. Risks of AI Bias

Emotional AI may produce biased results, particularly if the training data lacks diversity. For instance, a system trained on images of people of only one ethnicity may fail to recognize the facial expressions of other ethnicities, and cultural differences in gestures and vocal expressions may be misinterpreted by an AI system that lacks diverse training data. To illustrate the potential impact of such bias, an Emotional AI trained on mental health patients from only one ethnic group may misinterpret the emotions of patients from other groups and thereby overlook important symptoms, resulting in misdiagnosis.

A. Legal Framework

I. EU Law

The EU AI Act addresses bias by imposing stringent requirements on high-risk AI providers and deployers, with a particular emphasis on the provider’s obligation to implement data governance to detect and reduce biases in datasets. The GDPR provides an additional layer of protection against AI bias. Under the GDPR, decision-making based solely on automated processing (including profiling), such as AI, is prohibited unless necessary for a contract, authorized by law, or done with explicit consent. Data subjects affected by such decisions have the right to receive clear communication regarding the decision, seek human intervention, express their viewpoint, comprehend the rationale behind the decision, and contest it if necessary. Data controllers are required to adopt measures to ensure fairness, such as using statistical or mathematical methods that avoid discrimination during profiling, implementing technical and organizational measures to correct inaccuracies in personal data and minimize errors, and employing methods to prevent discrimination based on SPD. Automated decision-making and profiling based on SPD are only permissible if the data controller has a legal basis to do so under the GDPR.

II. US Law

There is no specific federal law addressing AI bias in the US. However, existing antidiscrimination laws apply to AI. Notably, the FTC has taken action related to AI bias under the unfairness prong of Section 5 of the FTC Act. In December 2023, the FTC settled a lawsuit with Rite Aid over the alleged discriminatory use of facial recognition technology, setting a new standard for algorithmic fairness programs. This standard includes consumer notification and contesting options, as well as rigorous bias testing and risk assessment protocols for algorithms. This case also establishes a precedent for other regulators with fairness authority, such as insurance commissioners, state attorneys general, and the Consumer Financial Protection Bureau, to use such authority for enforcement against AI bias.

On the state level, in May 2024, Colorado enacted the Artificial Intelligence Act, the first comprehensive state law targeting AI discrimination, which applies to developers and deployers of high-risk AI systems doing business in Colorado. This may extend to out-of-state businesses serving consumers in Colorado. Emotional AI that significantly influences decisions with material effects in areas such as employment, finance, healthcare, and insurance is considered high-risk AI under the Act. Developers of such systems are required to provide a statement on the system’s uses; summaries of training data; information on the system’s purpose, benefits, and limitations; documentation describing evaluation, data governance, and risk mitigation measures, as well as intended outputs; and usage guidelines. Developers must also publicly disclose the types of high-risk AI systems they have developed or modified and their risk management approaches, and they must report potential discrimination issues to the attorney general and deployers within ninety days. Deployers must inform consumers of significant decisions, summarize deployed systems and discrimination risk management on their websites, explain negative decisions with correction or appeal options, conduct impact assessments, report instances of discrimination to authorities, and develop a risk management program based on established frameworks.

In addition, most state data privacy laws stipulate that a data controller shall not process personal data in violation of state or federal laws that prohibit unlawful discrimination against consumers. The use of Emotional AI in the employment context also subjects companies to various federal and state laws.

B. Legal Strategies

To comply with antidiscrimination laws and address bias risks of Emotional AI, companies developing or deploying Emotional AI should consider adopting the following strategies:

  1. Establish a robust data governance program to ensure diversity and quality of training data for Emotional AI systems, including regularly monitoring and auditing the training data.
  2. Develop a risk management program based on established risk frameworks, such as the AI Risk Management Framework released by the National Institute of Standards and Technology.
  3. Conduct routine AI risk assessments and bias testing to identify and mitigate potential biases in Emotional AI systems, particularly those used in high-risk areas such as employment, finance, healthcare, and insurance.
  4. Publicly disclose details about Emotional AI systems on the company website, including data practices, types of systems developed or deployed, and risk management approaches.
  5. Inform consumers of significant decisions made by Emotional AI systems. Establish mechanisms to allow consumers to contest decisions and appeal unfavorable outcomes, notify consumers of their rights, and provide clear explanations for decisions made by Emotional AI systems.
  6. In employment contexts, comply with federal and state laws, Equal Employment Opportunity Commission guidance, and Colorado’s and the EU’s AI Acts.

5. Conclusion

The rapid growth of Emotional AI presents a complex challenge to legislators. The EU’s strict regulations on AI and data privacy more effectively safeguard consumers’ interests. However, will this approach hinder AI innovation? Conversely, the reliance of the United States on a patchwork of state and sector laws, along with federal government agencies’ guidance and enforcement, creates more room for AI development. Will this strategy leave consumer protections weak and impose burdensome compliance requirements? Should the United States consider federal legislation that balances innovation with consumer protections? This is an important conversation. In the meantime, companies must continue to pay close attention to Emotional AI’s legal risks across a varied legal landscape.
