

Communications Law Fall 2024 Report

Joseph E Cosgrove Jr, Christian F Binnig, Daniel R Conway, Andrew Cooper Emerson, and Joe Tocco

Summary

  • President Biden issued an Executive Order focused on advancing safe and trustworthy artificial intelligence development.
  • The European Union and the United States have entered a new transatlantic data privacy framework to replace the invalidated Privacy Shield.
  • In the past couple of years, various courts have heard cases that challenge the scope of Section 230 of the Communications Decency Act.

Introduction

The rapid development of technology and services in the communications arena continues to be staggering. This environment presents unique legal challenges to lawyers, legislators, regulators, and judges. While some issues still linger (e.g., net neutrality), there are fresh puzzles to solve, such as artificial intelligence. In addition, approaches taken around the world now influence the decision-making of policymakers and industry participants. The landscape can and does change in just a few months’ time, making it challenging for practitioners to keep pace. We hope this report provides some assistance.

A. Executive Branch and Regulatory Developments

1. Net Neutrality (Again) in the U.S.

The Federal Communications Commission (FCC) under the Biden administration recently began taking steps to reinstate net neutrality rules that were repealed in 2017. In May 2024, the FCC issued the “Open Internet Order” by a three-to-two “party-line” vote. The FCC reclassified broadband Internet access service (BIAS) as a telecommunications service under Title II of the Communications Act of 1934. This is not the first time the FCC has classified BIAS as a telecommunications service. The FCC wrote:

Since the Commission’s abdication of authority over broadband in 2017, there has been no effectual federal oversight over this vital service. Our classification decision today reestablishes the Commission’s authority to protect consumers and resolves the pending challenges to the Commission’s faulty 2017 classification decision.

If this latest approach survives the pending court challenges (discussed in the judicial developments section below), net neutrality rules will once again prohibit internet service providers (ISPs) from blocking, throttling, or prioritizing certain content or services, affecting how ISPs manage and price their offerings.

2. AI Regulation – Executive Branch

In July 2023, large AI players entered into a voluntary “safeguards” agreement with the Biden-Harris Administration. Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI each agreed to a set of eight “rules” (voluntary and not legally enforceable) pending legislation.

In late 2023, President Biden issued an Executive Order on AI aimed at fostering safe and trustworthy AI development. This order is one of the most significant governmental actions taken on AI in the United States to date, reflecting growing concerns about the rapid development and potential risks associated with AI. Key elements of the Executive Order include:

  • Safety Standards and Testing: The order mandates that AI systems, especially those used in critical areas like national security, healthcare, and finance, must undergo rigorous testing to ensure they are safe, secure, and free from biases. Agencies like the National Institute of Standards and Technology (NIST) are tasked with developing these standards.
  • Transparency and Accountability: The order requires AI developers to be more transparent about how their systems operate and the data they use. Companies will need to provide detailed reports on AI systems that are high-risk, ensuring they can be audited for compliance with safety and ethical standards.
  • Privacy Protections: The order emphasizes the protection of individual privacy in AI systems. It mandates the development of privacy-enhancing technologies and practices, such as differential privacy, to minimize the collection and use of personal data.
  • Workforce Impact and Equity: The order addresses concerns about the impact of AI on the workforce, calling for studies and initiatives to ensure that AI adoption does not exacerbate inequality. It also promotes education and training programs to prepare the workforce for changes brought about by AI.
  • International Cooperation: Recognizing the global nature of AI development, the order calls for international cooperation to establish common standards and practices. This includes working with allies to prevent the use of AI in ways that could threaten global security.
  • Ethics and Civil Rights: The order underscores the need for AI to be developed and used in ways that respect civil rights and ethical principles. This includes preventing the use of AI in ways that could lead to discrimination or violations of civil liberties.
  • Government Use of AI: The order also addresses how federal agencies use AI, requiring them to adopt AI technologies responsibly and in ways that align with the public good. Agencies are directed to create policies that ensure AI is used to improve services without compromising public trust.

This executive order represents a significant step toward regulating AI at a federal level in the United States. It acknowledges both the opportunities and risks presented by AI and seeks to establish a framework that ensures AI is developed and deployed in ways that are safe, ethical, and beneficial to society. The order's emphasis on international cooperation also highlights the need for a coordinated global approach to AI governance.

In addition, federal agencies reported that they completed all of the 270-day actions in the Executive Order on schedule, following their on-time completion of every other task required to date. On July 26, 2024, the Biden-Harris Administration announced that Apple signed onto the voluntary commitments.

3. Robocalls and AI

In August 2024, the FCC adopted a notice of proposed rulemaking (NPRM) regarding AI-generated robocalls and robotexts. The NPRM is seeking comment on:

  • the definition of AI-generated calls;
  • requiring callers to disclose their use of AI-generated calls;
  • supporting technologies that alert and protect consumers from unwanted and illegal AI robocalls; and
  • protecting positive uses of AI to improve access to the telephone network for people with disabilities.

4. AI Regulation – Local and State

As discussed in this report, there has been substantial movement towards regulating AI technologies. Although the U.S. is still developing its approach, significant steps have been taken at local, state and federal levels. New York City, for instance, has implemented AI audit laws focused on employment decisions.

5. Lapse of Spectrum Auction Authority

In 2021, the FCC’s auction of C-Band spectrum for 5G raised $80.9 billion, the highest-grossing spectrum auction to date and nearly double the prior record, according to the FCC. But as we noted in our prior reports, in 2023, Congress failed to renew the FCC’s longstanding authority to conduct spectrum auctions. No real progress has been made on this important issue, which is especially critical as the need for spectrum grows daily.

6. AI Regulation – Europe

Despite the actions taken by the Biden Administration, Europe leapfrogged the United States in AI regulation (as it previously did in the area of privacy). The EU’s historic legislation, known as the EU Artificial Intelligence Act (AI Act), passed by a vote of 523-46 in March 2024, creating the world’s first comprehensive framework for AI regulation. The European bloc took a risk-based approach to AI governance: practices considered to pose unacceptable risk are strictly prohibited, other AI systems are classified as high-risk and face heightened obligations, and responsible innovation is otherwise encouraged. The AI Act generally provides the following:

  • A broad definition of AI that applies to many different entities, including providers, deployers, importers and distributors of AI systems, and will have a wide extra-territorial scope;
  • A cross-sectoral, risk-based classification system with an outright prohibition on certain AI practices deemed to impose unacceptable risk;
  • New obligations largely targeting AI systems deemed “high-risk” and obligations on providers of general-purpose AI systems, including generative AI systems like ChatGPT; and
  • Significant, GDPR-style fines of up to 7% of annual global turnover for certain offenses.

The AI Act was formally published in the Official Journal of the European Union on July 12, 2024, and took effect on August 1. Its provisions will be phased in over periods ranging from six to 36 months.

7. EU Digital Markets Act (DMA) and Digital Services Act (DSA)

The European Union has implemented two major regulations aimed at controlling the power of Big Tech companies: the DMA, which targets anti-competitive practices by large digital platforms, and the DSA, which focuses on content moderation, transparency, and accountability in digital services. These laws mark a significant shift in how digital markets are regulated, setting precedents for other regions and potentially leading to stricter regulations on companies like Google, Apple, and Meta.

8. Cybersecurity Regulation and Incident Reporting

In response to rising cyber threats, several countries, including the U.S., implemented or proposed new cybersecurity regulations requiring critical infrastructure and telecommunications companies to report cyber incidents more swiftly and to strengthen their defenses. These regulations aim to improve national security by ensuring faster response times to cyberattacks and increasing the resilience of critical infrastructure against digital threats.

New York State updated its cybersecurity regulations, expanding coverage to include more entities and requiring more frequent risk assessments. These regulations are part of a broader trend of increasing cybersecurity mandates at both state and federal levels, reflecting growing concerns over data breaches and ransomware attacks.

9. EU-U.S. Data Privacy Framework

The European Union and the United States have entered into a new transatlantic data privacy framework to replace the invalidated Privacy Shield. This agreement aims to ensure that data flows between the two regions comply with EU privacy laws. President Biden has stated that this framework is crucial for businesses engaged in transatlantic data transfers, as it provides a legal basis for the transfer of personal data from the EU to the U.S., impacting how companies handle data privacy and protection.

10. Digital Discrimination

The FCC created a task force and later introduced rules under the Infrastructure Investment and Jobs Act (IIJA) aimed at reducing "digital discrimination" in broadband access. These rules are intended to ensure equal access to high-speed internet, especially in underserved communities, by prohibiting practices that unfairly limit broadband availability. Among other actions, the FCC stated:

We adopt the Communications Equity and Diversity Council’s recommendations that propose model policies and practices for states and localities to address digital discrimination of access. We emphasize that these model policies and practices do not foreclose adoption by states and localities of additional measures to ensure equal access to broadband service in their communities.

The rules are currently being challenged in the Eighth Circuit in Minnesota Telecomms. Alliance v. FCC. On appeal, numerous industry groups and associations argue that the FCC's digital discrimination order significantly expands its jurisdiction to include industries previously untouched by FCC regulation, and that the FCC’s rulemaking authority should be restricted to telecommunications providers and ISPs. Large ISPs had previously challenged the rules as going beyond the FCC’s authority under the IIJA and argued that any rules should be limited to an intentional-discrimination standard. The FCC counters that the rules are consistent with Congress's intent to ensure equal access to broadband and that the rules are reasonable and administrable. Oral argument is scheduled for late September 2024.

11. Regulatory and Universal Service Fees

The FCC recently adopted its annual regulatory fees order, which requires cable TV, satellite TV, broadcasters, and phone companies to pay fees covering the FCC’s $390 million budget, which is set by Congress. However, despite objections from broadcast associations, ISPs remain exempt from paying the fees.

Overall fees increased compared to the previous fiscal year, although fees for TV and radio stations were reduced. This adjustment follows the reallocation of employee costs across regulated industries. The FCC's rationale for not applying regulatory fees to ISPs rests on statutory interpretation: the FCC takes the position that the fees must be assessed on entities that utilize or benefit from FCC regulatory activities, that it does not actively regulate ISP rates or terms of service the way it does for broadcasters, satellite operators, and other sectors, and that ISPs do not directly utilize spectrum resources.

On September 11, the FCC announced that the universal service contribution factor applied to monthly interstate and international telecommunications charges will rise to 35.8 percent in the fourth quarter, up from 34.4 percent in the current quarter.
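By way of illustration only (the dollar amounts here are hypothetical, and assume the carrier passes the full factor through to the end user), the increase works out as follows on $20 of monthly interstate and international telecommunications charges:

  $20.00 × 0.344 = $6.88 under the current 34.4 percent factor, versus $20.00 × 0.358 = $7.16 under the new 35.8 percent factor.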

12. FTC Report on Social Media and Privacy

The Federal Trade Commission (FTC) recently released a staff report examining the data collection and use practices of major social media and video streaming services. The report purports to show these providers have conducted vast surveillance of consumers in order to monetize their personal information but have failed to adequately protect users online, especially children and teens.

The report makes recommendations to policymakers and companies, including:

  • Congress should pass comprehensive federal privacy legislation to limit surveillance, address baseline protections, and grant consumers data rights;
  • Companies should limit data collection, implement concrete and enforceable data minimization and retention policies, limit data sharing with third parties and affiliates, delete consumer data when it is no longer needed, and adopt consumer-friendly privacy policies that are clear, simple, and easily understood; and
  • Companies should not collect sensitive information through privacy-invasive ad tracking technologies.

B. Judicial Developments

1. Net Neutrality

On August 1, 2024, the Sixth Circuit Court of Appeals issued a significant ruling in In re: MCP No. 185 with major implications for the FCC's attempt to reinstate net neutrality rules. In that ruling, the court stayed the FCC's recently adopted Open Internet Order, which aimed to reclassify broadband internet as a telecommunications service under Title II of the Communications Act and to impose strict net neutrality regulations on ISPs.

The Sixth Circuit's decision signals that the court believes the petitioners, mainly broadband providers, are likely to succeed on the merits of their challenge. The court observed:

The history of the relevant statutory terms – “information service” and “telecommunications service” -- shows that the Act likely classifies broadband as an information service. When Congress enacted the Telecommunications Act, it enshrined the Commission’s prior dichotomy between basic and enhanced services within its new definitions of telecommunications and information services.

In this post-Loper Bright ruling (Loper Bright eviscerated Chevron deference), the Sixth Circuit grounded its reasoning in the "major questions doctrine," indicating that because the net neutrality rule addresses an issue of vast economic and political significance, it requires clear congressional authorization—something the FCC's current legislative mandate may not sufficiently provide.

Judge Sutton, citing Loper Bright in his concurring opinion, added:

The Commission’s “intention to reverse course for yet a fourth time” suggests that its reasoning has more to do with changing presidential administrations than with arriving at the true and durable “meaning of the law.”

This stay effectively halts the implementation of the net neutrality rules pending further judicial review, with oral arguments scheduled for later in 2024. This ruling is seen as a setback for the FCC, which has been pushing to reassert its authority over broadband providers, arguing that these regulations are essential for maintaining an open and fair internet.

2. Section 230 Litigation

In the past couple of years, various courts have heard cases that challenge the scope of Section 230 of the Communications Decency Act, which provides immunity to online platforms from liability for user-generated content. The cases raised significant questions about the future of Section 230. The outcome of these (and future) cases could shape the liability landscape for social media platforms, influencing how they moderate content and manage user data.

Gonzalez v. Google LLC involved a horrible set of facts. In 2015, ISIS terrorists unleashed a set of coordinated attacks across Paris, France, killing 130 victims, including a 23-year-old U.S. citizen, Nohemi Gonzalez. Her parents and brothers then sued Google LLC, arguing that Google was both directly and secondarily liable for the terrorist attack that killed her. For their secondary-liability claims, plaintiffs alleged that Google aided and abetted and conspired with ISIS, focusing on the use of YouTube (which Google owns and operates) by ISIS and ISIS supporters.

The Supreme Court held that much (if not all) of plaintiffs’ complaint failed under either its decision in a companion case (Twitter, Inc. v. Taamneh) or the Ninth Circuit’s unchallenged holdings below. Thus, the court declined to address the application of Section 230 to the complaint. Instead, it vacated the judgment below and remanded the case for the Ninth Circuit to consider plaintiffs’ complaint in light of the court’s decision in Twitter.

The more recent ruling from the United States Court of Appeals for the Third Circuit in Anderson v. TikTok, Inc. involves a case where TikTok's algorithm recommended a dangerous video challenge, the "Blackout Challenge," to a ten-year-old girl, who tragically died after attempting the challenge. The girl’s mother sued TikTok and its parent company, ByteDance, Inc., under state law, arguing that TikTok was responsible for her daughter's death due to the platform's recommendation of the harmful video.

The District Court initially dismissed the case, citing Section 230, which generally immunizes online platforms from liability for content posted by third parties. The Third Circuit, however, reversed and vacated part of the District Court's decision, remanding the case for further proceedings.

The Third Circuit's decision hinged on the interpretation of Section 230, specifically whether TikTok's algorithmic recommendations constituted the platform's own "expressive activity" and therefore were not covered by the immunity provided by Section 230. The court concluded that TikTok's algorithm, which actively recommends content to users, including minors, based on their demographics and interactions, could be seen as first-party speech rather than merely hosting third-party content. Therefore, the court ruled that Section 230 does not protect TikTok from liability in this case because the lawsuit targeted TikTok's own actions in recommending the harmful content, not just the existence of third-party content on its platform.

This ruling is potentially significant as it challenges the broad immunity typically granted to social media platforms under Section 230, particularly when the platform's own algorithmic choices are in question.

3. Content Moderation

As we discussed at length in our prior reports, Florida and Texas recently enacted laws that seek to regulate how social media platforms moderate content, particularly with respect to political speech.

The federal district courts in both Florida and Texas issued preliminary injunctions against enforcement of these state laws, but the injunctions were treated differently on appeal.

The Eleventh Circuit upheld the injunction against Florida's law, reasoning that the restrictions on content moderation trigger First Amendment scrutiny under the principle of "editorial discretion," and concluded that:

  • the content-moderation provisions were unlikely to survive heightened scrutiny; and
  • the individualized-explanation requirements were unduly burdensome and likely to chill platforms' protected speech.

The Fifth Circuit disagreed with the Eleventh Circuit's conclusions and reversed the Texas federal district court’s decision, allowing Texas' law to stand.

These divergent decisions reached the Supreme Court, which in July 2024 remanded the cases to the lower courts without ruling on the merits, while signaling (to some) that these laws may violate the First Amendment. According to the court:

The parties have not briefed the critical issues here, and the record is underdeveloped. So we vacate the decisions below and remand these cases. That will enable the lower courts to consider the scope of the laws’ applications, and weigh the unconstitutional as against the constitutional ones.

In short, the court vacated and remanded the two cases because it determined that neither the Eleventh Circuit nor the Fifth Circuit conducted a proper analysis of the “facial” First Amendment challenges to the Florida and Texas laws regulating large internet platforms.

The Supreme Court's decision underscored the significance of editorial discretion for social media platforms, balancing the need for content moderation with the protection of First Amendment rights. The concurring opinions further addressed the First Amendment challenges posed by the state laws' requirements, ultimately supporting the current injunctions against their enforcement.

In Murthy v. Missouri, the Supreme Court ruled on a case involving the Biden administration's alleged influence on social media platforms regarding content moderation, particularly concerning COVID-19 misinformation. The Court, in a 6-3 decision, found that the plaintiffs (i.e., states) did not have standing to bring their case against government officials. The majority opinion, written by Justice Barrett, emphasized that the link between government pressure and the social media platforms' content moderation policies was too tenuous to establish standing. The decision highlighted that many of the actions taken by social media companies predated government communications and thus could not be directly attributed to government coercion. However, Justice Alito, dissenting, argued that the case should proceed to address significant First Amendment concerns, particularly the potential for government overreach in influencing speech on private platforms.

Subsequently, in Kennedy v. Biden, the district court ruled that Robert F. Kennedy, Jr. did have standing in a case where evidence was presented that, shortly after President Biden assumed office, White House personnel contacted Facebook (now Meta) to have Kennedy’s postings related to Covid-19 removed. The Louisiana court found that Kennedy had standing (unlike the states in Murthy) and that he is “likely to succeed on his claim that suppression of content posted was caused by actions of Government Defendants, and there is a substantial risk that he will suffer similar injury in the near future.”

A different type of “content moderation” issue involves copyright infringement claims for information displayed on the Internet. In Cox v. Sony, large ISPs recently filed an amicus curiae brief before the Supreme Court arguing that ISPs should not be forced to aggressively police copyright infringement on broadband networks. The ISPs are worried about potential financial liability from such claims. Part of their argument is that ISPs act as “common carriers” and should not be liable for such infringements. The ISPs also argue that the Fourth Circuit’s theory, under which an ISP acts culpably whenever it knowingly fails to stop a bad actor from exploiting its service, is contrary to common law, which does not support such boundless liability.

TikTok is in a long-running dispute with the United States and faces being banned in the U.S. if it does not change its ownership. Policymakers are concerned that the company (and its parent) has ties to China that, they claim, present privacy and security risks to U.S. citizens and the nation. TikTok is raising First Amendment and other arguments in federal court.

These rulings and cases underscore the ongoing tension between government regulation and free speech rights, particularly in the digital age where social media platforms play a pivotal role in public discourse.

4. Children and Social Media

Somewhat reminiscent of policymakers’ earlier wrestling with children’s access to cable television programming, the nearly all-consuming use of social media by the public, especially children, has presented new issues. Policymakers and others are concerned with the potential toll inflicted by children’s overuse of social media, a toll that may outweigh any perceived benefits. One recent case was decided in Utah.

In NetChoice v. Reyes and Zoulek et al. v. Haas (combined in the same ruling), the court addressed Utah’s enactment of the Utah Minor Protection in Social Media Act. The Act, scheduled to take effect on October 1, 2024, seeks to protect young Utahns’ mental health and personal privacy by requiring social media platforms to verify users’ ages and impose special restrictions on minors’ accounts. First Amendment and Due Process claims were brought in these two cases by a trade association and by youth organizations and individuals. Judge Shelby ruled that NetChoice is substantially likely to succeed on its claim that the Act violates the First Amendment and granted its request for a preliminary injunction. But the court found the Zoulek plaintiffs had not sufficiently alleged their standing to challenge the Act’s constitutionality and denied their request for a preliminary injunction.

The court concluded that the Act targets speech by requiring social media platforms to verify users' ages and impose restrictions on minors' accounts, amounting to content-based regulation. The court held that defendants (i.e., state) had failed to demonstrate that the Act serves a compelling state interest in a way that justifies the Act’s restrictions.

5. Universal Service Fund

In our previous reports, we have discussed numerous attempts by a group of plaintiffs known as “Consumers’ Research,” via appeals of FCC Universal Service funding decisions in several U.S. Circuit Courts of Appeals, to have the FCC’s Universal Service Fund practices declared unconstitutional under the public non-delegation doctrine and the private non-delegation doctrine. In Consumers’ Research, the Fifth Circuit reviewed the longstanding federal policy of universal service. The FCC’s current approach to Universal Service funding was established by regulations created as a result of the Telecommunications Act of 1996. Although Congress authorized the FCC to define “universal service,” the FCC does not administer all of its universal service programs itself. Instead, most of the programs are administered by a private nonprofit company, the Universal Service Administrative Company (“USAC”). USAC’s sole shareholder is the National Exchange Carrier Association, a private non-profit run by industry representatives.

The Fifth Circuit majority, departing from the prior circuit courts that had rejected Consumers’ Research’s legal claims, ruled:

FCC then subdelegated the taxing power to a private corporation. That private corporation, in turn, relied on for-profit telecommunications companies to determine how much American citizens would be forced to pay for the “universal service” tax that appears on cell phone bills across the Nation. We hold this misbegotten tax violates Article I, § 1 of the Constitution.

The FCC is likely to pursue review at the Supreme Court.

C. Legislative Developments

1. BEAD

The Broadband Equity, Access, and Deployment (BEAD) Program, established under the Infrastructure Investment and Jobs Act (IIJA) of 2021, represents the largest federal investment in broadband infrastructure to date, with a total of $42.45 billion allocated to expanding high-speed internet access across the United States. The program aims to bridge the digital divide by funding planning, infrastructure deployment, and adoption programs in all 50 states, Washington D.C., Puerto Rico, and various U.S. territories.

In 2024, the focus of the BEAD program has shifted towards implementation, with states working on submitting and refining their initial proposals for how they will utilize the allocated funds. The National Telecommunications and Information Administration (NTIA) is responsible for approving these proposals, which outline each state's strategy to provide high-speed internet to unserved and underserved areas. As of now, all eligible entities have submitted their proposals, and the NTIA is in the process of reviewing them to ensure compliance with federal guidelines and goals.

The BEAD program also incorporates significant requirements regarding environmental and historical preservation compliance, as well as "Buy America" provisions, although there have been some waivers to these requirements to ensure timely execution of projects.

Recently, the NTIA issued proposed guidelines on the use of “alternative technologies” under the BEAD program in the hardest-to-reach locations. The NTIA's notice follows arguments from wireless advocates regarding BEAD’s “fiber focus.” NTIA's BEAD program director stated that fiber is still "the gold standard" and that fiber builds remain the "priority broadband projects for BEAD." He added that if fiber is too expensive, the next priority would be other "reliable" technologies, like "coaxial cable or licensed fixed wireless," rather than "alternative" technologies, such as unlicensed fixed wireless access (uFWA) and low-Earth orbit (LEO) satellite.

The BEAD program has also drawn scrutiny from the House Energy and Commerce Committee and its Communications and Technology Subcommittee over the speed of implementation. At a recent hearing, Chairman Latta raised concerns about BEAD implementation, including:

  • First, this program was created outside of regular order, and therefore lacks appropriate provisions to safeguard these taxpayer dollars.
  • There was no discussion of whether $42 billion is the right amount to connect every American or debate on how this program should be administered.
  • The infrastructure bill was also a missed opportunity to enact meaningful permitting reform that would have broken down barriers to deployment and stretched this federal funding further.

Perhaps reflecting the delay caused by this NTIA/state broadband office hybrid approach, no BEAD funds have been spent on a qualifying project since President Biden signed the IIJA in 2021. This has prompted significant criticism from many stakeholders, such as FCC Commissioner Carr.

Commissioner Carr testified before the House Oversight and Accountability Committee on September 19, 2024, stating in part:

It has now been 1,039 days since the $42 billion program was signed into law. After all of that time, not one person has been connected to the Internet with those dollars—not one home, not one business. Indeed, not even one shovel worth of dirt has been turned with those dollars. And it gets worse. The Biden-Harris Administration recently confirmed that no construction projects will even start until sometime next year at the earliest and in many cases not until 2026.

Given the amount of funding involved, the high degree of attention surrounding broadband availability and affordability, as well as political implications, BEAD will be a significant issue to monitor.

2. California AI Law

California has been extremely active with AI-related bills. These include three pieces of legislation regarding “political deepfakes.” AB 2655 requires large online platforms to remove or label deceptive and digitally altered or created content related to elections during specified periods and requires them to provide mechanisms to report such content. AB 2839 expands the timeframe in which a committee or other entity is prohibited from knowingly distributing an advertisement or other election material containing deceptive AI-generated or manipulated content. AB 2355 requires that electoral advertisements using AI-generated or substantially altered content feature a disclosure that the material has been altered.

Governor Gavin Newsom also signed two bills to help actors and performers protect their digital likenesses in audio and visual productions, including those who are deceased. This legislation will help ensure the responsible use of AI and other digital media technologies in entertainment by giving workers more protections. AB 2602 requires contracts to specify the use of AI-generated digital replicas of a performer’s voice or likeness, and the performer must be professionally represented in negotiating the contract. AB 1836 prohibits commercial use of digital replicas of deceased performers in films, TV shows, video games, audiobooks, sound recordings and more, without first obtaining the consent of those performers’ estates.

California Senate Bill 1047 (SB 1047), also known as the "Safe and Secure Innovation for Frontier Artificial Intelligence Models Act," is a legislative proposal that aims to establish stringent safety and security regulations for the development and deployment of advanced AI systems in the state. Highlights from the bill:

  • Scope of Regulation: The bill specifically targets "covered AI models," defined as those requiring significant computational resources for training (greater than 10^26 operations) or models similar in performance to cutting-edge foundation models. This high threshold means that only the most advanced and resource-intensive AI systems are subject to the bill's requirements; a rough illustration of the threshold's scale follows this list.
  • Safety Assessments and Shutdown Capability: Developers of covered AI models must perform rigorous safety assessments before beginning model training to ensure the AI does not pose a public safety risk. The bill mandates that these developers implement the capability to promptly shut down any AI system that fails to meet safety standards.
  • Third-Party Testing and Reporting: Developers are required to allow third-party testing of their AI models and report any safety incidents to the newly created Frontier Model Division within 72 hours.
  • Whistleblower Protections: The bill includes provisions to protect employees who report unsafe practices or violations, encouraging transparency within AI development organizations.
  • Oversight and Enforcement: The bill proposes the establishment of the Frontier Model Division, tasked with overseeing compliance, issuing guidance, and advising on AI safety matters. The division would have the authority to revise the thresholds defining covered models and could recommend civil penalties for non-compliance.
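For a rough sense of scale only, a common rule-of-thumb from the machine-learning literature (not part of the bill) estimates training compute at roughly six operations per model parameter per training token. Under that assumption, the 10^26-operation threshold corresponds to training runs on the order of:

  6 × (10^12 parameters) × (1.7 × 10^13 tokens) ≈ 1.0 × 10^26 operations,

that is, roughly a trillion-parameter model trained on tens of trillions of tokens, which is why only frontier-scale systems would qualify as "covered" models.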

SB 1047 has sparked significant debate, particularly in Silicon Valley. Proponents argue that the bill is necessary to prevent potential harms from powerful AI systems, while opponents, including major tech companies, warn that the bill could stifle innovation, particularly in open-source AI development and among startups, due to the high compliance costs and strict regulations.

On September 29, 2024, Governor Newsom announced that he had vetoed SB 1047. In his veto message, the Governor commented that:

While well-intentioned, SB 1047 does not take into account whether an AI system is deployed in high-risk environments, involves critical decision-making or the use of sensitive data. Instead, the bill applies stringent standards to even the most basic functions - so long as a large system deploys it. I do not believe this is the best approach to protecting the public from real threats posed by the technology.

The Governor noted in his press release that he had signed 17 bills related to AI. He explained that he has asked AI experts to help California develop guardrails for deploying AI, focusing on developing an empirical, science-based analysis of capabilities and risks. The Governor stated that he will continue to work with the legislature on these issues during its next session.

3. Other State AI Legislation

In the 2024 legislative session, at least 40 states, Puerto Rico, the Virgin Islands and Washington, D.C., introduced AI bills, and six states, Puerto Rico and the Virgin Islands adopted resolutions or enacted legislation. Examples of those actions include:

  • Colorado required developers and deployers of high-risk AI systems to use reasonable care to avoid algorithmic discrimination and mandated disclosures to consumers.
  • Florida provided for grants to school districts to implement AI in support of students and teachers.
  • Indiana created an AI task force.
  • Maryland adopted policies and procedures concerning the development, procurement, deployment, use and assessment of systems that employ AI by units of state government.
  • South Dakota clarified that a person is guilty of possessing child pornography if the person knowingly possesses any visual depiction of a minor engaging in a prohibited sexual act, or in a simulation of a prohibited sexual act, or any computer-generated child pornography. A violation of the revised law is a Class 4 felony.
  • Tennessee required the governing boards of public institutions of higher education to promulgate rules and required local education boards and public charter schools to adopt policies, regarding the use of AI by students, teachers, faculty and staff for instructional purposes.
  • Utah created the Artificial Intelligence Policy Act.
  • The Virgin Islands established a real-time, centralized crime data system within the territorial police department.
  • Washington appropriated funds for the city of Seattle to lease space for nonprofit and academic institutions to incubate technology business startups, especially those focusing on AI and develop and teach curricula to skill up workers to use AI as a business resource.
  •  West Virginia created a select committee on AI.

4. Children and Social Media Legislation

At least 30 states and Puerto Rico have pending legislation in 2024. Thirty bills have been enacted, including:

  • Colorado required the Department of Education to create and maintain a resource bank of existing evidence-based, research-based scholarly articles and promising program materials and curricula pertaining to the mental and physical health impacts of social media use by youth, internet safety and cybersecurity.
  • Florida required that a commercial entity that knowingly and intentionally publishes or distributes material harmful to minors on a website or application, if the website or application contains a substantial portion of such material, use either anonymous age verification or standard age verification to verify that a person attempting to access the material is a certain age or older and to prevent access to the material by a person younger than that age.
  • Georgia required the Department of Education to develop and periodically update model programs for educating students regarding online safety and provided for inclusion of parental measures and controls in such technology protection measures.
  • Kentucky required sex offenders who have committed a criminal offense against a victim who is a minor to display their full legal name on social media platforms.
  • Louisiana declared that minors are to be protected in the online environment and that interactive computer services shall be discouraged from contracting with minors without the consent of a legal representative.
  • Maryland required a covered entity, as defined, that offers an online product reasonably likely to be accessed by children to complete a certain data protection impact assessment under certain circumstances; requires certain privacy protections for certain online products; prohibits certain data collection and sharing practices.
  • Minnesota provides compensation for minors appearing in internet content creation.
  • Tennessee created the Protecting Children from Social Media Act; provides that if an individual is a minor, then a social media company must verify the express parental consent for the minor to become an account holder.
  • Utah enacted the Utah Minor Protection in Social Media Act to require social media companies to verify a new account holder's age using an approved system. The bill requires a social media service to enable maximum default privacy settings on a state minor account holder's account, provide supervisory tools and verifiable parental consent mechanisms on a state minor account holder's account, and provide confidentiality protections for the minor's data.
  • Virginia prohibited operators, defined in the bill, of websites, online services, or online or mobile applications from collecting or using the personal data of users they know are younger than the age of 18 without consent and prohibits the sale or disclosure of the personal data of such users.
  • West Virginia required the state Board of Education to, in collaboration with law enforcement agencies and other entities with experience in child online safety issues and human trafficking prevention, develop a Safety While Accessing Technology education program for elementary and secondary school students in the state.

New York recently enacted two laws designed to protect minors: one, called the SAFE for Kids Act, restricting addictive social media feeds for minors, and another protecting the privacy of children. The SAFE Act prohibits social media platforms from sending notifications regarding addictive feeds to minors from 12:00 a.m. to 6:00 a.m. without parental consent.

On September 1, 2024, the Texas Securing Children Online Through Parental Empowerment Act went into effect. The Texas legislature passed H.B. 18 last year to restrict children from seeing harmful material on the internet, such as content promoting self-harm or substance abuse, while also giving parents more power to regulate what their child does online.

Gov. Newsom recently signed legislation that directs school districts in California to draft and implement policies that will limit students' use of cell phones during the school day. A.B. 3216, the Phone-Free School Act, requires every school district, charter school and county office of education to form their own set of guidelines by July 1, 2026.

5. State Data Privacy Laws

Multiple states have introduced or strengthened comprehensive data privacy laws. States like Texas, Florida, and Oregon, among others, passed new data privacy regulations, each with unique applicability thresholds and requirements. These laws often target large companies processing significant amounts of personal data and include specific provisions related to the sale of personal data and targeted advertising.

As indicated, New York passed a bill to protect children’s online personal data. The privacy legislation prohibits online sites and connected devices from collecting, using, sharing or selling personal data of anyone under the age of 18, unless they receive informed consent or unless doing so is strictly necessary for the purpose of the website.

Conclusion

The above report clearly demonstrates that the legal arena of communications remains interesting, vibrant, and challenging. Old issues like universal service are being revisited anew, while new issues such as how to handle AI present fresh puzzles to solve. All of these issues impact the daily lives of everyone and make this a rewarding area of legal practice.
