Anthropic Unicorn News - September 18, 2024
California Governor Gavin Newsom has signed new laws to regulate AI misuse, including measures against political deepfakes and unauthorized digital replication of Hollywood actors. These laws aim to balance innovation with accountability in the AI sector.
Introduction
California Implements New Laws to Combat AI Misuse and Protect Digital Integrity
Advancements in AI Tools and Regulations for Effective Communication

California Governor Gavin Newsom has enacted several new laws to regulate the misuse of AI technologies. Key legislation includes AB 2655, which requires large online platforms such as Facebook, X, and TikTok to remove or label materially deceptive content related to state elections, and AB 2355, which requires political advertisements that use AI-generated content to be labeled for transparency. In addition, AB 2602 and AB 1836 protect actors and deceased personalities from unauthorized digital replication, safeguarding their rights and likenesses.

Governor Newsom also expressed concerns about SB 1047, a bill that would require AI companies to implement safety measures or face legal liability for significant harm caused by their technologies, warning that it could hinder innovation and affect California's competitiveness in the AI sector.

In a related development, companies including Anthropic, Adobe, Cohere, Microsoft, and OpenAI have committed to responsibly sourcing their datasets to prevent image-based sexual abuse. These companies will incorporate feedback loops and iterative stress-testing strategies to ensure their AI models do not produce harmful content, and will remove nude images from AI training datasets when appropriate. These actions are part of voluntary measures announced by the Biden-Harris administration to combat non-consensual intimate images and sexually explicit material involving children.