
California Gov. Gavin Newsom signed three new laws Wednesday requiring social media companies to limit the flow of election-related deepfakes, offering the rest of the nation a blueprint for AI regulation as the 2024 presidential election nears.

Only one of the laws is in effect for the election Nov. 5. It bars the creation and publishing of such deepfakes in the 120 days before Election Day and for 60 days after — a strong indication that lawmakers are concerned about the spread of misinformation as votes are counted.

“With fewer than 50 days until the general election, there is an urgent need to protect against misleading, digitally-altered content that can interfere with the election,” state Assemblymember Gail Pellerin, author of the law banning election deepfakes, said in a statement. “California is taking a stand against the manipulative use of deepfake technology to deceive voters.”

Another law, the nation’s first of its kind, mandates that large social media platforms remove deceptive material. Newsom signed a third bill, the “Defending Democracy from Deepfake Deception Act,” which forces social media platforms and other sites with more than 1 million users in California to label or remove deepfakes within 72 hours of a complaint. Both laws take effect in January.

But perhaps the most anticipated AI bill, which would require tech companies to test the safety of powerful AI tools before making them publicly available, still awaits Newsom’s signature or veto by Sept. 30. It is heavily opposed by Silicon Valley and many of its congressional representatives.

The flurry of legislation in California on Wednesday comes as no national law exists to oversee the spread of powerful AI technology, which grows smarter and begins to approximate human reasoning, giving rise to fears that it will displace wide swaths of jobs as well as hyper-accelerate misinformation and cybercrime.

“As technology evolves, so should our legal frameworks to ensure innovation enhances rather than undermines our democratic processes,” Alon Yamin, co-founder and chief executive of Copyleaks, an AI-based text analysis platform, said in an email. “Yet, even with these laws in place, the risk of loopholes and misuse of AI technology remains, underscoring the need for available detection tools to the general public that help provide insight into what is AI-generated and what isn’t so that they can have peace of mind and the opportunity for complete transparency.”

X owner Elon Musk has stoked fears with a torrent of deepfakes on the platform formerly known as Twitter, including fake audio of Democratic presidential candidate Kamala Harris calling herself the “ultimate diversity hire.”

As fears mount in the weeks leading up to the presidential election, California for now is blazing an early trail for responsible AI use.

“Safeguarding the integrity of elections is essential to democracy, and it’s critical that we ensure AI is not deployed to undermine the public’s trust through disinformation — especially in today’s fraught political climate,” Newsom said in a statement.
