
Days after Vice President Kamala Harris announced her candidacy for president, a video created by artificial intelligence went viral.
“I … am your Democratic presidential candidate because Joe Biden finally exposed himself as a senile delinquent in the debate,” a voice that sounds like Harris says in a fake audio track used to modify one of her campaign ads. “I was chosen because I am the ultimate diversity hire.”
Billionaire Elon Musk, who has backed Harris’ Republican opponent, former President Trump, shared the video on X, where his post received 136 million views. Two days later he clarified that the video was a parody; that follow-up post has drawn 26 million views.
For Democrats, including California Gov. Gavin Newsom, the incident was no laughing matter. It sparked calls for greater regulation of AI-generated videos carrying political messages and renewed debate about the appropriate role of government in curbing emerging technologies.
California lawmakers on Friday gave final approval to a bill that would ban the distribution of deceptive campaign ads, or “election communications,” within 120 days of an election. Assembly Bill 2839 targets manipulated content that would damage a candidate’s reputation or electoral prospects, or undermine confidence in an election’s outcome. The bill is designed to address videos like the Harris parody shared by Musk, though it makes exceptions for content labeled as parody or satire.
“We are watching California head into its first-ever election during which disinformation driven by generative AI will pollute our information ecosystem like never before, and millions of voters will not know which images, audio or video they can trust,” said Assemblymember Gail Pellerin (D-Santa Cruz). “So we have to do something.”
Newsom has said he will sign the bill, which would take effect immediately, in time for the November election.
The bill updates a California law that bars people from distributing deceptive audio or video media intended to discredit a candidate or deceive voters within 60 days of an election. State lawmakers said the law needed to be strengthened after digitally altered videos and photos, known as deepfakes, were posted in large numbers on social media during the election cycle.
The use of deepfakes to spread disinformation has worried lawmakers and regulators in past election cycles. Those concerns intensified after the release of new generative AI tools that can quickly produce convincing images, audio and video. From fake robocalls to bogus celebrity endorsements of candidates, AI-generated content is testing tech platforms and lawmakers alike.
Under AB 2839, candidates, election boards or election officials can seek court orders to take down deepfake content. They can also sue people who disseminate or repost deceptive material for damages.
The legislation also applies to deceptive media published within 60 days of an election, including content that falsely describes voting machines, ballots, polling locations or other election-related property in a way that could undermine confidence in the results of the election.
It does not apply to satire or parody that is labeled as such, or to broadcast stations that inform viewers that what is depicted does not accurately represent a speech or event.
Tech industry groups oppose AB 2839, along with other bills that would target online platforms for failing to properly moderate deceptive election content or label AI-generated content.
“It will result in the suppression and obstruction of constitutionally protected free speech,” said Carl Szabo, vice president and general counsel of NetChoice, whose members include Google, X and Snap, as well as Facebook parent Meta and other tech giants.
Online platforms have their own rules on manipulated media and political advertising, but their policies can vary.
Unlike Meta and X, TikTok does not allow political ads and has said it may remove even labeled AI-generated content if it depicts a public figure such as a celebrity “when used for political or commercial endorsements.” Truth Social, the platform created by Trump, does not address manipulated media in its rules about prohibited content.
Federal and state regulators have begun cracking down on AI-generated content.
In May, the FCC proposed a $6-million fine against Steve Kramer, a Democratic political consultant who used artificial intelligence to make robocalls imitating President Biden’s voice. The fake calls discouraged people from voting in New Hampshire’s Democratic presidential primary in January. Kramer, who told NBC News he orchestrated the calls to draw attention to the dangers of AI in politics, also faces criminal charges of felony voter suppression and misdemeanor impersonation of a candidate.
Szabo said existing laws are adequate to address concerns about election deepfakes. NetChoice has sued states to block some laws designed to protect children on social media, alleging the measures violate free speech protections under the First Amendment.
“Just having a new law is not going to stop bad behavior,” Szabo said. “You actually need to enforce the law.”
More than two dozen states, including Washington, Arizona and Oregon, have enacted, passed or are developing legislation to regulate deepfakes, according to the consumer advocacy nonprofit Public Citizen.
In 2019, California enacted a law aimed at combating manipulated media after a video that made House Speaker Nancy Pelosi appear drunk went viral on social media. Enforcing that law has proved a challenge.
“We really had to tone it down,” said Assemblymember Marc Berman (D-Menlo Park), who wrote the bill. “It’s brought a lot of attention to the potential risks of this technology, but I’m afraid that at the end of the day it’s really not going to be very useful.”
Danielle Citron, a professor at the University of Virginia School of Law, said political candidates might not pursue legal action, choosing instead to debunk deepfake videos or simply ignore them to limit their spread. By the time a case works its way through the legal system, the content may have already gone viral.
“These laws are important because they send a message. They teach us something,” she said, adding that they put people who share deepfakes on notice that doing so carries a cost.
This year, lawmakers worked with the California Initiative for Technology and Democracy, a project of the nonprofit California Common Cause, to develop several bills addressing political deepfakes.
Some of the bills target online platforms, which under federal law are generally not liable for content posted by users.
Berman introduced AB 2655, a bill that would require online platforms with at least 1 million California users to remove or label certain deceptive election-related content within 120 days of an election. Platforms would have to act within 72 hours of a user reporting a post. Under the bill, which passed the Legislature on Wednesday, platforms would also need to develop procedures for identifying, removing and labeling false content. It would not apply to parody or satire, or to news outlets that meet certain requirements.
Another bill, AB 3211, co-authored by Assemblymember Buffy Wicks (D-Oakland), would require online platforms to label AI-generated content. NetChoice and another industry group, TechNet, opposed the bill, while ChatGPT maker OpenAI supported it, Reuters reported.
Neither bill, however, would take effect until after the election, underscoring how hard it is for new laws to keep pace with rapidly advancing technology.
“Part of my hope in introducing this bill is that it will draw attention to this and hopefully it will put pressure on the social media platforms to take immediate action,” Berman said.