
Big Tech has pledged to help prevent deceptive AI content from interfering with global elections in 2024, a year noteworthy for democratic contests involving some 40 elections and four billion people. Twenty companies signed the Tech Accord agreed to at the recently concluded Munich Security Conference, including big names in the AI world such as OpenAI, Anthropic, Google, IBM and Meta. Reaction among lawmakers, particularly in the U.S., was muted; given the document's voluntary nature, they collectively seemed to view it as so much toilet paper.

“The new Tech Accord is a welcomed step by some of the most prominent technology companies in the world and it is a clear signal they understand that powerful AI tools have the potential to cause real and potential irreparable harm,” said Congresswoman Yvette D. Clarke (D-NY), who has emerged as a leading proponent of AI legislation. “Unfortunately, these are only voluntary commitments with no meaningful, enforceable safeguards to ensure the safe and responsible use of AI tools. That’s why world governments, particularly the United States, must act quickly.”

Similarly, Senator Mark Warner (D-VA) stated that while the Tech Accord was a constructive step in the right direction, he “will continue to hold the industry accountable.” Senator Brian Schatz (D-HI) said legislation is urgently needed to regulate and label AI-generated content, no matter the context. In Europe, cybersecurity agencies are taking steps to prevent AI disruption of the European Parliament elections in June, and the EU’s tough, newly enacted Digital Services Act is designed to keep Europeans safe from online disinformation, illegal content and “dark pattern” web designs that nudge people into unwittingly clicking on online content.

The “Tech Accord to Combat Deceptive Use of AI in 2024 Elections” specified eight commitments:

- Developing and implementing technology to mitigate risks related to deceptive AI election content, including open-source tools where appropriate

- Assessing models in scope of this accord to understand the risks they may present regarding deceptive AI election content

- Seeking to detect the distribution of this content on their platforms

- Seeking to appropriately address this content detected on their platforms

- Fostering cross-industry resilience to deceptive AI election content

- Providing transparency to the public regarding how the company is addressing it

- Continuing to engage with a diverse set of global civil society organizations and academics

- Supporting efforts to foster public awareness, media literacy and all-of-society resilience

Of the eight points, the transparency commitment may prove the most difficult to fulfill, as it is often unclear how collected data is used (in model training, for example) or where it originated; experts cite chat sessions as especially problematic because of the privacy issues they raise. Accord signatories also included Adobe, Amazon, Arm, ElevenLabs, Inflection AI, LinkedIn, McAfee, Nota, Snap Inc., Stability AI, TikTok, Trend Micro, Truepic and X.

Another challenge is building AI media literacy at a time when it is becoming increasingly difficult to distinguish AI content from reality. “Transparency builds trust. We need to invest in media literacy campaigns to ensure people know they can’t trust everything they see and hear online and that there are tools out there to help them understand what’s true,” said Dana Rao, general counsel and chief trust officer at Adobe, which sounds rather like Dr. Frankenstein issuing an advisory on how to avoid his monster.

In practical terms, the Tech Accord seemingly needs to be backdated, as AI is already figuring in elections around the world. In Pakistan, for example, an AI-generated video of a candidate encouraging people to vote circulated even as the candidate languished in prison. In the U.S., the Federal Communications Commission has declared AI-generated voices in robocalls illegal following the cloning of President Biden’s voice ahead of the New Hampshire primary. In a related move, the Federal Trade Commission is issuing a new rule designed to protect businesses and individuals from scams run by AI impersonators.

The effectiveness of bad actors mounting influence campaigns is illustrated by a reported operation attributed to Iranian hackers, in which a fake AI news report on events in Gaza, complete with an AI-generated presenter, interrupted the streaming news services of the BBC and other news organizations in the UAE and Canada. Microsoft noted “it was the first Iranian influence operation Microsoft has detected where AI played a key component in its messaging.” The hack was attributed to an Iranian group called Cotton Sandstorm and particularly troubled experts, who were alarmed by the group’s ability to place its AI-generated “agit-prop” exactly where it would be seen.

Microsoft and OpenAI, meanwhile, note that malicious use of generative AI is on the rise among state-affiliated hackers, who are mostly engaged in querying open-source information, translating, finding coding errors and running basic coding tasks. Among the disrupted groups, all bearing graphic-novel-style monikers, were two Chinese-affiliated groups known as Charcoal Typhoon and Salmon Typhoon, the Iranian group Crimson Sandstorm, North Korea’s Emerald Sleet and a Russian group called Forest Blizzard.

There’s little doubt that deceptive AI is poised to play an unprecedented role in 2024 elections. “Disinformation campaigns are not new,” notes Christina Montgomery, vice president and chief privacy and trust officer at IBM, “but in this exceptional year of elections, concrete, cooperative measures are needed to protect people and societies from the amplified risks of AI-generated deceptive content.”

That so many companies now have “trust officers” should be enough to put people on their guard.
