At just 15 years old, Sneha Revanur, a resident of San Jose, California, organized a grass-roots effort against a 2020 statewide ballot measure that would have allowed judges to use algorithms to determine whether defendants should be granted release before their court dates or required to post bail. Supporters of the proposal argued that it was a step toward reforming a broken cash bail system, but opponents like Ms. Revanur warned that it could entrench systemic biases hidden within the algorithms.
The Evergreen Valley High School sophomore rallied some of her classmates, along with students from other schools in her community, and together they launched Encode Justice, intended at the time to be a single-issue campaign to help defeat the ballot measure. But after the measure was defeated, the movement continued as an organization advocating for safe and equitable AI.
Gen Zers are challenging the notion that their generation is defined solely by its consumption of technology. They are driving forces in conversations about AI governance, participating in AI conferences, roundtables and hearings at the state, federal and even international levels.
Addressing Harmful AI
Encode’s executive team of roughly ten people are all in college or recently graduated. The bulk of their work is direct lobbying for bills, guided by a belief that the present and potential harms posed by AI should be addressed now, in order to better protect society. Encode also has hundreds of volunteers in the U.S. and abroad who help with educational campaigns.
“Whether we’re discussing the more immediate impacts of AI including the scourge of revenge porn in schools, or the potentially catastrophic but still unrealized threats that AI poses, it’s our generation that is going to bear the consequences and experience the greatest impacts,” said Ms. Revanur, now a student at Stanford University, in an interview with Techstrong. “It’s really important for us to engage in this conversation as early as possible and for us to build political will to pressure policymakers to pass guardrails now.”
She said that guardrails are critical given the spectrum of AI risks, from potential labor displacement to cyberattacks to the integration of AI into military systems without appropriate human guidance or control. On December 23, 2024, President Joe Biden signed into law the 2025 National Defense Authorization Act, which includes Section 1638, this country’s first policy governing the use of AI in nuclear command, control and communications. Encode says it played an important role in helping to develop key aspects of the provision.
“Until today, there were zero laws governing AI use in nuclear weapons systems,” said Sunny Gandhi, Vice President of Political Affairs at Encode. “This policy marks a turning point in how the U.S. integrates AI into our nation’s most strategic asset.”
Encode has worked closely with the U.S. Department of Commerce’s National Institute of Standards and Technology (NIST) on codifying the AI Safety Institute, established under NIST to set standards for AI developers, said Adam Billen, Encode’s Vice President of Public Policy, in an interview with Techstrong. “And just this past year we worked on Senate Bill 1047 (Safe and Secure Innovation for Frontier Artificial Intelligence Models Act) in California, which would require the largest AI developers, companies like Meta, and the OpenAIs of the world, to have safety and security protocols, and some whistleblower protections and a number of other things,” Mr. Billen said.
Take It Down Act
Encode also has been busy advocating for the Take It Down Act and the Defiance Act, both of which focus on strengthening penalties against deepfake porn. Both acts were recently passed by the U.S. Senate and are currently being considered in the U.S. House. Mr. Billen said that 15% of high school students nationwide know of a fellow student in their school who has been a victim of deepfake porn. “It is something that has the potential to get out of control quickly,” he said.
In a 2023 survey by Common Sense Media and Hopelab, of 1,274 people ages 14-22, 41% of respondents believed that generative AI was likely to have both positive and negative impacts on their lives through 2033. “Those who expect mostly positive personal impacts from the future of generative AI describe how broader access to information will help with school, work and their wider community; enhance creativity; and foster opportunities for human advancement. Young people anticipating mostly negative personal impacts highlight concerns about the future of generative AI related to the loss of jobs, AI taking over the world, intellectual property theft, misinformation/disinformation, and privacy.”
Yuna Yang, the Deputy Director of the Information and Communication Technology (ICT) Policy Division at the Ministry of Science and ICT for the Republic of Korea, said during the AI for Good Global Summit this past summer in Geneva, Switzerland, that youth should play a critical role in the development and governance of AI.
“The changes for the future, and the future of technology, is our future, so for this reason, youth should be included in the process of decision-making, but however, most of the time, we are not part of the decision-making, and our voices are under-represented,” Ms. Yang said. “In South Korea, for example, we have tried to change our process…” She said the government has set up an online platform to collect comments from young people regarding AI.
Copyright related to AI is a big concern for young people with the rise of generative AI, she said. With many people using generative AI to create artwork, there is concern about whether those works can receive copyright protection. “It’s not only a problem for South Korea, it’s a problem for countries all around the world.”