Generative AI was the new cool kid on the block at the annual high-profile World Economic Forum (WEF) in the Alpine village of Davos, Switzerland, where AI was greeted with awe tempered by anxiety.

The consensus at the yearly conclave of world political and economic leaders (January 15-19) was that AI is arriving with Borg-like inevitability. “If you embrace artificial intelligence, you will be complete. If you do not and you’re late, you’ll be finished. And if you reject it altogether, you will be completely finished,” bluntly stated Omar Sultan Al Olama, United Arab Emirates Minister of State for Artificial Intelligence.

For many, AI is a beacon of hope. “AI can solve really hard, aspirational problems people maybe are not capable of solving,” said Daphne Koller, founder and CEO of Insitro Inc, emphasizing AI’s potential role in health, agriculture and climate change at one of the multitude of sessions devoted to AI.

And while AI is a friend with benefits in the eyes of many, not everyone is convinced that AI is a reliable partner. “The jury is still out on how and in what manner AI can change the direction of progress or sustainable development goals,” said Amandeep Singh Gill, the United Nations Secretary-General Envoy on Technology.

While Davos attendees expect AI to have a widespread impact akin to that of mobile phones, worries about unintended consequences are percolating alongside the generally positive vibes. Foremost among them is AI-induced misinformation in a year that is seeing as many as 50 key elections across the globe. While the environmental impact of climate change was number one on the list of the WEF’s Global Risks Report, the use of voice cloning, deepfakes and other AI techniques to create misinformation that erodes democracy and polarizes society was a close second.

Generative AI models like ChatGPT mean that content designed to manipulate people can now be created without any specialized skills. For its part, ChatGPT maker OpenAI now plans to better police users who try to use ChatGPT maliciously, and says it will watermark images created with its DALL-E image generator to make their origins easier to trace. And while others are adopting AI tags as well, the proliferation of generative AI models will make it a challenge to consistently catch bad actors seeking to influence elections. Without voluntary self-regulation, legislation governing AI is likely, and may be beneficial in establishing a layer of trust in AI among the general public. More than a third of U.S. states have passed or introduced laws banning the use of deepfakes in political campaigning.

AI regulation received an endorsement from Satya Nadella, CEO of Microsoft and prime backer of OpenAI. “Regulation that allows us to ensure that the broad societal benefits are amplified and the unintended consequences are dampened is going to be the way forward,” Nadella told a Davos audience excited by AI’s potential but also wary of adverse outcomes.

“We have to take the unintended consequences of any new technology along with the benefits as opposed to waiting for the unintended consequences to show up and then address them,” said Nadella, favoring a proactive approach while at the same time raising the profile of the “unintended consequences” discussion. Regulations might target specific AI applications, much as they do a new medical device, Nadella added.

The Microsoft CEO is in lockstep with the International Monetary Fund (IMF), which believes a regulatory framework for AI is critical. The IMF goes a step further, recommending a comprehensive social safety net because “ensuring social cohesion is paramount.” Whether legislators can keep up is an open question: the speed and effectiveness of AI regulation “is unlikely to match the pace of development,” notes the WEF report.

Elections aren’t the only area vulnerable to AI misinformation; the Global Risks Report cites stock market manipulation and conflict escalation as two other examples.

“In the next two years, a wide set of actors will capitalize on the boom in synthetic content amplifying societal divisions, ideological violence and political repression—ramifications that will persist far beyond the short term,” reads the report.

Adding to that concern were the results of a survey released at Davos by the giant PR firm Edelman, which concluded that innovation in Western democracies is being managed badly and is increasing polarization, with people holding right-wing beliefs more likely to resist innovation. “Trump’s ghost stalks Davos,” as a Politico headline put it.

Longer term, generative AI’s impact on jobs, inequality and “social cohesion” is a top worry highlighted by an IMF report issued as the WEF meeting began. “Roughly half the exposed jobs may benefit from AI integration,” says the IMF, while acknowledging the difficulty of forecasting. “For the other half, AI applications may execute key tasks performed by humans, which could lower labor demand, leading to lower wages and reduced hiring. In the most extreme cases, some of these jobs may disappear.”

Those extreme cases may become more common than the IMF anticipates. A recent Mercer survey found that nearly a third of chief executives and chief financial officers were using AI to redesign work and reduce their dependence on people.

Conversely, a large number of CEOs worry that AI could spell the end of their own companies. In a survey released at Davos by the large consulting firm PwC, 45 percent of executives said their own businesses wouldn’t be viable in 10 years without radical reinvention. A lack of AI skills among employees is a key challenge, pointing toward the need for AI training programs.

“Whether it is accelerating the rollout of generative AI or building their businesses to address the challenges and opportunities of the climate transition, this is a year of transformation,” said Bob Moritz, global chairman of PwC, formerly known as PricewaterhouseCoopers.