AI news

At its Trailblazer DX conference, Salesforce made Einstein 1 Studio generally available, a tool that uses generative artificial intelligence (AI) to write code within a low-code application development platform.

Einstein 1 Studio is designed to enable developers to customize Einstein Copilot, which uses the Salesforce Einstein 1 Data Cloud to expose large language models (LLMs) that generate outputs based on vetted data.

Using a Copilot Builder tool, available in beta, developers can create their own reusable actions. They can also create custom prompts and use a model builder tool to import AI models from providers such as Anthropic, Microsoft, Cohere, Databricks and Google. Support for fine-tuning LLMs available through Amazon Bedrock, Google Vertex AI and OpenAI will arrive later this year.

Einstein Copilot provides developers using Einstein 1 Studio with access to an underlying reasoning engine that interprets intent from the prompt, selects the most suitable actions and then executes them. In addition, the audit trail and feedback data collected from AI prompts and responses can be stored in Salesforce Data Cloud, where it can generate alerts to further automate workflows using other Salesforce tools and platforms.
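That pipeline, interpret intent from a prompt, pick a registered action, execute it, and log the exchange for auditing, can be illustrated with a minimal sketch. All of the names below (`ActionRegistry`, `register`, `handle`) are hypothetical and invented for illustration; this is not Salesforce's actual API, and the keyword-matching "intent interpretation" stands in for what an LLM-backed reasoning engine would do.

```python
# Illustrative sketch of a copilot-style reasoning loop (hypothetical names,
# not Salesforce's API): interpret intent, select a reusable action,
# execute it, and keep an audit trail of prompts and responses.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class ActionRegistry:
    actions: dict = field(default_factory=dict)      # intent keyword -> action
    audit_trail: list = field(default_factory=list)  # logged prompt/response pairs

    def register(self, intent: str, action: Callable[[str], str]) -> None:
        # A "reusable action" here is just a callable keyed by an intent keyword.
        self.actions[intent] = action

    def handle(self, prompt: str) -> str:
        # Naive intent interpretation: first registered keyword found in the
        # prompt. A real reasoning engine would use an LLM for this step.
        intent = next((k for k in self.actions if k in prompt.lower()), None)
        result = self.actions[intent](prompt) if intent else "No matching action"
        # Every prompt and response is recorded, mirroring the audit-trail idea.
        self.audit_trail.append(
            {"prompt": prompt, "intent": intent, "response": result}
        )
        return result

registry = ActionRegistry()
registry.register("refund", lambda p: "Refund case created")
print(registry.handle("Please process a refund for order 123"))  # Refund case created
print(len(registry.audit_trail))  # 1
```

Storing the audit entries externally (as the article describes with Data Cloud) would then let downstream tools trigger alerts or further automation off that log.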

The end result is a development platform that integrates the user interface, a variety of AI models and data, with privacy, data security and compliance protocols maintained via an Einstein Trust Layer that Salesforce applies to any LLM integrated with its platform.

The ultimate goal is to make the development of AI applications accessible to a much wider range of developers in a way that organizations trust, says Clara Shih, CEO of Salesforce AI. “It’s all on a single metadata-driven platform,” she says.

That’s become a pressing issue: a survey of 600 IT professionals conducted by Salesforce finds 88% reporting they’re unable to support the deluge of AI-related requests they receive at their organization. More than two-thirds (68%) said their leadership expects them to be experts in generative AI, and 90% noted generative AI has forced them to re-evaluate their technology strategy.

A full 86% said their job has become more important since the introduction of generative AI.

However, nearly half (48%) admit they struggle to find a balance between speed, business value and security when implementing new technology. Additionally, almost one-third (31%) said they lack the time to implement and train AI models and algorithms.

Nearly half (48%) are also concerned their organization’s security infrastructure can’t keep up with the demand for innovation, while 45% worry their data management infrastructure won’t keep pace either.

It’s not clear to what degree organizations will rely on low-code or no-code tools to embed generative AI capabilities into workflows versus simply relying on a chat interface. Regardless of approach, the one certainty is that it’s only a matter of time before generative AI is pervasively embedded across almost every workflow. The challenge now is determining how best to achieve that goal, given that potential use cases already exceed the available expertise.