Salesforce this week announced it is adding integration with the large language models (LLMs) that Amazon Web Services (AWS) makes available via its managed Amazon Bedrock service.

In an announcement made at the AWS re:Invent 2023 conference, Salesforce said it is extending the Einstein Trust Layer, which it developed to enable its software-as-a-service (SaaS) applications to access artificial intelligence (AI) models, to the application programming interface (API) that AWS provides for accessing both its own Titan LLMs and third-party LLMs from providers such as AI21 Labs, Anthropic and Stability AI.
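The API in question is Amazon Bedrock's model-invocation interface, which addresses each foundation model by an identifier and accepts a model-specific JSON payload. As a rough illustration only (the helper function below is hypothetical, and payload shapes should be checked against current Bedrock documentation), a caller might build request bodies like this:

```python
import json

# Hypothetical helper: build the JSON request payload for two Amazon
# Bedrock model families. The payload shapes follow Bedrock's published
# conventions for Titan text models and Anthropic's legacy completion
# format, but verify against the current Bedrock docs before relying on them.
def build_bedrock_body(model_id: str, prompt: str, max_tokens: int = 256) -> str:
    if model_id.startswith("amazon.titan"):
        # Amazon Titan text models take an inputText field plus a config block.
        return json.dumps({
            "inputText": prompt,
            "textGenerationConfig": {"maxTokenCount": max_tokens},
        })
    if model_id.startswith("anthropic."):
        # Anthropic Claude models (legacy text-completion request format).
        return json.dumps({
            "prompt": f"\n\nHuman: {prompt}\n\nAssistant:",
            "max_tokens_to_sample": max_tokens,
        })
    raise ValueError(f"unsupported model family: {model_id}")

# Actually invoking a model requires AWS credentials and the
# bedrock-runtime client; sketched here for illustration only:
#
# import boto3
# client = boto3.client("bedrock-runtime")
# response = client.invoke_model(
#     modelId="amazon.titan-text-express-v1",
#     body=build_bedrock_body("amazon.titan-text-express-v1", "Hello"),
# )
```

The point of a common invocation API is that swapping providers changes the model ID and payload, not the surrounding application code, which is what makes a trust layer in front of it practical.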

In addition, Salesforce will expand its use of AWS, including compute, storage, data and AI technologies, to enable organizations to deploy Salesforce SaaS applications on public clouds using the Hyperforce framework the company has developed. Salesforce is also now making its SaaS applications available via the AWS Marketplace.

Finally, Salesforce is extending its Data Cloud platform to add support for data sharing across folders, objects and files stored in the Amazon Simple Storage Service (Amazon S3) in a way that can be centrally managed.

The overall goal is to provide Salesforce customers with access to a range of LLMs running on multiple clouds in a way that ensures accurate generative AI results, using tools that Salesforce has created to identify hallucinations and toxicity in AI models, says Patrick Stokes, executive vice president for product and industries marketing at Salesforce.

The Einstein Trust Layer also ensures that any Salesforce data used to extend an LLM is not retained by that model, adds Stokes.

Salesforce is moving toward providing a hybrid cloud computing environment that, in addition to enabling organizations to deploy its applications on multiple clouds, including data centers managed by Salesforce, also provides access to multiple LLMs. The goal is to ensure maximum flexibility as organizations increasingly employ multiple cloud computing environments, notes Stokes.

In general, organizations are starting to make a distinction between low-risk applications that invoke general-purpose LLMs that might hallucinate and mission-critical applications that need any output from an LLM to be verified. The Einstein Trust Layer ensures that results generated by an LLM that has been exposed to Salesforce data are as accurate as possible, says Stokes.

Rather than building its own foundational LLMs, Salesforce is opting to leverage multiple foundational LLMs that its customers can customize and extend. It is still early days for the use of generative AI within business processes, but as the technology continues to evolve, organizations will need to pivot frequently as additional LLM advances are made.

Some organizations will also decide to build their own LLMs if they decide they need to maintain total control of their data, notes Stokes.

The Salesforce approach will enable organizations to mix and match LLMs that are optimized for various use cases. “Having a choice of foundational models is going to be critical,” says Stokes.

In effect, Salesforce has created its own orchestration framework for LLMs that organizations standardizing on its SaaS applications can employ to centrally manage LLM access. It is not clear how many other SaaS application providers will follow suit, but it is apparent that no matter how powerful any given LLM might be, LLMs are also becoming increasingly disposable.
