Amazon Web Services (AWS) has added a bevy of artificial intelligence (AI) tools to its portfolio, including the ability to craft applications from the prompts an end user creates, along with memory retention and code interpretation capabilities that make it simpler to dynamically combine prompts to automate a workflow.

Announced at an AWS Summit New York event, the latest additions to the AWS portfolio are designed to make AI capabilities more accessible to end users, developers and IT operations teams. For example, Amazon Q Apps can be spun up via natural language in minutes.

These capabilities are part of an ongoing effort to democratize AI, says Paul Nashawaty, principal analyst for application development at The Futurum Group. “AI to business transformation was a major theme,” he says.

In fact, Dr. Matt Wood, vice president for AI products at AWS, told conference attendees that in many cases it will now be faster for end users to generate their own Amazon Q applications than to search a repository for an application that provides a similar set of capabilities and then learn how to use it. “Q Apps are a form of personalized software,” he says.

AWS is also extending the reach of Amazon Q Developer, an AI agent it built to streamline application development workflows, to SageMaker Studio, the tool it provides to make it simpler to build AI models.

At the same time, professional developers working in an integrated development environment (IDE) can now customize Amazon Q Developer using their private code base, enabling it to surface more contextual recommendations for improving code.

AWS is also now previewing a low-code tool that leverages generative AI to make it simpler for IT teams to build applications.

Additionally, AWS is extending Knowledge Bases for Amazon Bedrock with connectors for Confluence, Salesforce, SharePoint and web domains that make it simpler to invoke retrieval-augmented generation (RAG) techniques to customize large language models. AWS also announced that vector search for Amazon MemoryDB is now generally available.
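For teams wiring one of those connectors into a RAG flow, the call pattern looks roughly like the sketch below, which uses boto3’s RetrieveAndGenerate API against a Bedrock knowledge base. The knowledge base ID, model ARN, region and query are placeholder assumptions, not values from the announcement.

```python
import boto3

# Runtime client for querying a Bedrock knowledge base.
client = boto3.client("bedrock-agent-runtime", region_name="us-east-1")

# Placeholder values; substitute a real knowledge base ID and model ARN.
KB_ID = "EXAMPLEKB123"
MODEL_ARN = (
    "arn:aws:bedrock:us-east-1::foundation-model/"
    "anthropic.claude-3-sonnet-20240229-v1:0"
)

response = client.retrieve_and_generate(
    input={"text": "What is our current refund policy?"},
    retrieveAndGenerateConfiguration={
        "type": "KNOWLEDGE_BASE",
        "knowledgeBaseConfiguration": {
            "knowledgeBaseId": KB_ID,
            "modelArn": MODEL_ARN,
        },
    },
)

# The answer is generated from chunks retrieved out of the connected
# sources (Confluence, Salesforce, SharePoint, web pages and so on).
print(response["output"]["text"])
```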

AWS is also adding Guardrails for Amazon Bedrock, a tool that exposes an application programming interface (API) to detect hallucinations and safeguard applications built using an AI model.
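As an illustration of what calling such an API could look like, the sketch below uses the standalone ApplyGuardrail action in boto3 to screen a model output before it reaches a user; the guardrail identifier and version are hypothetical placeholders.

```python
import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-1")

# Guardrail identifier and version below are hypothetical placeholders.
result = client.apply_guardrail(
    guardrailIdentifier="gr-example123",
    guardrailVersion="1",
    source="OUTPUT",  # screen a model response; use "INPUT" for prompts
    content=[{"text": {"text": "Draft model answer to validate."}}],
)

if result["action"] == "GUARDRAIL_INTERVENED":
    # The guardrail blocked or rewrote the content.
    print(result["outputs"][0]["text"])
else:
    print("Content passed the configured guardrail policies.")
```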

Finally, AWS also touted the progress it is making with Anthropic, a provider of large language models that can now be customized when accessed via the Amazon Bedrock service. The overall goal is to reduce hallucinations by exposing more relevant data to a large language model (LLM) so it generates more accurate outputs.

Anthropic president Daniela Amodei told conference attendees that the company is specifically focused on speed, intelligence and cost. “We’re building AI models for the enterprise,” she says.
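A minimal sketch of invoking an Anthropic model through Bedrock follows; the model ID, prompt and region are illustrative assumptions, and the exact ID varies by region and model version.

```python
import json

import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-1")

# Anthropic's messages format as accepted by Bedrock; model ID is illustrative.
body = json.dumps({
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 256,
    "messages": [
        {"role": "user", "content": "Summarize the key points of this meeting note."}
    ],
})

response = client.invoke_model(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",
    body=body,
)

payload = json.loads(response["body"].read())
print(payload["content"][0]["text"])
```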

There is, of course, no shortage of LLMs to choose from today, and many organizations will be concerned about becoming locked into a specific generative AI platform. The issue, as always, is finding a way to swap LLMs such that the cost of switching doesn’t exceed the benefits gained.
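One hedge against that lock-in, sketched below, is to route all calls through a provider-neutral interface such as Bedrock’s Converse API, which normalizes request and response shapes across models; the model IDs shown are illustrative.

```python
import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-1")

def ask(model_id: str, prompt: str) -> str:
    # Converse presents one request/response shape across Bedrock models,
    # so switching providers is largely a change of model ID.
    resp = client.converse(
        modelId=model_id,
        messages=[{"role": "user", "content": [{"text": prompt}]}],
    )
    return resp["output"]["message"]["content"][0]["text"]

# Identical call against two different providers' models (IDs illustrative).
print(ask("anthropic.claude-3-haiku-20240307-v1:0", "Name three AWS regions."))
print(ask("meta.llama3-8b-instruct-v1:0", "Name three AWS regions."))
```

Whether that is enough to keep switching costs below the benefits will still depend on how much prompt tuning and evaluation each model swap requires.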
