
Amazon Web Services (AWS) today expanded its platforms for building artificial intelligence (AI) models, aiming to simplify development while enabling organizations to reduce costs.

Announced at the AWS re:Invent 2024 conference, a revamped platform for building and deploying AI models and applications, rechristened Amazon SageMaker AI, includes a preview of a Unified Studio and an Amazon Bedrock integrated development environment (IDE) to streamline development workflows.

In addition, AWS is previewing an integration between the Amazon Q Developer generative AI tool and Amazon SageMaker Canvas that makes it possible to build an AI model using natural language.

AWS is also moving to reduce costs. A prompt caching capability, now in preview, can cut costs by up to 90% by reusing previously processed prompts. A companion Amazon Bedrock Intelligent Prompt Routing service, also in preview, routes each request to the model most likely to deliver the best performance and quality at a level of cost an organization finds acceptable.
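To make the caching idea concrete, the sketch below shows how a request to Bedrock's Converse API might mark a large, stable prompt prefix as cacheable so that repeated calls reuse it rather than reprocessing it each time. This is a minimal illustration, not AWS's reference implementation: the `cachePoint` block follows the convention AWS documents for prompt caching, while the prompt-router ARN and all other identifiers are placeholders.

```python
# Sketch: marking a prompt prefix as cacheable in a Bedrock Converse
# request. Repeated calls that share the prefix can then be served from
# the cache, which is the mechanism behind the "up to 90%" cost claim.
# All ARNs and account numbers below are hypothetical placeholders.

LONG_SYSTEM_PROMPT = (
    "You are a customer-support assistant. Follow the policies below..."
)  # large, stable prefix worth caching across requests

def build_converse_request(user_question: str) -> dict:
    """Build a Converse request whose system prompt is marked cacheable."""
    return {
        # With Intelligent Prompt Routing, modelId can be a prompt-router
        # ARN rather than a single model ID; the router then picks the
        # model per request. (Placeholder ARN.)
        "modelId": (
            "arn:aws:bedrock:us-east-1:123456789012:"
            "default-prompt-router/example"
        ),
        "system": [
            {"text": LONG_SYSTEM_PROMPT},
            # Everything above this marker becomes the cached prefix.
            {"cachePoint": {"type": "default"}},
        ],
        "messages": [
            {"role": "user", "content": [{"text": user_question}]},
        ],
    }

request = build_converse_request("How do I reset my password?")
# A real invocation would then be roughly:
#   boto3.client("bedrock-runtime").converse(**request)
```

Because the cacheable prefix is identical across calls, only the trailing user message varies per request, which is what allows the service to skip re-billing the full input on cache hits.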

Dr. Swami Sivasubramanian, vice president of data and machine learning services at AWS, told conference attendees these and other new capabilities arrive at a pivotal moment, as organizations increasingly seek to operationalize AI models. Cost, energy, scalability and performance concerns all now need to be addressed simultaneously, he noted. “We’re reaching a critical inflection point,” said Sivasubramanian.

As part of that effort, AWS is adding to its managed Amazon SageMaker platform for building and deploying AI models an Amazon SageMaker HyperPod capability that makes it possible to prioritize which AI models are allowed access to compute resources. In addition, a task governance manager enables organizations to set quotas that limit consumption of IT infrastructure resources. Together, those capabilities can help organizations reduce costs by as much as 40%, said Sivasubramanian.

AWS is also previewing a managed Amazon Bedrock Data Automation service for automating document processing, analysis and retrieval-augmented generation workflows. In addition, Amazon Bedrock Knowledge Bases, in preview, can now process multimodal data along with graph and structured data. AWS is also previewing a service that converts unstructured data into structured data that can be more easily consumed by an AI model.

Additionally, AWS is previewing multimodal toxicity detection with image support in Amazon Bedrock Guardrails to prevent undesired content from being incorporated into an AI model. Now generally available are a set of tools for tracking data lineage, along with automated reasoning checks that apply mathematical proofs to verify the accuracy of AI outputs.

Amazon SageMaker Lakehouse, now generally available, and Amazon Redshift now also support zero-ETL integrations to eliminate the need to copy data between data lakes and databases.

Collectively, these capabilities will reduce the heavy lifting currently associated with customizing AI models, says Sivasubramanian.

Finally, AWS has opened an Amazon Bedrock Marketplace through which organizations can access more than 100 foundation models. In addition, AWS will provide access to AI models from Poolside, Stability AI and Luma AI.

It’s not clear how many organizations are relying on AWS to build and deploy AI applications, but one thing is certain: there is no shortage of options when it comes to the tools and platforms being made available.
