During the AWS re:Invent generative AI keynote, Amazon announced Bedrock support for Claude 2.1 and Llama 2 70B and more.
After the AWS announcements yesterday about the Amazon Q chatbot for enterprise and powerful new chips for AI workloads, Swami Sivasubramanian, vice president of databases, analytics and machine learning at AWS, took the stage at the AWS re:Invent conference in Las Vegas on Nov. 29 to dive deeper into AWS AI offerings. Sivasubramanian announced new generative AI models coming to Amazon Bedrock, multimodal search for Amazon Titan in Amazon Bedrock and many other new enterprise software features and tools related to using generative AI for work.
Amazon Titan can now run searches based on text and images
Amazon Titan Multimodal Embeddings is now generally available in Amazon Bedrock, the AWS tool for building and scaling AI applications. Multimodal embeddings let organizations build applications in which users can search with text, images or both, enabling richer search and recommendation options, said Sivasubramanian.
“They (AWS customers) want to enable their customers to search for furniture using a phrase, image or even both,” said Sivasubramanian. “They could use instructions like ‘show me what works well with my sofa’.”
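The furniture example works because a multimodal embedding model maps text and images into one shared vector space, so a query can be matched against catalog items by vector similarity. A minimal local sketch of that retrieval step, using toy three-dimensional vectors in place of real Titan embeddings (the catalog and values here are invented for illustration, not model output):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def search(query_embedding, catalog):
    """Rank catalog items by similarity to the query embedding.

    `catalog` maps item names to embeddings produced by the same model,
    so a text query and an image query land in the same space.
    """
    ranked = sorted(
        catalog.items(),
        key=lambda item: cosine_similarity(query_embedding, item[1]),
        reverse=True,
    )
    return [name for name, _ in ranked]

# Toy 3-dimensional embeddings standing in for real model output.
catalog = {
    "mid-century sofa": [0.9, 0.1, 0.0],
    "matching armchair": [0.8, 0.2, 0.1],
    "desk lamp": [0.0, 0.1, 0.9],
}
query = [0.85, 0.15, 0.05]  # e.g. "what works well with my sofa"
print(search(query, catalog))
# → ['mid-century sofa', 'matching armchair', 'desk lamp']
```

In production, the embeddings would come from a call to the Titan model through Bedrock, and the similarity search would run in a vector store rather than a Python sort.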
SEE: Are AWS or Google Cloud right for your business? (TechRepublic)
Titan Text Lite and Titan Text Express added to Amazon Bedrock
Titan Text Lite and Titan Text Express are now generally available in Amazon Bedrock, letting customers optimize for accuracy, performance and cost depending on the use case. Titan Text Lite is a compact text model that can be fine-tuned. Titan Text Express handles a wider range of text-based generative AI tasks, such as conversational chat and open-ended question answering.
Titan Image Generator (Figure A) is now available in public preview in the U.S. It can be used to create images using natural language prompts. Organizations can customize images with proprietary data to match their industry and brand. Images will be invisibly watermarked by default to help avoid disinformation.
Claude 2.1 and Llama 2 70B now hosted on Amazon Bedrock
Amazon Bedrock will now support Anthropic’s Claude 2.1 for users in the U.S. Compared with Claude 2, this version of the Claude generative AI model offers a 200,000-token context window, improved accuracy, 50% fewer hallucinations (even during adversarial prompt attacks) and half as many false statements in open-ended conversations. Tool use for function calling and workflow orchestration in Claude 2.1 is available in beta for select early access partners.
Meta’s Llama 2 70B, a publicly available large language model fine-tuned for chat-based use cases and large-scale tasks, is available today in Amazon Bedrock.
Claude assistance available in AWS Generative AI Innovation Center
The AWS Generative AI Innovation Center will expand early in 2024 with a custom model program for Anthropic’s Claude. Through the program, organizations can work with AWS’ team of experts to customize Claude using their own proprietary business data.
Additional Amazon Q use cases announced
Sivasubramanian announced a preview of Amazon Q, the AWS natural language chatbot, in Amazon Redshift, where it can help with writing SQL. Developers ask questions in natural language, Amazon Q translates them into SQL queries, and they can then run those queries and adjust them as necessary.
Plus, Amazon Q for data integration pipelines is now available on the serverless computing platform AWS Glue for building data integration jobs in natural language.
Training and model evaluation tools added to Amazon SageMaker
Sivasubramanian announced the general availability of SageMaker HyperPod, a new distributed training capability that can reduce generative AI model training time by up to 40%. SageMaker HyperPod can train generative AI models for weeks or months without manual intervention, automating the tasks of splitting training data into chunks and loading those chunks onto individual chips in a training cluster. It includes SageMaker’s distributed training libraries, managed checkpoints for optimization, and the ability to detect hardware failures and resume training around them. Other new SageMaker features include inference capabilities for reducing deployment cost and latency, and a new user experience in SageMaker Studio.
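The checkpoint-and-resume pattern that HyperPod manages can be sketched in a few lines: periodically persist training state so that a restart after a failure continues from the last checkpoint instead of step zero. This is a minimal local illustration of the pattern, not HyperPod’s API; the checkpoint path and the trivial "training step" are invented for the sketch.

```python
import json
import os

CHECKPOINT_PATH = "checkpoint.json"  # hypothetical local path

def load_checkpoint():
    """Resume from the last saved state, or start fresh."""
    if os.path.exists(CHECKPOINT_PATH):
        with open(CHECKPOINT_PATH) as f:
            return json.load(f)
    return {"step": 0}

def save_checkpoint(state):
    """Persist progress so a restart does not begin at step 0."""
    with open(CHECKPOINT_PATH, "w") as f:
        json.dump(state, f)

def train(total_steps, checkpoint_every=10):
    state = load_checkpoint()
    state["started_at"] = state["step"]  # where this run resumed from
    while state["step"] < total_steps:
        state["step"] += 1  # stand-in for one real training step
        if state["step"] % checkpoint_every == 0:
            save_checkpoint({"step": state["step"]})
    return state

# Start from a clean slate for the demo.
if os.path.exists(CHECKPOINT_PATH):
    os.remove(CHECKPOINT_PATH)

first = train(total_steps=30)    # run is interrupted after step 30
second = train(total_steps=100)  # a restart resumes from the last checkpoint
print(second["started_at"], second["step"])  # → 30 100
```

A managed service adds the hard parts this sketch omits: detecting that an instance failed, replacing it, and restoring sharded model and optimizer state across the cluster.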
Amazon SageMaker and Amazon Bedrock now offer Model Evaluation, which lets customers compare foundation models to find the best fit for their use case. Model Evaluation is available in preview.
Vector capabilities and data management tools added to many AWS services
Sivasubramanian announced more new tools around vectors and data management that are suitable for a variety of enterprise use cases, including generative AI.
- Vector Engine for OpenSearch Serverless is now generally available.
- Vector capabilities are coming to Amazon DocumentDB and Amazon DynamoDB (out now in all regions where Amazon DocumentDB is available) and Amazon MemoryDB for Redis (now in preview).
- Amazon Neptune Analytics, an analytics database engine for analyzing graph data stored in Amazon Neptune or Amazon S3, is available today in certain regions.
- Amazon OpenSearch Service now offers zero-ETL integration with Amazon S3.
- AWS Clean Rooms ML lets organizations share machine learning models with partners without sharing their underlying data.
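The vector features above all serve the same core operation: store an embedding alongside each record and retrieve the k records nearest to a query vector. A brute-force sketch of that operation (real vector engines use approximate indexes such as HNSW for scale; the record ids and vectors here are made up for illustration):

```python
import heapq
import math

def euclidean(a, b):
    """Straight-line distance between two vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def knn(query, records, k=2):
    """Return the ids of the k records whose vectors are closest to `query`.

    `records` is a list of (record_id, vector) pairs, mirroring how a
    database stores document ids next to their embeddings.
    """
    nearest = heapq.nsmallest(
        k, ((euclidean(query, vec), rid) for rid, vec in records)
    )
    return [rid for _, rid in nearest]

records = [
    ("doc-a", [0.0, 0.0]),
    ("doc-b", [1.0, 1.0]),
    ("doc-c", [5.0, 5.0]),
]
print(knn([0.2, 0.1], records, k=2))  # → ['doc-a', 'doc-b']
```

Whether the store is OpenSearch, DocumentDB or MemoryDB, the application-level contract is the same: embed the query with the same model used at ingest time, then ask the engine for the nearest neighbors.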
“While gen AI still needs a strong foundation, we can also use this technology to address some of the big challenges in data management, like making data easier to use, making it more intuitive and making data more valuable,” Sivasubramanian said.
Note: TechRepublic is covering AWS re:Invent virtually.