AI21 Labs Jamba-Instruct model is now available in Amazon Bedrock
We are excited to announce the availability of the Jamba-Instruct large language model (LLM) in Amazon Bedrock. Jamba-Instruct is built by AI21 Labs, and most notably supports a 256,000-token context window, making it especially useful for processing large documents and complex Retrieval Augmented Generation (RAG) applications.
What is Jamba-Instruct
Jamba-Instruct is an instruction-tuned version of the Jamba base model, previously open sourced by AI21 Labs, which combines Structured State Space model (SSM) technology with the Transformer architecture in a production-grade model. With the SSM approach, Jamba-Instruct achieves the largest context window length in its model size class while still delivering the performance of traditional Transformer-based models. These models yield a performance boost over AI21's previous generation of models, the Jurassic-2 family. For more information about the hybrid SSM/Transformer architecture, refer to the whitepaper Jamba: A Hybrid Transformer-Mamba Language Model.
Get started with Jamba-Instruct
To get started with Jamba-Instruct models in Amazon Bedrock, you first need to request access to the model:
- On the Amazon Bedrock console, choose Model access in the navigation pane.
- Choose Modify model access.
- Select the AI21 Labs models you want to use and choose Next.
- Choose Submit to request model access.
For more information, refer to Model access.
Next, you can test the model in either the Text playground or the Chat playground on the Amazon Bedrock console.
Example use cases for Jamba-Instruct
Jamba-Instruct's long context window is particularly well suited to complex RAG workloads and complex document analysis. For example, it can detect contradictions between different documents or analyze one document in the context of another.
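As an illustration, a prompt for cross-document contradiction detection might be assembled like the following. The wording and helper function are hypothetical, not taken from AI21's documentation:

```python
# Hypothetical prompt template for detecting contradictions between two
# documents; the wording is illustrative, not an official AI21 prompt.
def build_contradiction_prompt(doc_a: str, doc_b: str) -> str:
    return (
        "You are reviewing two documents for consistency.\n\n"
        f"Document A:\n{doc_a}\n\n"
        f"Document B:\n{doc_b}\n\n"
        "List any statements in Document B that contradict Document A, "
        "quoting the relevant passage from each document."
    )

prompt = build_contradiction_prompt(
    "The warranty covers parts and labor for two years.",
    "Labor costs are excluded from warranty coverage.",
)
```

With a 256,000-token context window, both documents can be placed in the prompt in full rather than split into chunks.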
You can also use Jamba-Instruct for query augmentation, a technique in which an original query is transformed into related queries in order to optimize RAG applications.
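A query-augmentation prompt could be sketched as follows; the instruction wording is hypothetical and should be adapted to your retrieval pipeline:

```python
# Hypothetical query-augmentation prompt: ask the model to rephrase one query
# into several related queries that a RAG retriever can run in parallel.
def build_augmentation_prompt(query: str, n: int = 3) -> str:
    return (
        f"Generate {n} alternative phrasings of the following search query, "
        "each emphasizing a different aspect of the question, one per line:\n\n"
        f"Query: {query}"
    )

prompt = build_augmentation_prompt("What drove the company's Q4 revenue growth?")
```

Each generated phrasing can then be sent to the retriever separately, and the combined results passed back to the model for answering.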
You can also use Jamba-Instruct for standard LLM operations, such as summarization and entity extraction.
Prompt guidance for Jamba-Instruct can be found in the AI21 model documentation. For more information about Jamba-Instruct, including relevant benchmarks, refer to Built for the Enterprise: Introducing AI21’s Jamba-Instruct Model.
Programmatic access
You can also access Jamba-Instruct through an API, using Amazon Bedrock and the AWS SDK for Python (Boto3). For installation and setup instructions, refer to the quickstart.
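A minimal sketch of invoking the model with Boto3 might look like the following. The model ID (`ai21.jamba-instruct-v1:0`) and the chat-style request body are assumptions based on AI21's Bedrock documentation; verify both against the current Amazon Bedrock model reference for your Region.

```python
# Sketch: invoking Jamba-Instruct through the Amazon Bedrock runtime API.
# Assumes AWS credentials are configured; the model ID and request fields
# are assumptions and should be checked against the Bedrock model reference.
import json

MODEL_ID = "ai21.jamba-instruct-v1:0"  # assumed Bedrock model ID

def build_request(prompt: str, max_tokens: int = 1024) -> str:
    """Build the JSON request body for a single-turn Jamba-Instruct chat."""
    return json.dumps({
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
        "temperature": 0.4,
    })

def invoke(prompt: str) -> str:
    """Send the prompt to Bedrock and return the model's reply text."""
    import boto3  # deferred so build_request works without the SDK installed

    client = boto3.client("bedrock-runtime", region_name="us-east-1")
    response = client.invoke_model(modelId=MODEL_ID, body=build_request(prompt))
    payload = json.loads(response["body"].read())
    return payload["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(invoke("Summarize the key terms of the attached contract."))
```

The request builder is separated from the network call so the payload can be inspected or logged before invocation.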
Conclusion
AI21 Labs Jamba-Instruct in Amazon Bedrock is well suited for applications that require a long context window (up to 256,000 tokens), such as producing summaries or answering questions grounded in long documents, avoiding the need to manually segment document sections to fit the smaller context windows of other LLMs. The new SSM/Transformer hybrid architecture also improves model throughput: it can deliver up to three times more tokens per second for context window lengths exceeding 128,000 tokens, compared to other models in a similar size class.
AI21 Labs Jamba-Instruct in Amazon Bedrock is available in the US East (N. Virginia) AWS Region and can be accessed using the on-demand consumption model. To learn more, refer to Supported foundation models in Amazon Bedrock. To get started with AI21 Labs Jamba-Instruct in Amazon Bedrock, visit the Amazon Bedrock console.
About the Authors
Joshua Broyde, PhD, is a Principal Solution Architect at AI21 Labs. He works with customers and AI21 partners across the generative AI value chain, including enabling generative AI at an enterprise level, using complex LLM workflows and chains for regulated and specialized environments, and using LLMs at scale.
Fernando Espigares Caballero is a Senior Partner Solutions Architect at AWS. He creates joint solutions with strategic Technology Partners to deliver value to customers. He has more than 25 years of experience working in IT platforms, data centers, and cloud and internet-related services, holding multiple Industry and AWS certifications. He is currently focusing on generative AI to unlock innovation and creation of novel solutions that solve specific customer needs.