Titan Text Embeddings V2 is a Bedrock embedding model for enterprise retrieval: RAG, semantic search, multilingual indexes, and general text or code chunks. It accepts a maximum of 8,192 tokens per input (roughly 50,000 English characters) and returns a dense embedding vector suited to cosine-similarity or other nearest-neighbor search.
You choose 256-, 512-, or 1024-dimensional output. AWS reports that 512-dimensional vectors retain about 99% of the retrieval accuracy of 1024-dimensional vectors, and 256-dimensional vectors about 97%, which cuts storage and often query latency on large indexes.
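A minimal sketch of building the request body for this model, following the Titan Text Embeddings V2 request schema (`inputText`, `dimensions`, `normalize`). The helper name and the sample text are illustrative; a real call would pass the body to boto3's `bedrock-runtime` `invoke_model`, which needs AWS credentials, so this sketch stops at the payload:

```python
import json

# Model ID for Titan Text Embeddings V2 on Bedrock.
MODEL_ID = "amazon.titan-embed-text-v2:0"

def build_request(text: str, dimensions: int = 512, normalize: bool = True) -> str:
    """Build the JSON body for a Titan V2 embedding request (hypothetical helper)."""
    if dimensions not in (256, 512, 1024):
        raise ValueError("Titan V2 supports 256, 512, or 1024 output dimensions")
    return json.dumps({
        "inputText": text,        # the chunk to embed, up to 8,192 tokens
        "dimensions": dimensions, # 256 / 512 / 1024 trade accuracy vs. storage
        "normalize": normalize,   # request unit-length vectors
    })

body = build_request("Example chunk for the search index.", dimensions=512)
# With boto3, this body would go to:
#   bedrock_runtime.invoke_model(modelId=MODEL_ID, body=body)
# and the response JSON would carry "embedding" and "inputTextTokenCount".
print(body)
```

Picking 512 dimensions here reflects the accuracy/storage trade-off above; switching to 256 or 1024 is a one-argument change.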
V2 adds an option to return unit-normalized vectors, which simplifies similarity scoring. The model was pre-trained on more than 100 languages and on code, so a single index can cover mixed-language corpora, provided your evaluation shows recall stays high enough.
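When vectors are unit-normalized, cosine similarity reduces to a plain dot product, since both norms are 1.0. A small pure-Python sketch (the toy 3-d vectors stand in for real 256/512/1024-d embeddings):

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    # General cosine similarity; with unit-normalized Titan V2 vectors the
    # denominator is 1.0 and the score is just the dot product.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

query = [0.6, 0.8, 0.0]  # already unit length
doc = [0.6, 0.8, 0.0]
print(round(cosine(query, doc), 3))  # → 1.0 for identical vectors
```

Skipping the norm computation for pre-normalized vectors is a common optimization in large-index scoring loops.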