Vector features represent data as high-dimensional numerical arrays, enabling AI models to understand complex relationships and patterns.
Vector databases are crucial in the AI era for storing and retrieving these embeddings efficiently, powering applications like semantic search, recommendation systems, and generative AI.
Here is a comparison of popular databases and their vector features. I hope you find it useful.
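Before the table, here is a minimal Python sketch (plain lists, no database, illustrative only) of the cosine-similarity measure that most of these engines expose as a distance metric:

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity = dot(a, b) / (|a| * |b|).
    # 1.0 means the vectors point in the same direction; 0.0 means orthogonal.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

query = [0.1, 0.2, 0.3]
doc = [0.1, 0.2, 0.3]
print(round(cosine_similarity(query, doc), 4))  # identical vectors → 1.0
```

Each database below wraps this same idea behind its own SQL function, operator, or query DSL.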
| Database | Vector Support | Notes | License / Free | Features, Limitations, Integration Details | Implementation Steps |
| --- | --- | --- | --- | --- | --- |
| Oracle | ✅ Native support | Oracle supports vector search natively, enabling semantic queries alongside structured data | Licensed | Oracle Database 23ai introduces robust vector support through: SQL Functions: VECTOR_CHUNKS for text chunking and VECTOR_EMBEDDING for generating embeddings directly in SQL. PL/SQL Packages: DBMS_VECTOR and DBMS_VECTOR_CHAIN for embedding generation, text processing, and similarity search. These can be scheduled as end-to-end pipelines. Embedding Models: Supports ONNX-format models and third-party REST APIs for embedding generation | BEGIN DBMS_VECTOR_CHAIN.CREATE_MODEL( model_name => 'my_embedding_model', model_path => '/path/to/model.onnx' ); END; SELECT VECTOR_EMBEDDING('my_embedding_model', 'sample text') FROM DUAL; Similarity Search: SELECT * FROM my_table WHERE VECTOR_DISTANCE(my_vector_column, VECTOR_EMBEDDING('model', 'query text')) < 0.5; |
| MySQL | ✅ Native support | MySQL has added vector data types in recent versions, allowing integrated semantic search | Free | MySQL 9.0 introduces the VECTOR(N) data type: Structure: Stores up to 16,383 single-precision floats. Functions: Includes STRING_TO_VECTOR(), VECTOR_TO_STRING(), VECTOR_DIM(), and DISTANCE() for similarity calculations. Limitations: Cannot be used as keys, in aggregate functions, or with most numeric and JSON functions. Cloud SQL Extension: Google Cloud SQL for MySQL supports vector embeddings via VARBINARY columns and ANN/KNN indexing with types like TREE_SQ, TREE_AH, and BRUTE_FORCE | CREATE TABLE vectors ( id INT PRIMARY KEY, embedding VECTOR(768) ); INSERT INTO vectors VALUES (1, STRING_TO_VECTOR('[0.1, 0.2, ..., 0.768]')); Similarity Search: SELECT id FROM vectors WHERE DISTANCE(embedding, STRING_TO_VECTOR('[0.1, 0.2, ..., 0.768]'), 'COSINE') < 0.2; |
| MSSQL | 🚧 Limited/Custom | No native vector support yet; can be emulated via extensions or external libraries. | Licensed | Status: No native vector data type. Workaround: Embeddings can be stored as binary or JSON and similarity search implemented via external libraries or integration with vector databases like FAISS or Milvus. | Workaround: Store embeddings as VARBINARY(MAX) or JSON. Use CLR or external Python/FAISS for similarity search. |
| PostgreSQL | ✅ Via Extensions | Supports vector search using extensions like pgvector; widely adopted for AI workloads. | Free | Extension: pgvector is the primary method. Capabilities: Supports vector storage, similarity search (cosine, L2, inner product), and indexing via HNSW or IVFFlat. Integration: Widely used in AI pipelines and supports ONNX models via external tools. | CREATE EXTENSION IF NOT EXISTS vector; CREATE TABLE items ( id SERIAL PRIMARY KEY, embedding VECTOR(768) ); SELECT * FROM items ORDER BY embedding <-> '[0.1, 0.2, ..., 0.768]' LIMIT 5; |
| Cassandra | ✅ Through Astra Vector | Astra DB (based on Cassandra) supports vector search with DiskANN indexing | Licensed | Astra DB: DataStax's managed Cassandra supports vector search using DiskANN indexing. Implementation: Embeddings stored as arrays; similarity search via ANN algorithms. | Store vectors as arrays in a column. Use Astra's vector search API: { "vector": [0.1, 0.2, ..., 0.768], "top_k": 5 } |
| Elasticsearch | ✅ Native support | Offers vector search via kNN and ANN indexing; widely used for hybrid search | Free | Native Support: Uses dense_vector fields (knn_vector is the OpenSearch equivalent). Indexing: Supports HNSW for ANN search. Querying: knn search queries with cosine, dot product, or Euclidean distance. | Mapping: { "properties": { "embedding": { "type": "dense_vector", "dims": 768, "index": true, "similarity": "cosine" } } } Search: { "knn": { "field": "embedding", "query_vector": [0.1, 0.2, ..., 0.768], "k": 5, "num_candidates": 100 } } |
| MongoDB | ✅ Native in Atlas | MongoDB Atlas includes full-featured vector search for generative AI and semantic applications | Free | Atlas Vector Search: Native support in MongoDB Atlas. Indexing: Uses HNSW for fast ANN search. Integration: Embeddings stored as arrays; supports OpenAI and HuggingFace models | Store vectors as arrays, create an HNSW index: { "name": "vector_index", "fields": [{"path": "embedding", "type": "vector"}], "type": "vectorSearch", "algorithm": "HNSW" } Query: { "$vectorSearch": { "queryVector": [0.1, 0.2, ..., 0.768], "path": "embedding", "numCandidates": 100, "limit": 5 } } |
| Redis | ✅ Via Redis Vector | Redis supports vector search through modules like RediSearch and Redis Vector. | Free | Modules: RediSearch and Redis Vector. Storage: Embeddings stored as vectors in Redis keys. Search: Supports cosine, dot product, and Euclidean distance. | Store vectors in hashes or JSON documents. Create index: FT.CREATE idx ON HASH PREFIX 1 doc: SCHEMA embedding VECTOR FLAT 6 TYPE FLOAT32 DIM 768 DISTANCE_METRIC COSINE Query: FT.SEARCH idx "*=>[KNN 5 @embedding $vec]" PARAMS 2 vec "[0.1, 0.2, ..., 0.768]" DIALECT 2 |
| Snowflake | ✅ Native support | Snowflake supports vector embeddings and retrieval-augmented generation (RAG) use cases | Licensed | Native Support: VECTOR data type for embedding storage plus built-in similarity functions (e.g. VECTOR_COSINE_SIMILARITY). Functions: Includes vector distance functions and supports RAG pipelines. Integration: Works with external models and Snowpark for ML. | CREATE TABLE embeddings ( id INT, embedding VECTOR(FLOAT, 768) ); Query: SELECT * FROM embeddings ORDER BY VECTOR_COSINE_SIMILARITY(embedding, [0.1, 0.2, ..., 0.768]::VECTOR(FLOAT, 768)) DESC LIMIT 5; |
| Redshift | 🚧 Limited | No native vector support; can be integrated with external vector stores or used with UDFs. | Licensed | Status: No native vector support. Workaround: Embeddings stored as arrays or JSON; similarity search via UDFs or integration with external vector stores. | Store vectors as arrays or JSON. Use UDFs or external services for similarity. |
| Vertica | 🚧 Limited | Vertica does not natively support vector data types or similarity search functions like cosine or inner product out of the box, but it offers workarounds and integrations that let you simulate vector database behavior. | Licensed | You can store embeddings using the ARRAY data type and implement distance metrics using User-Defined Functions (UDFs) or User-Defined Extensions (UDXs). Vertica also integrates with external AI frameworks and supports Python-based ML workflows via VerticaPy. | CREATE TABLE embeddings ( id INT, vector ARRAY[FLOAT] ); Then rank rows with a custom distance UDX (cosine_distance_udx is a hypothetical name for a UDX you would implement): SELECT id FROM embeddings ORDER BY cosine_distance_udx(vector, ARRAY[0.1, 0.2, ...]) LIMIT 5; |
| Neo4J | ✅ Native support | Starting with Neo4j 5.11 (beta) and 5.13 (GA), vector indexes enable similarity search and AI-driven analytics, letting you store and query vector embeddings directly within graph nodes or relationships | Licensed | Vector Indexes: Store embeddings as LIST<FLOAT> properties. Similarity Search: Supports cosine and Euclidean distance. Powered by: Apache Lucene indexing engine. | CREATE (m:Movie {title: "The Godfather", embedding: [0.0059, -0.0381, ..., 0.0188]}); CREATE VECTOR INDEX movie_embedding_index FOR (m:Movie) ON (m.embedding) OPTIONS {indexConfig: {similarityFunction: 'cosine'}}; CALL db.index.vector.queryNodes('movie_embedding_index', [0.0059, -0.0381, ..., 0.0188], 5) YIELD node, score RETURN node.title, score; |
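Whatever the engine, exact (kNN) vector search boils down to the same operation: rank rows by distance to the query vector and keep the top k. This is what, for example, pgvector's `ORDER BY embedding <-> query LIMIT k` does. A minimal in-memory Python sketch (data and table shape are made up for illustration):

```python
import math

def l2_distance(a, b):
    # Euclidean (L2) distance, the metric behind pgvector's <-> operator.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def knn(rows, query, k):
    # rows: list of (id, vector) pairs; returns the k ids closest to query.
    ranked = sorted(rows, key=lambda r: l2_distance(r[1], query))
    return [row_id for row_id, _ in ranked[:k]]

rows = [(1, [0.0, 0.0]), (2, [1.0, 1.0]), (3, [0.1, 0.1])]
print(knn(rows, [0.05, 0.0], 2))  # → [1, 3]
```

Real engines avoid this full scan with ANN indexes such as HNSW, IVFFlat, or DiskANN, trading a little recall for much lower latency, which is why the index types appear so prominently in the table above.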
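For the engines marked 🚧 above (MSSQL, Redshift, Vertica), the common workaround is to serialize embeddings into a text/JSON or binary column and do the similarity math in the application layer. A minimal sketch of that round trip (the function names here are illustrative, not a library API):

```python
import json

def vector_to_json(vec):
    # Serialize an embedding for storage in a VARCHAR/JSON/VARBINARY column.
    return json.dumps(vec)

def json_to_vector(text):
    # Deserialize a stored embedding back into a list of floats.
    return json.loads(text)

stored = vector_to_json([0.1, 0.2, 0.3])    # e.g. written via an INSERT
restored = json_to_vector(stored)           # read back for app-side search
print(restored == [0.1, 0.2, 0.3])          # round trip preserves the vector
```

Once deserialized, the vectors can be ranked with the same distance functions shown earlier, or handed off to an external library such as FAISS, as the MSSQL row suggests.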