Checkpoint metadata:

| Field | Type | Value |
|---|---|---|
| dataset_name | string | maloyan/wikipedia-22-12-en-embeddings-all-MiniLM-L6-v2 |
| local_dir | string | /aiau011_scratch/szg0148/home/RL_Project/RL_Feedback_Project/Data/wikipedia_ivfpq_gpu_only/wiki_emb_local |
| dim | int64 | 384 |
| nlist | int64 | 262,144 |
| opq_m | int64 | 64 |
| pq_m | int64 | 64 |
| pq_bits | int64 | 8 |
| nprobe | int64 | 128 |
| faiss_version | string | 1.12.0 |
| timestamp | timestamp[s] | 2025-09-27T01:29:18 |
| trained | bool | true |
| ntotal | int64 | 35,167,920 |
| offset | int64 | 35,167,920 |
```yaml
dataset_info:
  features:
    - name: text
      dtype: string
    - name: embeddings
      dtype: float32
      shape: [384]
configs:
  - config_name: default
    data_files: "*.parquet"
```
# Wikipedia IVF-OPQ-PQ Vector Database (GPU-Optimized)

A high-performance, GPU-accelerated FAISS vector database built from Wikipedia articles with pre-computed embeddings. This dataset contains approximately 35 million Wikipedia articles with 384-dimensional embeddings from the all-MiniLM-L6-v2 model.
## Dataset Overview
This vector database uses advanced compression techniques (IVF + OPQ + PQ) to provide fast similarity search over Wikipedia content while maintaining high recall. The database is optimized for Retrieval Augmented Generation (RAG) applications and large-scale semantic search.
Key Features:
- GPU-accelerated FAISS index with IVF, OPQ, and Product Quantization
- SQLite text storage with aligned vector IDs
- Memory-efficient compression (~64 bytes per vector)
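The ~64 bytes/vector figure implies a large saving over raw float32 storage; a quick back-of-the-envelope check, using the vector count from the checkpoint metadata:

```python
# Storage math for the compression claim above (vector codes only;
# IVF centroids and OPQ matrices add a comparatively small overhead).
n_vectors = 35_167_920              # ntotal from the checkpoint metadata
dim = 384                           # all-MiniLM-L6-v2 embedding size
raw_bytes = n_vectors * dim * 4     # float32: 4 bytes per component
pq_bytes = n_vectors * 64           # 64 PQ code bytes per vector

print(f"raw float32: {raw_bytes / 1e9:.1f} GB")   # -> 54.0 GB
print(f"PQ codes:    {pq_bytes / 1e9:.1f} GB")    # -> 2.3 GB
print(f"compression: {raw_bytes // pq_bytes}x")   # -> 24x
```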
## Dataset Structure

```
wikipedia_vector_index_DB/
├── index.faiss        # Main FAISS index (CPU-serialized)
├── meta.json          # Index metadata and parameters
├── docs.sqlite        # Text storage (rowid = vector id)
├── docs.sqlite-wal    # SQLite WAL file (if present)
└── docs.sqlite-shm    # SQLite shared memory (if present)
```
### File Descriptions

- `index.faiss`: Complete FAISS index containing trained OPQ matrices, IVF centroids, PQ codebooks, and compressed vector codes
- `meta.json`: Checkpoint metadata including offset, ntotal, dimensions, and compression parameters
- `docs.sqlite`: SQLite database with schema `docs(id INTEGER PRIMARY KEY, text TEXT)`, where `id` matches FAISS vector IDs
- `*.parquet`: Original embedding data in Parquet format, for verification and rebuilding
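The `docs.sqlite` lookup pattern can be exercised with Python's built-in `sqlite3` module; a minimal sketch against an in-memory copy of the schema above (the sample rows are made up):

```python
import sqlite3

# Recreate the stated schema in memory; in the real database, the id
# values line up with the FAISS vector IDs.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE docs (id INTEGER PRIMARY KEY, text TEXT)")
conn.executemany(
    "INSERT INTO docs (id, text) VALUES (?, ?)",
    [(0, "First article"), (1, "Second article")],
)

# A FAISS search returns integer IDs; resolve them to text by primary key.
faiss_ids = [1, 0]
texts = [
    conn.execute("SELECT text FROM docs WHERE id = ?", (i,)).fetchone()[0]
    for i in faiss_ids
]
print(texts)  # -> ['Second article', 'First article']
```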
## Technical Specifications
Parameter | Value | Description |
---|---|---|
Vectors | ~35M | Total number of Wikipedia articles |
Dimensions | 384 | Embedding dimensionality (all-MiniLM-L6-v2) |
Index Type | IVF-OPQ-PQ | Inverted File + Optimized Product Quantization |
Compression | ~64 bytes/vector | Memory-efficient storage |
nlist | 131,072-262,144 | Number of IVF clusters (this build: 262,144) |
OPQ | 64 subspaces | Optimized rotation matrix |
PQ | 64×8 bits | Product quantization parameters |
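The table's per-vector size and per-query work both follow from these parameters; a small sanity check (nlist and nprobe values taken from the checkpoint metadata):

```python
# Each vector is encoded by pq_m sub-quantizers at pq_bits bits apiece.
pq_m, pq_bits = 64, 8
code_bytes = pq_m * pq_bits // 8    # -> 64 bytes per vector

# IVF prunes the search: only nprobe of the nlist cluster lists are scanned.
nlist, nprobe = 262_144, 128
scanned = nprobe / nlist
print(code_bytes, f"{scanned:.4%} of lists scanned per query")
```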
## Usage

### Quick Start
```python
from huggingface_hub import snapshot_download
import faiss
import sqlite3
import json

# Download the complete vector database
dataset_path = snapshot_download(
    repo_id="your-username/wikipedia-vector-db",
    repo_type="dataset",
    cache_dir="./data",
)

# Load FAISS index
index = faiss.read_index(f"{dataset_path}/index.faiss")

# Load metadata
with open(f"{dataset_path}/meta.json", "r") as f:
    meta = json.load(f)

# Connect to text database
conn = sqlite3.connect(f"{dataset_path}/docs.sqlite")

print(f"Loaded index with {index.ntotal:,} vectors")
print(f"Index dimension: {index.d}")
```
### GPU-Accelerated Search
```python
import faiss

# Move the index to GPU for faster queries
res = faiss.StandardGpuResources()
gpu_index = faiss.index_cpu_to_gpu(res, 0, index)

# Set search parameters. Because the index is an OPQ pre-transform wrapping
# an IVF-PQ index, set nprobe through GpuParameterSpace rather than by
# assigning to the top-level object. Higher = better recall, slower search.
faiss.GpuParameterSpace().set_index_parameter(gpu_index, "nprobe", 128)

# Perform similarity search. get_query_embedding is a placeholder for your
# own encoder; it must return a float32 array of shape (1, 384).
query_vector = get_query_embedding("your search query")
distances, indices = gpu_index.search(query_vector, k=10)

# Retrieve the corresponding text
cursor = conn.cursor()
for idx in indices[0]:
    result = cursor.execute("SELECT text FROM docs WHERE id = ?", (int(idx),)).fetchone()
    if result:
        print(f"ID {idx}: {result[0][:200]}...")
```
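The snippet above assumes a `get_query_embedding` helper that this card does not define; a minimal sketch, assuming the `sentence-transformers` package is installed and using the same all-MiniLM-L6-v2 model the corpus embeddings were built with:

```python
from functools import lru_cache

import numpy as np

@lru_cache(maxsize=1)
def _load_model():
    # Imported lazily: downloads the model weights on first use.
    from sentence_transformers import SentenceTransformer
    return SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

def get_query_embedding(text: str) -> np.ndarray:
    """Encode a query into the (1, 384) float32 array FAISS expects."""
    vec = _load_model().encode([text])
    return np.asarray(vec, dtype=np.float32).reshape(1, -1)
```

Because the index stores codes derived from all-MiniLM-L6-v2 vectors, queries encoded with a different model will return meaningless neighbors.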
## Original Dataset
This vector database is built from maloyan/wikipedia-22-12-en-embeddings-all-MiniLM-L6-v2, which contains pre-computed embeddings of Wikipedia articles using the sentence-transformers/all-MiniLM-L6-v2 model.