---
license: apache-2.0
dataset_info:
  features:
    - name: parent_asin
      dtype: string
    - name: value
      list: float64
    - name: main_category
      dtype: string
    - name: title
      dtype: string
    - name: average_rating
      dtype: float64
    - name: rating_number
      dtype: float64
    - name: description
      dtype: string
    - name: price
      dtype: float64
    - name: categories
      dtype: string
    - name: image_url
      dtype: string
  splits:
    - name: train
      num_bytes: 3482499106
      num_examples: 100000
  download_size: 2309398330
  dataset_size: 3482499106
configs:
  - config_name: 10k
    data_files:
      - split: train
        path: benchmark-10k/*.parquet
  - config_name: 100k
    data_files:
      - split: train
        path: benchmark-100k/*.parquet
  - config_name: 1M
    data_files:
      - split: train
        path: benchmark-1M/*.parquet
  - config_name: 10M
    data_files:
      - split: train
        path: benchmark-10M/*.parquet
---

# Vector Search Benchmarks

This repo contains datasets for benchmarking vector search performance, to help Superlinked prioritize integration partners. For running the actual benchmarks on this data, see the README of the accompanying GitHub repository.

## Overview

We reviewed a number of publicly available datasets and identified three core problems; the table below shows how this dataset addresses each:

| Problem with other vector search benchmarks | How this dataset solves it |
| --- | --- |
| Not enough metadata of various types, making it hard to test filter performance | 3 number, 1 categorical, 3 text, and 1 image column |
| Vectors too small, while SOTA models usually output 2k+ or even 4k+ dims | 4154 dims |
| Dataset too small, especially if larger vectors are used | 100k, 1M, and 10M item variants, all sampled from the large dataset |
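As a toy illustration of the filtered-search workloads this metadata enables, here is a brute-force sketch in NumPy (synthetic data, with dims shrunk for readability; the real vectors are 4154-dim):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 1000, 64                       # toy sizes; the real vectors are 4154-dim
vectors = rng.normal(size=(n, d))
prices = rng.uniform(5, 500, size=n)  # stand-in for a numeric metadata column

query = rng.normal(size=d)

# Pre-filter on metadata, then rank the surviving rows by cosine similarity.
mask = prices < 50.0
candidates = vectors[mask]
sims = candidates @ query / (
    np.linalg.norm(candidates, axis=1) * np.linalg.norm(query)
)
top5 = np.argsort(-sims)[:5]
print(candidates[top5].shape)  # (5, 64)
```

Real engines replace the brute-force scan with an ANN index, which is exactly where pre- vs. post-filtering strategies start to matter.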

## Available Datasets

### Product data

Each data directory contains Parquet files with the metadata and vectors.

| Dataset | Records | # Files | Size |
| --- | --- | --- | --- |
| `benchmark-10k` | 10,000 | 100 | ~230 MB |
| `benchmark-100k` | 100,000 | 100 | ~2.3 GB |
| `benchmark-1M` | 1,000,000 | 100 | ~23 GB |
| `benchmark-10M` | 10,534,536 | 1000 | ~240 GB |
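A quick sanity check on the table's sizes (assuming the vectors are stored as float64, per the schema below): the vectors alone account for most of each variant's footprint.

```python
# Back-of-envelope: each record carries a 4154-component float64 vector.
bytes_per_vector = 4154 * 8
raw_gb_100k = 100_000 * bytes_per_vector / 1e9
print(bytes_per_vector, round(raw_gb_100k, 2))  # 33232 3.32
```

Roughly 3.3 GB of raw vector data for the 100k variant, consistent with its ~2.3 GB of compressed Parquet.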

The files share the same schema throughout:

```python
Schema([('parent_asin', String),   # the id
        ('value', List(Float64)),  # the vectors
        ('main_category', String),
        ('title', String),
        ('average_rating', Float64),
        ('rating_number', Float64),
        ('description', String),
        ('price', Float64),
        ('categories', String),
        ('image_url', String)])
```

## Data Access

The product metadata and vectors can be loaded with the Hugging Face `datasets` library:

```python
from datasets import load_dataset

benchmark_10k = load_dataset("superlinked/external-benchmarking", data_dir="benchmark-10k")
benchmark_100k = load_dataset("superlinked/external-benchmarking", data_dir="benchmark-100k")
benchmark_1M = load_dataset("superlinked/external-benchmarking", data_dir="benchmark-1M")
benchmark_10M = load_dataset("superlinked/external-benchmarking", data_dir="benchmark-10M")
```

## Dataset Production

### Source Data

- Origin: Amazon Reviews 2023 dataset
- Categories: `["Books", "Automotive", "Tools and Home Improvement", "All Beauty", "Electronics", "Software", "Health and Household"]`

### Embeddings

The embeddings are created via a Superlinked config. The resulting 4154-dim vector is the concatenation of:

- 1 categorical embedding,
- 3 number embeddings,
- 3 text embeddings (Qwen/Qwen3-Embedding-0.6B),
- and 1 image embedding (laion/CLIP-ViT-H-14-laion2B-s32B-b79K).
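A sketch of how such a concatenation arrives at 4154 dims. The text and image model output sizes (1024 each for Qwen3-Embedding-0.6B and the CLIP ViT-H/14 image tower) are known; the categorical and number dims below are placeholders chosen only so the parts sum to the documented total, and which fields feed which space is likewise assumed:

```python
import numpy as np

rng = np.random.default_rng(42)

# Per-field embeddings; only the 1024-dim text/image sizes and the 4154
# total are from the card, the 49/9 split is a placeholder assumption.
parts = [
    rng.normal(size=49),    # 1 categorical embedding (dim assumed)
    rng.normal(size=9),     # 3 number embeddings, 3 dims each (assumed)
    rng.normal(size=1024),  # text embedding 1 (Qwen3-Embedding-0.6B)
    rng.normal(size=1024),  # text embedding 2
    rng.normal(size=1024),  # text embedding 3
    rng.normal(size=1024),  # image embedding (CLIP ViT-H/14)
]
vector = np.concatenate(parts)
print(vector.shape)  # (4154,)
```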