
This is a FAISS vector database built from the 2023-09 Norwegian Wikipedia dump, embedded with NbAiLab/nb-sbert-base.

It can be used to augment a chatbot with retrieval-augmented generation (RAG) in Norwegian Bokmål; see the retriever sketch after the example below.

Only the article abstracts are processed, but they are detailed enough for most retrieval use cases. The 'url' field in each document's metadata points to the original article. Each abstract is embedded as a 768-dimensional vector with NbAiLab/nb-sbert-base.
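
For reference, the embedding dimension can be checked directly with the embedder. This is a minimal sketch; the query string is just an illustrative example:

from langchain_community.embeddings import HuggingFaceEmbeddings

# Load the same sentence-transformer that was used to build the index
embedder = HuggingFaceEmbeddings(model_name='NbAiLab/nb-sbert-base')

# Embed a single query and confirm the vector dimension
vec = embedder.embed_query('Hva er hovedstaden i Norge?')  # "What is the capital of Norway?"
print(len(vec))  # 768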

Example usage (Python):


import time

from langchain_community.embeddings import HuggingFaceEmbeddings
from langchain_community.vectorstores import FAISS

# Same model that was used to embed the abstracts
embedder = HuggingFaceEmbeddings(model_name='NbAiLab/nb-sbert-base')

qs = [
    'Kven er Beyonce?',                          # "Who is Beyonce?"
    'Hva skjedde i 2012?',                       # "What happened in 2012?"
    'Hvilke musikkfestivalar kan du anbefale?',  # "Which music festivals can you recommend?"
]

# The index is pickled on disk, so deserialization must be explicitly allowed
db = FAISS.load_local('nowiki_faiss_sbert_all', embedder,
                      allow_dangerous_deserialization=True)

starttime = time.time()

for q in qs:
    print('----\n', q)
    # Returns a list of (Document, score) tuples; the article URL is in doc.metadata['url']
    r = db.similarity_search_with_score(q)
    print(r)

print('questions took', time.time() - starttime, 's')
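
Building on the RAG use mentioned above, the database can also be exposed as a LangChain retriever and combined with a chat model. This is a minimal sketch, not part of the dataset: the ChatOpenAI model, the k value, and the prompt wording are assumptions, and any other LangChain chat model can be swapped in.

from langchain_community.embeddings import HuggingFaceEmbeddings
from langchain_community.vectorstores import FAISS
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI  # assumes langchain-openai and an OPENAI_API_KEY

embedder = HuggingFaceEmbeddings(model_name='NbAiLab/nb-sbert-base')
db = FAISS.load_local('nowiki_faiss_sbert_all', embedder,
                      allow_dangerous_deserialization=True)

# Expose the vector store as a retriever returning the top 4 abstracts
retriever = db.as_retriever(search_kwargs={'k': 4})

llm = ChatOpenAI(model='gpt-4o-mini')  # assumed model name, illustrative only
prompt = ChatPromptTemplate.from_messages([
    # "Answer in Norwegian Bokmål. Use only the context below."
    ('system', 'Svar på norsk bokmål. Bruk kun konteksten under.\n\n{context}'),
    ('human', '{question}'),
])

question = 'Hva skjedde i 2012?'
docs = retriever.invoke(question)
context = '\n\n'.join(d.page_content for d in docs)

answer = llm.invoke(prompt.format_messages(context=context, question=question))
print(answer.content)

# The source articles are available via the 'url' metadata field
for d in docs:
    print(d.metadata['url'])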

More info about the Wikipedia source:

https://dumps.wikimedia.org/

https://dumps.wikimedia.org/other/enterprise_html/

License and guidelines:

https://dumps.wikimedia.org/legal.html

https://foundation.wikimedia.org/wiki/Legal:Developer_app_guidelines

Embedder model:

https://huggingface.co/NbAiLab/nb-sbert-base

FAISS vector store (LangChain integration):

https://python.langchain.com/docs/integrations/vectorstores/faiss


license: other
license_name: wikimedia
license_link: https://dumps.wikimedia.org/legal.html
