---
language:
  - fr
tags:
  - france
  - data-catalog
  - data-gouv
  - metadata
  - open-data
  - government
  - etalab
  - embeddings
pretty_name: Data.gouv.fr Datasets Catalog
size_categories:
  - 10K<n<100K
license: etalab-2.0
configs:
  - config_name: latest
    data_files: data/data-gouv-datasets-catalog-latest/*.parquet
    default: true
---

# 🇫🇷 Data.gouv.fr Datasets Catalog

This dataset contains a processed and embedded version of the catalog of datasets published on data.gouv.fr, the French open data platform. The original catalog is published by data.gouv.fr on its dedicated dataset page.

It includes rich metadata about each public dataset: title, URL, publisher organization, description, tags, licensing, update frequency, usage metrics, and more. The data is structured and ready for semantic indexing and retrieval.

Each chunk has been embedded using the BAAI/bge-m3 model, making this catalog ready for search, classification, or retrieval-augmented generation (RAG) pipelines.

## 🗂️ Dataset Contents

The dataset is provided in Parquet format and contains the following columns:

| Column Name | Type | Description |
|---|---|---|
| `chunk_id` | str | Unique source-based identifier of the chunk. |
| `title` | str | Title of the dataset. |
| `doc_id` | str | Document identifier from the source site (slug). |
| `chunk_xxh64` | str | XXH64 hash of the `chunk_text` value. |
| `acronym` | str | Acronym of the dataset (if available). |
| `url` | str | URL of the dataset on data.gouv.fr. |
| `organization` | str | Name of the associated organization. |
| `organization_id` | str | Unique ID of the organization. |
| `owner` | str | Name of the dataset owner. |
| `owner_id` | str | Unique ID of the dataset owner. |
| `description` | str | Full description of the dataset. |
| `frequency` | str | Update frequency of the dataset. |
| `license` | str | License type (e.g. Etalab-2.0). |
| `temporal_coverage_start` | str | Start of the temporal coverage (if applicable). |
| `temporal_coverage_end` | str | End of the temporal coverage (if applicable). |
| `spatial_granularity` | str | Spatial granularity level (e.g. country, region). |
| `spatial_zones` | str | Names of the spatial zones covered. |
| `featured` | bool | Whether the dataset is featured. |
| `created_at` | str | Dataset creation date (ISO 8601). |
| `last_modified` | str | Last modification date (ISO 8601). |
| `tags` | str | Comma-separated list of tags associated with the dataset. |
| `archived` | str | Whether the dataset is archived. |
| `resources_count` | int | Total number of attached resources. |
| `main_resources_count` | int | Number of primary resources. |
| `resources_formats` | str | Formats used by the dataset's resources (e.g. CSV, JSON, PDF). |
| `harvest_backend` | str | Name of the harvest backend. |
| `harvest_domain` | str | Domain source of the harvested dataset. |
| `harvest_created_at` | str | Harvest creation date (ISO 8601). |
| `harvest_modified_at` | str | Harvest last update date (ISO 8601). |
| `harvest_remote_url` | str | Remote source URL of the harvested dataset. |
| `quality_score` | float | Quality score assigned by data.gouv.fr. |
| `metric_discussions` | int | Number of discussions related to the dataset. |
| `metric_reuses` | int | Number of declared reuses. |
| `metric_reuses_by_months` | str | Monthly reuse statistics (as a JSON string). |
| `metric_followers` | int | Number of users following the dataset. |
| `metric_followers_by_months` | str | Monthly follower statistics (as a JSON string or number). |
| `metric_views` | int | Number of views. |
| `metric_resources_downloads` | float | Number of resource downloads. |
| `chunk_text` | str | Text used for semantic embedding (title + organization + cropped description). |
| `embeddings_bge-m3` | str (stringified list) | Embedding of `chunk_text` using BAAI/bge-m3, stored as a JSON array string. |

## 🛠️ Data Processing Methodology

### 📥 1. Field Extraction

The original dataset was retrieved directly from the official dedicated dataset page. Only data.gouv.fr datasets with a description of at least 100 characters are included, in order to remove as much noise as possible from incomplete entries.
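For illustration, the length filter could look like the following pandas sketch; the actual extraction pipeline is not published here, so the file name and column layout are assumptions:

```python
import pandas as pd

# Hypothetical raw catalog export; the real pipeline and file layout may differ.
raw = pd.read_csv("catalog-export.csv", sep=";")

# Keep only datasets whose description has at least 100 characters.
filtered = raw[raw["description"].str.len().fillna(0) >= 100]
```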

### ✂️ 2. Text Chunking

The `chunk_text` field was created by combining the title, the organization name, and the description.

The description was cropped to a maximum of roughly 1,000 characters using LangChain's `RecursiveCharacterTextSplitter`.

The parameters used are:

- `chunk_size = 1000` (in order to limit as much noise as possible)
- `chunk_overlap = 20`
- `length_function = len`

Only the first split was kept, which yields a cropped description of at most roughly 1,000 characters.
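A minimal sketch of this cropping step, using LangChain's splitter with the parameters above (the exact separator used to assemble `chunk_text` is an assumption, as the card only lists its components):

```python
# pip install langchain-text-splitters
from langchain_text_splitters import RecursiveCharacterTextSplitter

splitter = RecursiveCharacterTextSplitter(
    chunk_size=1000,  # limit noise from overly long descriptions
    chunk_overlap=20,
    length_function=len,
)

def build_chunk_text(title: str, organization: str, description: str) -> str:
    # Keep only the first split, i.e. at most roughly 1,000 characters.
    splits = splitter.split_text(description)
    cropped = splits[0] if splits else ""
    # Assumed separator; the card only states title + organization + cropped description.
    return f"{title}\n{organization}\n{cropped}"
```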

### 🧠 3. Embedding Generation

Each chunk_text was embedded using the BAAI/bge-m3 model. The resulting embedding vector is stored in the embeddings_bge-m3 column as a string, but can easily be parsed back into a list[float] or NumPy array.
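As a sketch, comparable vectors can be produced with the FlagEmbedding package; this is an assumption about tooling, since the card does not state which library generated the embeddings:

```python
# pip install FlagEmbedding
from FlagEmbedding import BGEM3FlagModel

model = BGEM3FlagModel("BAAI/bge-m3", use_fp16=True)

chunks = ["Titre du jeu de données\nOrganisation\nDescription tronquée"]
output = model.encode(chunks)      # returns a dict of representations
dense_vecs = output["dense_vecs"]  # dense vectors, one 1024-dim row per chunk
```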

## 📌 Embeddings Notice

⚠️ The `embeddings_bge-m3` column is stored as a stringified list (e.g., `"[-0.03062629,-0.017049594,...]"`). To use it as a vector, you need to parse it into a list of floats or a NumPy array. For example, to load the dataset into a dataframe using the `datasets` library:

```python
import json

import pandas as pd
from datasets import load_dataset  # requires pyarrow: pip install pyarrow

dataset = load_dataset("AgentPublic/data-gouv-datasets-catalog")
df = pd.DataFrame(dataset["train"])
df["embeddings_bge-m3"] = df["embeddings_bge-m3"].apply(json.loads)
```

Otherwise, if you have already downloaded all the Parquet files from the `data/data-gouv-datasets-catalog-latest/` folder:

```python
import json

import pandas as pd  # requires pyarrow: pip install pyarrow

# Assumes all the Parquet files are located in this folder
df = pd.read_parquet("data-gouv-datasets-catalog-latest/")
df["embeddings_bge-m3"] = df["embeddings_bge-m3"].apply(json.loads)
```

You can then use the dataframe as you wish, for example by inserting its contents into the vector database of your choice.
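For instance, a brute-force semantic search over the parsed vectors could look like the sketch below. It assumes the `df` built in the snippets above, and it re-normalizes the vectors defensively rather than assuming they are stored unit-length:

```python
import numpy as np
from FlagEmbedding import BGEM3FlagModel  # assumed embedding library, as above

# Stack the parsed embeddings into one float32 matrix and L2-normalize the rows,
# so that the dot product below is a cosine similarity.
matrix = np.array(df["embeddings_bge-m3"].tolist(), dtype=np.float32)
matrix /= np.linalg.norm(matrix, axis=1, keepdims=True)

model = BGEM3FlagModel("BAAI/bge-m3", use_fp16=True)

def search(query, top_k=5):
    q = model.encode([query])["dense_vecs"][0].astype(np.float32)
    q /= np.linalg.norm(q)
    scores = matrix @ q
    idx = np.argsort(scores)[::-1][:top_k]
    return df.iloc[idx][["title", "url", "organization"]]

print(search("qualité de l'air en Île-de-France"))
```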

## 📚 Source & License

**🔗 Source:** the dedicated dataset page on data.gouv.fr

**📄 License:** Open License (Etalab). This dataset is publicly available and can be reused under the conditions of the Etalab open license.