---
license: apache-2.0
task_categories:
  - token-classification
  - text-generation
  - fill-mask
  - text-classification
language:
  - en
tags:
  - ecommerce
  - e-commerce
  - retail
  - marketplace
  - shopping
  - amazon
  - ebay
  - alibaba
  - google
  - rakuten
  - bestbuy
  - walmart
  - flipkart
  - wayfair
  - shein
  - target
  - etsy
  - shopify
  - taobao
  - asos
  - carrefour
  - costco
  - overstock
size_categories:
  - 100B<n<1T
---

# Ecom-niverse

## The Need for an E-commerce Pre-training Dataset

Generic web-crawled corpora often lack the focused domain knowledge and unique formats found in specialized fields such as e-commerce. As a result, models pre-trained only on general data may lack essential retail knowledge and struggle with the semi-structured text formats common in e-commerce.

## What is Ecom-niverse?

We construct a comprehensive e-commerce token dataset by refining a broad web dataset to isolate content with a retail or shopping context. This curated corpus is intended for continual pre-training of LLMs and encoder-only models so that they better understand product descriptions, prices, and other commerce-related text.
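
As a minimal sketch, the corpus can be loaded with the Hugging Face `datasets` library. The dataset ID and the `text` field name below are placeholders, assuming a standard Hub layout:

```python
from datasets import load_dataset

# Placeholder dataset ID -- substitute the actual Hub path for Ecom-niverse.
ds = load_dataset("<org>/Ecom-niverse", split="train", streaming=True)

# Peek at a few records (the "text" field name is an assumption).
for i, record in enumerate(ds):
    print(record["text"][:200])
    if i == 2:
        break
```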

## Data Curation Methodology

Our starting point is the FineFineWeb dataset, an open-source web-scale corpus that organizes CommonCrawl web data into fine-grained topical domains. FineFineWeb consists of over 4.4 trillion tokens of English web text categorized into ~50 domains.

We leverage FineFineWeb as the raw data source. Each entry in FineFineWeb is a text snippet (a document or paragraph) accompanied by metadata, including its assigned domain label. Not all of these domains are relevant to retail commerce, so the first step is to identify which domains are likely to contain e-commerce content.
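
As an illustration of this first step, the sketch below streams FineFineWeb-style records and keeps only candidate domains. The dataset ID and the `text`/`domain` field names are assumptions rather than the published schema:

```python
from datasets import load_dataset

# A subset of the candidate domain labels discussed below (illustrative).
CANDIDATE_DOMAINS = {"hobby", "travel", "food", "fashion", "beauty"}

# Stream the web-scale corpus and keep only records whose assigned
# domain label falls in the candidate list.
stream = load_dataset("m-a-p/FineFineWeb", split="train", streaming=True)
candidates = stream.filter(
    lambda ex: str(ex.get("domain", "")).lower() in CANDIDATE_DOMAINS
)
```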

## Domains

We identified 9 domains that overlap with e-commerce and contain a significant number of relevant tokens but required filtering. Below are these domains and their filtered sizes:

| Domain | Size (GB) |
|---|---|
| Hobby | 114 |
| News | 66 |
| Health | 66 |
| Entertainment | 64 |
| Travel | 52 |
| Food | 22 |
| Automotive | 19 |
| Sports | 12 |
| Music and Dance | 7 |

Additionally, there are 6 more domains with almost complete overlap; these were taken directly from FineFineWeb.

| Domain | Size (GB) |
|---|---|
| Fashion | 37 |
| Beauty | 37 |
| Celebrity | 28 |
| Movie | 26 |
| Photo | 15 |
| Painting | 2 |

By focusing on these domains, we narrow the search space to parts of the web data where shopping-related text is likely to appear. However, even within a chosen domain, not every item is actually about buying or selling; many may be informational articles, news, or unrelated discussions. Thus, more fine-grained filtering within each domain is required to extract only the e-commerce-specific content. We accomplish this by training lightweight classifiers per domain to distinguish e-commerce content from non-e-commerce content.

## Data Filtering

To train a classifier for each domain, we first needed labeled examples of what constitutes "e-commerce context" in that domain. Manually labeling thousands of samples across 15+ domains would be tedious and time-consuming. Instead, we adopted a semi-automatic approach using a powerful large language model (LLM) as an annotator. Specifically, we used the Phi-4 model to help label data. For each selected domain, we sampled a subset of text items (a few hundred thousand samples) from FineFineWeb. These samples were chosen to cover a variety of content within the domain. We then prompted Phi-4 to evaluate each sample and decide whether it represents an e-commerce/shopping context or not. The prompt was designed to define e-commerce context (for example: "Does this text relate to buying or selling products, retail, or shopping? Respond with a label: ECOMMERCE or NOT ECOMMERCE."). We instructed the LLM to output the classification in a structured format (either as a JSON snippet or a simple Markdown list) for easy parsing.
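
The sketch below illustrates this annotation step with the `transformers` chat pipeline; the checkpoint ID, prompt wording, and parsing are illustrative assumptions rather than the exact production setup:

```python
from transformers import pipeline

# Chat-capable Phi-4 checkpoint; the exact model ID is an assumption.
annotator = pipeline("text-generation", model="microsoft/phi-4", device_map="auto")

PROMPT = (
    "Does this text relate to buying or selling products, retail, or shopping? "
    "Respond with a single label: ECOMMERCE or NOT ECOMMERCE.\n\nText:\n{sample}"
)

def label_sample(sample: str) -> str:
    """Ask the LLM for a binary e-commerce label and parse the reply."""
    messages = [{"role": "user", "content": PROMPT.format(sample=sample[:2000])}]
    out = annotator(messages, max_new_tokens=8, do_sample=False)
    reply = out[0]["generated_text"][-1]["content"].upper()
    return "NOT ECOMMERCE" if "NOT" in reply else "ECOMMERCE"
```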

It is important to ensure that the LLM's labeling is reliable. We found Phi-4 to be quite accurate when we verified samples of its annotations using Llama3-70B.

## Domain-specific Classifiers

With annotated samples in hand, the next step was to train a binary classifier for each domain to generalize the labeling to all data in that domain. We chose fastText, a lightweight text-classification library from Facebook AI Research, for this task. fastText is well suited to large-scale text filtering because of its speed and scalability: it often achieves accuracy on par with deep neural networks while training in a fraction of the time.
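
A minimal sketch of training one such per-domain classifier with the `fasttext` Python package; the file names and hyperparameters are illustrative assumptions:

```python
import fasttext

# Training file in fastText format, one labeled example per line, e.g.:
#   __label__ecommerce     Buy the new 4K smart TV at 20% off with free shipping ...
#   __label__not_ecommerce The committee met on Tuesday to discuss the budget ...
model = fasttext.train_supervised(
    input="hobby_llm_labeled_train.txt",  # per-domain labeled file (placeholder name)
    lr=0.1,
    epoch=10,
    wordNgrams=2,  # bigrams help capture phrases such as "add to cart"
    dim=100,
)
model.save_model("hobby_ecommerce_classifier.bin")

# Score a new document from the same domain.
labels, probs = model.predict("Shop our curated collection of model train kits", k=1)
```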

fastText has been successfully used in other data-curation pipelines; Qwen used a fastText classifier to curate pretraining data. We also note that IBM's recent GneissWeb pipeline combined multiple fastText filters to efficiently identify high-quality documents, showing the method's robustness when guided by good training data.

Before deploying these classifiers on the full data, we performed quick validations. We held out a portion of the LLM-labeled data (or performed cross-validation) to ensure that the fastText model accurately captured the LLM's judgments. In most cases we observed high accuracy; many domains saw >80–90% precision on the held-out set, since the categories are fairly distinct.
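
A sketch of that hold-out check, comparing fastText predictions against the LLM labels and reporting precision for the e-commerce class; the file names and label convention are assumptions:

```python
import fasttext

model = fasttext.load_model("hobby_ecommerce_classifier.bin")

true_pos = false_pos = 0
with open("hobby_llm_labeled_heldout.txt", encoding="utf-8") as f:  # placeholder file
    for line in f:
        gold, text = line.rstrip("\n").split(" ", 1)  # "__label__ecommerce <text>"
        pred = model.predict(text)[0][0]
        if pred == "__label__ecommerce":
            if gold == "__label__ecommerce":
                true_pos += 1
            else:
                false_pos += 1

precision = true_pos / (true_pos + false_pos) if (true_pos + false_pos) else 0.0
print(f"E-commerce precision on the held-out set: {precision:.2%}")
```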