thebajajra committed
Commit 49077f8 · verified · 1 Parent(s): 7259eb3

Update README.md

Files changed (1)
  1. README.md +19 -5
README.md CHANGED
@@ -37,23 +37,22 @@ size_categories:
 
 # Ecom-niverse
 
- ## Why
+ ## Need for an E-commerce Pre-training Dataset
 
 Generic web-crawled corpora often lack the focused coverage of domain knowledge and the unique formats found in specialized fields such as e-commerce. As a result, models pre-trained only on general data may lack essential retail knowledge and struggle with the semi-structured text formats common in e-commerce.
 
- ## What
+ ## What is Ecom-niverse
 
 We construct a comprehensive e-commerce token dataset by refining a broad web dataset to isolate content with a retail or shopping context. This curated corpus is intended for continual pre-training of LLMs and encoder-only models so they better understand product descriptions, prices, and other commerce-related text.
 
 
- ## How
+ ## Data Curation Methodology
 
 Our starting point is the FineFineWeb dataset, an open-source web-scale corpus that organizes CommonCrawl web data into fine-grained topical domains. FineFineWeb consists of over 4.4 trillion tokens of English web text categorized into ~50 domains.
 
 We leverage FineFineWeb as the raw data source. Each entry in FineFineWeb is a text snippet (document or paragraph) accompanied by metadata, including its assigned domain label. Not all of these domains are relevant to retail commerce, so the first step is to identify which domains likely contain e-commerce content.
 
- By focusing on these domains, we narrow the search space to parts of the web data where shopping-related text is likely to appear. However, even within a chosen domain, not every item is actually about buying or selling – many may be informational articles, news, or unrelated discussions. Thus, a more fine-grained filtering within each domain is required to extract only the e-commerce-specific lines. We accomplish this by training lightweight classifiers per domain to distinguish e-commerce context vs. non-e-commerce content.
-
+ ### Domains
 We identified 9 e-commerce-overlapping domains that contain a significant amount of relevant tokens but required filtering. Below is the domain list and each domain's filtered size.
 | Domain | Size (GB) |
 |---|---|
@@ -77,3 +76,18 @@ Additionally, there are 6 more domains which had almost complete overlap and wer
 | Photo | 15 |
 | Painting | 2 |
 
+ By focusing on these domains, we narrow the search space to the parts of the web data where shopping-related text is likely to appear. However, even within a chosen domain, not every item is actually about buying or selling; many may be informational articles, news, or unrelated discussions. Thus, finer-grained filtering within each domain is required to extract only the e-commerce-specific content. We accomplish this by training lightweight per-domain classifiers to distinguish e-commerce from non-e-commerce content.
+
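As a rough illustration of this narrowing step (not the project's released code), streaming the candidate domains out of FineFineWeb could look like the sketch below; the repo id, the `text`/`domain` field names, and the domain labels are assumptions.

```python
# Sketch only: stream FineFineWeb and keep rows from candidate e-commerce domains.
# The repo id and the "text"/"domain" fields are assumptions, as are the labels.
from datasets import load_dataset

ECOM_DOMAINS = {"fashion", "hobby", "entertainment"}  # hypothetical domain labels
PER_DOMAIN_CAP = 1_000                                # small cap for illustration

stream = load_dataset("m-a-p/FineFineWeb", split="train", streaming=True)

subsets = {d: [] for d in ECOM_DOMAINS}
for row in stream:
    domain = row.get("domain")
    if domain in subsets and len(subsets[domain]) < PER_DOMAIN_CAP:
        subsets[domain].append(row["text"])
    # Stop once every candidate domain has enough samples.
    if all(len(texts) >= PER_DOMAIN_CAP for texts in subsets.values()):
        break
```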
+ ### Data Filtering
+ To train a classifier for each domain, we first needed labeled examples of what constitutes "e-commerce context" in that domain. Manually labeling thousands of samples across 15+ domains would be tedious and time-consuming, so we adopted a semi-automatic approach that uses a powerful large language model (LLM) as an annotator; specifically, we used the Phi-4 model to label the data.
+ For each selected domain, we sampled a subset of text items (a few hundred thousand samples) from FineFineWeb, chosen to cover a variety of content within the domain. We then prompted Phi-4 to evaluate each sample and decide whether it represents an e-commerce/shopping context. The prompt defined the e-commerce context (for example: "Does this text relate to buying or selling products, retail, or shopping? Respond with a label: ECOMMERCE or NOT ECOMMERCE.") and instructed the LLM to output its classification in a structured format (a JSON snippet or a simple Markdown list) for easy parsing.
+
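A minimal sketch of this annotation loop, assuming Phi-4 is called through the Hugging Face `transformers` text-generation pipeline; the model id and the prompt wording are illustrative, not the pipeline's actual prompt:

```python
# Sketch only: ask an LLM annotator for an ECOMMERCE / NOT ECOMMERCE label.
# "microsoft/phi-4" and the prompt text below are assumptions for illustration.
from transformers import pipeline

annotator = pipeline("text-generation", model="microsoft/phi-4")

PROMPT = (
    "Does this text relate to buying or selling products, retail, or shopping?\n"
    "Respond with a single label: ECOMMERCE or NOT ECOMMERCE.\n\nText:\n{doc}"
)

def label_sample(doc: str) -> str:
    out = annotator(
        PROMPT.format(doc=doc[:4000]),  # truncate very long documents
        max_new_tokens=5,
        do_sample=False,
        return_full_text=False,
    )
    answer = out[0]["generated_text"].upper()
    return "NOT ECOMMERCE" if "NOT" in answer else "ECOMMERCE"
```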
+ It is important to ensure that the LLM's labeling is reliable; we verified samples of the annotated data with Llama3-70B and found Phi-4's labels to be quite accurate.
+
+ #### Domain-specific Classifiers
+ With the annotated samples in hand, the next step was to train a binary classifier for each domain to generalize the labeling to all data in that domain. We chose fastText, a lightweight text-classification library from Facebook AI Research, for this task.
+ FastText is well suited to large-scale text filtering because of its speed and scalability: it often achieves accuracy on par with deep neural networks while training in a fraction of the time.
+
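A minimal sketch of one such per-domain classifier, assuming the LLM labels have been written out in fastText's `__label__` line format; the file names and hyperparameters are illustrative:

```python
# Sketch only: train and apply one per-domain binary filter with fastText.
# Training file format, one example per line:
#   __label__ecommerce  Buy 2 get 1 free on all running shoes ...
#   __label__not_ecommerce  The history of the cotton trade dates back ...
import fasttext

model = fasttext.train_supervised(
    input="fashion_train.txt",  # assumed file of LLM-labeled lines
    epoch=5,
    lr=0.5,
    wordNgrams=2,
)
model.save_model("fashion_ecom_filter.bin")

# Score an unseen line; predict returns (labels, probabilities).
labels, probs = model.predict("Wireless earbuds, 30% off, free shipping")
```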
+ FastText has been successfully used in other data-curation pipelines: Qwen used a fastText classifier to curate pretraining data, and IBM's recent GneissWeb pipeline combined multiple fastText filters to efficiently identify high-quality documents, demonstrating the method's robustness when guided by good training data.
+
+ Before deploying these classifiers on the full data, we performed quick validations. We held out a portion of the LLM-labeled data (or used cross-validation) to check that each fastText model accurately captured the LLM's judgments. In most cases we observed high accuracy, with many domains reaching >80–90% precision on the held-out set, since the categories are fairly distinct.
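That held-out check maps directly onto fastText's built-in evaluation; a sketch, with illustrative file names:

```python
# Sketch only: evaluate a trained per-domain filter on held-out LLM-labeled data.
import fasttext

model = fasttext.load_model("fashion_ecom_filter.bin")

# test() returns (number of examples, precision@1, recall@1).
n, precision, recall = model.test("fashion_heldout.txt")
print(f"n={n}  precision@1={precision:.3f}  recall@1={recall:.3f}")
```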