manzked committed · verified
Commit abc585f · 1 parent: d57cb53

Update README.md

Files changed (1): README.md (+4 −3)
README.md CHANGED

@@ -7,11 +7,12 @@ pretty_name: 1m huffpost news article links
 size_categories:
 - 100K<n<1M
 ---
+# huffpost news article dataset
 The dataset consists of >1M links to huffpost news articles. They were crawled as an educational exercise in how content can be gathered by reading a website's sitemap(s).
 
 The code can be found here: [Link](https://github.com/manzke/huffpost-crawler/)
 
-# huffpost-crawler
+## huffpost-crawler
 educational - how to use sitemap.xml for crawling a website (huffpost.com, but it could be any site)
 
 Every public website has (or should have) a sitemap.xml. The sitemap.xml lets robots such as Google's crawler find the links that should be crawled.
@@ -33,7 +34,7 @@ What does our crawler do?
 
 This led to 1,121,569 unique links for new entries.
 
-## robots.txt
+### robots.txt
 
 Many LLMs have been trained on public data. To forbid this, sites have added
 
@@ -88,7 +89,7 @@ Sitemap: https://www.huffpost.com/sitemaps-huffingtonpost/sitemap.xml
 Sitemap: https://www.huffpost.com/sitemaps-huffingtonpost/sections.xml
 ```
 
-## Inspired by
+### Inspired by
 ```
 @article{misra2019sarcasm,
 title={Sarcasm Detection using Hybrid Neural Network},
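The sitemap-based approach the README describes can be sketched in a few lines. This is a minimal illustration, not the repository's actual code: the XML payload and article URLs are made-up stand-ins for a fetched huffpost sitemap, which a real crawler would download over HTTP first.

```python
# Minimal sketch: extract article links from a sitemap.xml document.
# The XML below is an illustrative stand-in for a fetched sitemap;
# a real crawler would first download it, e.g. with urllib.request.
import xml.etree.ElementTree as ET

# Standard sitemap namespace, as defined by the Sitemaps protocol.
SITEMAP_NS = "{http://www.sitemaps.org/schemas/sitemap/0.9}"

# Example payload in the standard <urlset> format (illustrative URLs).
sitemap_xml = """<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url><loc>https://www.huffpost.com/entry/example-article-1</loc></url>
  <url><loc>https://www.huffpost.com/entry/example-article-2</loc></url>
</urlset>"""

def extract_links(xml_text: str) -> list[str]:
    """Return every <loc> URL found in a sitemap <urlset> document."""
    root = ET.fromstring(xml_text)
    return [loc.text for loc in root.iter(f"{SITEMAP_NS}loc")]

links = extract_links(sitemap_xml)
print(links)
```

Deduplicating the extracted links across all of a site's sitemaps (e.g. with a `set`) is what yields a unique-link count like the 1,121,569 mentioned above.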
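The robots.txt mechanism mentioned in the README can likewise be illustrated with Python's standard `urllib.robotparser`. The rules below are hypothetical (the diff truncates before showing what huffpost actually added); `GPTBot` is used only as an example of an AI crawler a site might block, and a real crawler would load the live file via `RobotFileParser.set_url()` and `.read()`.

```python
# Minimal sketch: honour robots.txt rules before crawling.
# The rules below are illustrative, not huffpost's actual robots.txt.
from urllib.robotparser import RobotFileParser

robots_txt = """\
User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

# The AI crawler named in the rules is blocked ...
print(rp.can_fetch("GPTBot", "https://www.huffpost.com/entry/example"))     # False
# ... while other crawlers remain allowed.
print(rp.can_fetch("MyCrawler", "https://www.huffpost.com/entry/example"))  # True
```

A polite crawler checks `can_fetch()` for its own user agent before requesting each URL discovered via the sitemap.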