Update README.md
README.md
CHANGED
@@ -23,3 +23,32 @@ configs:
  - split: train
    path: data/train-*
---

Extracted lists of page images from PDF resumes, along with the text extracted from each PDF.

Created using this code:

```python
import io
import PIL.Image
from datasets import load_dataset

def render(pdf):
    # Render every page of the PDF to a PIL image (840 px tall)
    images = []
    for page in pdf.pages:
        buffer = io.BytesIO()
        page.to_image(height=840).save(buffer)
        images.append(PIL.Image.open(buffer))
    return images

def extract_text(pdf):
    # Concatenate the text of all pages
    return "\n".join(page.extract_text() for page in pdf.pages)

ds = load_dataset("d4rk3r/resumes-raw-pdf", split="train")
ds = ds.map(lambda x: {
    "images": render(x["pdf"]),
    "text": extract_text(x["pdf"])
}, remove_columns=["pdf"])
ds = ds.filter(lambda x: len(x["text"]) > 0)  # drop resumes with no extractable text
ds.push_to_hub("lhoestq/resumes-raw-pdf-for-ocr")
```
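
The resulting dataset can then be loaded back from the Hub. A minimal sketch, assuming the `train` split and the `images`/`text` columns produced by the script above:

```python
from datasets import load_dataset

# Load the processed dataset (assumes a "train" split was pushed above)
ds = load_dataset("lhoestq/resumes-raw-pdf-for-ocr", split="train")

example = ds[0]
print(example["text"][:200])       # beginning of the extracted resume text
first_page = example["images"][0]  # first rendered page as a PIL image
first_page.save("first_page.png")
```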