PDF to TXT converter, ready to chunk for your RAG
WINDOWS ONLY
EXE and PY available (English and German)
better input = better output
⇨ give me a ❤️, if you like ;)
...
Most LLM applications simply convert your PDF to txt, nothing more; it is like saving your PDF as a txt file. Blocks of text that are close together are often mixed up, and tables cannot be read logically.
Therefore it is better to convert with the help of a parser. The embedder can then find better context.
I work with "pdfplumber/pdfminer", no OCR, so it is very fast!
- Works with a single PDF, a multi-PDF list, or a whole folder
- Intelligent multiprocessing
- Error tolerant: if a PDF is not convertible, it is skipped, no special handling needed
- Instant view of the result: click a PDF at the top of the list
- Converts some common tables as JSON inside the txt file
- Adds the absolute PAGE number to each page
- All txt files are created in the original folder of the PDF
- All previous txt files are overwritten
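The conversion approach above can be sketched roughly like this. This is a minimal illustration with pdfplumber, not the tool's actual code; the function names and the page-marker format are my own assumptions:

```python
# Minimal sketch of the described conversion: text per page, page markers,
# detected tables embedded as JSON, output txt next to the source PDF.
# NOTE: function names and marker format are illustrative assumptions.
import json
from pathlib import Path


def format_page(page_no: int, text: str, tables: list) -> str:
    """Build the txt content for one page: marker, text, tables as JSON."""
    parts = [f"--- PAGE {page_no} ---", text]
    for table in tables:
        # tables come back as lists of rows; store them as JSON lines
        parts.append(json.dumps(table, ensure_ascii=False))
    return "\n".join(parts)


def pdf_to_txt(pdf_path: str) -> Path:
    """Convert one PDF; the .txt is created (and overwritten) in the PDF's folder."""
    import pdfplumber  # third-party: pip install pdfplumber

    src = Path(pdf_path)
    out = src.with_suffix(".txt")
    pages = []
    with pdfplumber.open(src) as pdf:
        for page_no, page in enumerate(pdf.pages, start=1):
            pages.append(format_page(page_no,
                                     page.extract_text() or "",
                                     page.extract_tables()))
    out.write_text("\n".join(pages), encoding="utf-8")
    return out
```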
I created this with my brain and the help of ChatGPT; I am not a coder, sorry, so I will not fulfill any feature wishes unless there are real errors.
It is really hard for me to get the GUI and the functions working, and in addition to compile it.
For the python file you will of course need to install any missing libraries.
...
I also have a "docling" parser with OCR (a GPU is needed for fast processing); it is only a python file, not compiled.
You have to download all the libs, and on the first start the OCR models are downloaded as well. At the moment I have prepared a kind of multi-docling: the number of PDFs processed in parallel depends on your VRAM and on whether you use OCR only for tables or for everything. I have set VRAM = 16 GB (my GPU's RAM, you should set yours), and the number of parallel docling calls is VRAM/1.3, so it uses ~12 GB (in my version) and processes 12 PDFs at once. Only text and tables are converted, so no images and no diagrams (processing pages in parallel is too complicated). For now all PDFs must be in the same folder as the python file. If you switch OCR on for everything, the VRAM consumption rises and you have to raise the 1.3 to 2 or more.
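The VRAM-based parallelism described above can be sketched as follows. The divisor 1.3 comes from the description; the `convert_one` body is a placeholder, not docling's actual API:

```python
# Sketch of VRAM-based parallel conversion: worker count = VRAM / per-job
# footprint. The 1.3 divisor is from the description above; convert_one
# is a placeholder where the docling converter would run.
from concurrent.futures import ProcessPoolExecutor
from pathlib import Path

VRAM_GB = 16.0       # set this to your GPU's VRAM
VRAM_PER_JOB = 1.3   # raise to ~2.0 or more if OCR runs on all pages


def worker_count(vram_gb: float, per_job: float) -> int:
    """How many PDFs to convert at once, given the per-job VRAM footprint."""
    return max(1, int(vram_gb / per_job))


def convert_one(pdf: Path) -> None:
    ...  # placeholder: run the docling conversion for one PDF here


def convert_folder(folder: str) -> None:
    """Convert all PDFs in the script's folder, several at a time."""
    pdfs = sorted(Path(folder).glob("*.pdf"))
    with ProcessPoolExecutor(max_workers=worker_count(VRAM_GB, VRAM_PER_JOB)) as pool:
        list(pool.map(convert_one, pdfs))
```

With 16 GB VRAM and 1.3 GB per job this yields 12 parallel conversions, matching the numbers above.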
Now have fun, and leave a comment if you like ;)
On Discord: "sevenof9"
my embedder collection:
https://huggingface.co/kalle07/embedder_collection
I am not responsible for any errors or crashes on your system. If you use it, you do so at your own risk!