BIG FAN OF THE READER API

#1
by Svngoku - opened

Incredible work by the team! Thanks for the effort, the model is good and the notebook is very informative! A quantized version of the model would make it even more accessible!

Jina AI org

I thought the gains from quantizing a 1.5B-parameter model would be limited. But why not, it is an interesting idea to see what we can get out of quantization.

Jina AI org

@Svngoku oh, I just noticed you have taken a shot at it: https://huggingface.co/Svngoku/ReaderLM-v2-Q8_0-GGUF How does it feel?

Yes indeed, I quantized the model to 8-bit GGUF and tested it with the same notebook. It works fine but still consumes just as much RAM. In terms of time, execution takes about 4 minutes on an L4 high-RAM instance (22.5 GB) for 28,984 generated tokens.

Code

!wget https://huggingface.co/Svngoku/ReaderLM-v2-Q8_0-GGUF/resolve/main/readerlm-v2-q8_0.gguf
from vllm import LLM

llm = LLM(
    model="/content/readerlm-v2-q8_0.gguf",
    max_model_len=max_model_len,
    tokenizer='jinaai/ReaderLM-v2'
)
Jina AI org

For long-context jobs, weight quantization brings limited gains, since the KV cache takes up most of the VRAM. So I think KV cache quantization would be a better fit in this case.
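As a rough sketch, assuming a vLLM build and GPU that support an fp8 KV cache, it could look like this (max_model_len is a placeholder):

from vllm import LLM

# Sketch: store the KV cache in 8-bit floating point instead of fp16,
# roughly halving its VRAM footprint at long context lengths.
llm = LLM(
    model="jinaai/ReaderLM-v2",
    max_model_len=131_072,   # placeholder; adjust to the context you need
    kv_cache_dtype="fp8",    # supported values depend on vLLM version and GPU
)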
