bling-tiny-llama-npu-ov
bling-tiny-llama-npu-ov is a very small, very fast fact-based question-answering model designed for retrieval-augmented generation (RAG) with complex business documents. It is quantized to int4 and packaged in OpenVINO format for AI PCs with an Intel NPU.
This model is one of the smallest and fastest in the series. For higher accuracy, consider the larger models in the BLING/DRAGON series.
Model Description
- Developed by: llmware
- Model type: tinyllama
- Parameters: 1.1 billion
- Quantization: int4
- Model Parent: llmware/bling-tiny-llama-v0
- Language(s) (NLP): English
- License: Apache 2.0
- Uses: Fact-based question-answering, RAG
- RAG Benchmark Accuracy Score: 86.5
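To try the model locally, a minimal sketch using the llmware Python package is shown below. This assumes `llmware` and an OpenVINO runtime are installed, and that the model name on this card resolves in llmware's ModelCatalog; the invoice text and question are illustrative placeholders.

```python
# Minimal sketch (assumptions: `pip install llmware` with OpenVINO support,
# and "bling-tiny-llama-npu-ov" registered in llmware's ModelCatalog).
from llmware.models import ModelCatalog

# Load the int4 OpenVINO-packaged model
model = ModelCatalog().load_model("bling-tiny-llama-npu-ov")

# BLING models answer fact-based questions grounded in a supplied
# context passage (RAG style), rather than open-ended chat.
context = ("The invoice total is $12,450, due on March 15, "
           "payable to Acme Corporation.")

response = model.inference("What is the total amount due?",
                           add_context=context)
print(response)
```

As with other BLING models, accuracy depends on passing the relevant source text in the context; without a context passage the model is not intended for general knowledge questions.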