Generated from https://github.com/yhyu13/AutoGPTQ.git, branch `cuda_dev`.
Original weight: https://huggingface.co/tiiuae/falcon-7b
Note: this is a quantization of the base model, which has not yet been fine-tuned with chat instructions.
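
For reference, below is a minimal sketch of how a GPTQ quantization like this one is typically produced with AutoGPTQ. The exact settings used on the `cuda_dev` branch are not recorded here, so the bit width, group size, and calibration sample are assumptions, not the actual configuration.

```python
# Sketch only: bits, group_size, and the calibration text below are
# placeholders; the real settings used for this model are not documented.
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig

base_model = "tiiuae/falcon-7b"
tokenizer = AutoTokenizer.from_pretrained(base_model, trust_remote_code=True)

quantize_config = BaseQuantizeConfig(
    bits=4,          # assumed 4-bit, a common GPTQ choice
    group_size=128,  # assumed group size
    desc_act=False,
)

# Load the fp16 base model with the quantization config attached.
model = AutoGPTQForCausalLM.from_pretrained(
    base_model, quantize_config, trust_remote_code=True
)

# GPTQ calibrates on a few tokenized samples; this one is a stand-in.
examples = [
    tokenizer(
        "The falcon is a bird of prey known for its speed.",
        return_tensors="pt",
    )
]
model.quantize(examples)
model.save_quantized("falcon-7b-gptq", use_safetensors=True)
```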
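To run the quantized weights, something along these lines should work. The `repo_id` below is a placeholder for wherever these weights are hosted, and `use_safetensors` is an assumption about how the files were saved; the tokenizer is loaded from the original base model linked above.

```python
# Loading sketch, assuming the quantized files follow AutoGPTQ's layout
# (quantize_config.json next to the weights). Replace repo_id with the
# actual Hub id or local path of this model.
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM

repo_id = "path/to/falcon-7b-gptq"  # placeholder

tokenizer = AutoTokenizer.from_pretrained(
    "tiiuae/falcon-7b", trust_remote_code=True
)
model = AutoGPTQForCausalLM.from_quantized(
    repo_id,
    device="cuda:0",         # the cuda_dev branch targets CUDA kernels
    trust_remote_code=True,  # Falcon uses custom modeling code
    use_safetensors=True,    # assumption about the saved file format
)

inputs = tokenizer("The falcon is", return_tensors="pt").to("cuda:0")
output = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Since this is the base (non-instruct) model, expect plain text continuation rather than chat-style responses.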