# Model Card for davidberenstein1957/Sana_600M_512px_diffusers
This model was created using the pruna library. Pruna is a model optimization framework built for developers, enabling you to deliver more efficient models with minimal implementation overhead.
## Usage
First, install the pruna library:

```bash
pip install pruna
```

You can then load this model using the following code:

```python
from pruna import PrunaModel

loaded_model = PrunaModel.from_hub("davidberenstein1957/Sana_600M_512px_diffusers")
```
After loading the model, you can use the inference methods of the original model.
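For example, since the underlying model is a Sana text-to-image diffusers pipeline, inference might look like the sketch below. This assumes the `PrunaModel` wrapper forwards calls to the wrapped pipeline's usual text-to-image interface; the prompt and output filename are illustrative.

```python
from pruna import PrunaModel

# Load the compressed model from the Hugging Face Hub.
loaded_model = PrunaModel.from_hub("davidberenstein1957/Sana_600M_512px_diffusers")

# Assuming the wrapper exposes the original pipeline's call signature,
# a standard diffusers text-to-image call applies.
image = loaded_model("A cheetah running through tall grass at sunset").images[0]
image.save("cheetah.png")  # illustrative output path
```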
## Smash Configuration
The compression configuration of the model is stored in the `smash_config.json` file.
```json
{
  "batcher": null,
  "cacher": null,
  "compiler": null,
  "factorizer": null,
  "pruner": null,
  "quantizer": "hqq_diffusers",
  "hqq_diffusers_backend": "torchao_int4",
  "hqq_diffusers_group_size": 64,
  "hqq_diffusers_weight_bits": 8,
  "batch_size": 1,
  "device": "mps",
  "save_fns": [
    "hqq_diffusers"
  ],
  "load_fns": [
    "hqq_diffusers"
  ],
  "reapply_after_load": {
    "factorizer": null,
    "pruner": null,
    "quantizer": null,
    "cacher": null,
    "compiler": null,
    "batcher": null
  }
}
```
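To give an intuition for the `hqq_diffusers_group_size` setting above: in group-wise quantization, weights are split into groups (here, 64 values per group) and each group gets its own scale factor, which keeps quantization error low even when weight magnitudes vary across a tensor. The sketch below is a simplified symmetric round-to-nearest illustration of this idea, not the actual HQQ algorithm, and all function names are hypothetical.

```python
def quantize_groupwise(weights, group_size=64, bits=8):
    """Quantize a flat list of floats with one scale per group of values.

    Simplified illustration of group-wise quantization (not HQQ itself).
    """
    qmax = 2 ** (bits - 1) - 1  # e.g. 127 for signed 8-bit
    quantized, scales = [], []
    for start in range(0, len(weights), group_size):
        group = weights[start:start + group_size]
        # Per-group scale maps the largest magnitude in the group to qmax.
        scale = max(abs(w) for w in group) / qmax or 1.0
        quantized.append([round(w / scale) for w in group])
        scales.append(scale)
    return quantized, scales


def dequantize_groupwise(quantized, scales):
    """Reconstruct approximate floats from per-group ints and scales."""
    return [q * s for group, s in zip(quantized, scales) for q in group]
```

Smaller groups mean more scale factors (slightly more storage) but tighter per-group ranges and lower reconstruction error; `group_size=64` is a common middle ground.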
## Model Configuration
The configuration of the model is stored in the `config.json` file.

```json
{}
```
## Join the Pruna AI community!