---
base_model: spamsoms/LCM-kotosmix_diffusers
tags:
- openvino
- openvino-export
---

This model was converted to OpenVINO from [`spamsoms/LCM-kotosmix_diffusers`](https://huggingface.co/spamsoms/LCM-kotosmix_diffusers) using [optimum-intel](https://github.com/huggingface/optimum-intel) via the [export](https://huggingface.co/spaces/echarlaix/openvino-export) space.

First, make sure you have optimum-intel installed:

```bash
pip install optimum[openvino]
```

To load the model and generate an image, you can do the following:

```python
from optimum.intel import OVStableDiffusionPipeline
from diffusers import LCMScheduler
import torch

model_id = "hsuwill000/LCM-kotosmix_diffusers-openvino"
HEIGHT = 1024
WIDTH = 1024
batch_size = -1  # -1 keeps the batch dimension dynamic; set a positive integer to fix it

prompt = "agirl, anime,"
negative_prompt = (
    "(deformed, distorted, disfigured:1.3), poorly drawn, bad anatomy, wrong anatomy, "
    "extra limb, missing limb, floating limbs, (mutated hands and fingers:1.4), disconnected limbs, "
    "mutation, mutated, ugly, disgusting, blurry, amputation"
)

pipe = OVStableDiffusionPipeline.from_pretrained(
    model_id,
    compile=False,
    ov_config={"CACHE_DIR": ""},
    torch_dtype=torch.bfloat16,  # More standard dtype for speed
    safety_checker=None,
    use_safetensors=False,
)

# Swap in the LCM scheduler so only a few denoising steps are needed.
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)
print(pipe.scheduler.compatibles)  # optional: list the schedulers compatible with this pipeline

# Reshape to static input sizes before compiling for faster OpenVINO inference.
pipe.reshape(batch_size=batch_size, height=HEIGHT, width=WIDTH, num_images_per_prompt=1)
pipe.compile()

image = pipe(
    prompt=prompt,
    negative_prompt=negative_prompt,
    width=WIDTH,
    height=HEIGHT,
    guidance_scale=2,
    num_inference_steps=4,
    num_images_per_prompt=1,
).images[0]

image.save("test.png")
```
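
By default the compiled pipeline runs on the CPU. If an Intel GPU is available and visible to OpenVINO, you can switch devices before compiling. Below is a minimal sketch, assuming the GPU is exposed under the device name `"GPU"` (you can check with `openvino.Core().available_devices`):

```python
import openvino as ov

# Assumption: an Intel GPU is installed and exposed to OpenVINO as "GPU".
print(ov.Core().available_devices)  # e.g. ['CPU', 'GPU']

pipe.to("GPU")   # select the OpenVINO GPU device
pipe.compile()   # recompile for the new device before running inference
```

Switching devices invalidates the previously compiled model, so `pipe.compile()` has to be called again before the next generation.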