SmolVLM: Redefining small and efficient multimodal models
Paper • 2504.05299 • Published • 158
pip install -U huggingface_hub[hf_xet]
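For context, hf_xet is the optional extra that pulls in the Xet transfer client; once it is installed, the usual upload calls pick it up automatically for Xet-enabled repos. A minimal sketch, with placeholder file name and repo id:

from huggingface_hub import HfApi

api = HfApi()
# With hf_xet installed, transfers to Xet-enabled repos go through the
# chunk-deduplicated Xet backend automatically; the call itself is unchanged.
api.upload_file(
    path_or_fileobj="model.safetensors",   # placeholder local file
    path_in_repo="model.safetensors",
    repo_id="my-username/my-model",        # placeholder repo id
)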
from huggingface_hub import InferenceClient

# Route the request through a third-party provider and bill the usage
# to an organization instead of the personal account.
client = InferenceClient(provider="fal-ai", bill_to="my-cool-company")

image = client.text_to_image(
    "A majestic lion in a fantasy forest",
    model="black-forest-labs/FLUX.1-schnell",
)
image.save("lion.png")
huggingface-cli upload-large-folder. Designed for your massive models and datasets. Much recommended if you struggle to upload your Llama 70B fine-tuned model 🤡

pip install huggingface_hub==0.25.0
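For reference, the same upload-large-folder workflow is also exposed from Python. A minimal sketch, assuming huggingface_hub >= 0.25 and placeholder repo id and folder path:

from huggingface_hub import HfApi

api = HfApi()
# Resumable, multi-worker upload of a large local checkpoint folder;
# repo_id and folder_path below are placeholders.
api.upload_large_folder(
    repo_id="my-username/my-llama-70b-finetune",
    repo_type="model",
    folder_path="./checkpoints/llama-70b",
)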
float16. However, there's some precision loss somewhere and generation doesn't work in float16 mode yet. I'm looking into this and will keep you posted! Or take a look at this issue if you'd like to help: https://github.com/huggingface/swift-transformers/issues/95
Recent huggingface_hub Python library releases:
- ModelHubMixin integrations
- HfFileSystem
- PyTorchModelHubMixin now supports configs and safetensors (see the sketch below)
- audio-to-audio is now supported in the InferenceClient
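As a rough illustration of that mixin point, here is a minimal sketch of how PyTorchModelHubMixin stores the __init__ config alongside safetensors weights; the toy model class, sizes, and repo/directory names are made up:

import torch.nn as nn
from huggingface_hub import PyTorchModelHubMixin

# Toy model: inheriting from the mixin adds save_pretrained / from_pretrained /
# push_to_hub to a plain nn.Module.
class TinyClassifier(nn.Module, PyTorchModelHubMixin):
    def __init__(self, hidden_size: int = 64, num_labels: int = 2):
        super().__init__()
        self.backbone = nn.Linear(hidden_size, hidden_size)
        self.head = nn.Linear(hidden_size, num_labels)

    def forward(self, x):
        return self.head(self.backbone(x).relu())

model = TinyClassifier(hidden_size=128, num_labels=3)
# The json-serializable __init__ kwargs are written to config.json and the
# weights to model.safetensors in the target directory.
model.save_pretrained("tiny-classifier")
# Reloading re-injects the saved config into __init__.
reloaded = TinyClassifier.from_pretrained("tiny-classifier")
# model.push_to_hub("my-username/tiny-classifier")  # placeholder repo id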