Text to Video
Generate a video based on a given text prompt.
For more details about the text-to-video task, check out its dedicated page! You will find examples and related materials.
Recommended models
- tencent/HunyuanVideo: A strong model for consistent video generation.
- Lightricks/LTX-Video: A text-to-video model with high fidelity motion and strong prompt adherence.
- Wan-AI/Wan2.1-T2V-1.3B: A robust model for video generation.
Explore all available models and find the one that suits you best here.
Using the API
from huggingface_hub import InferenceClient

client = InferenceClient(
    provider="fal-ai",
    api_key="hf_xxxxxxxxxxxxxxxxxxxxxxxx",
)

video = client.text_to_video(
    "A young man walking on the street",
    model="Wan-AI/Wan2.1-T2V-14B",
)
API specification
Request
Payload | Type | Description
---|---|---
inputs* | string | The input text data (sometimes called "prompt").
parameters | object | Additional inference parameters; the fields below are passed inside this object.
parameters.num_frames | number | The num_frames parameter determines how many video frames are generated.
parameters.guidance_scale | number | A higher guidance scale value encourages the model to generate videos closely linked to the text prompt, but values that are too high may cause saturation and other artifacts.
parameters.negative_prompt | string[] | One or several prompts to guide what NOT to include in the generated video.
parameters.num_inference_steps | integer | The number of denoising steps. More denoising steps usually lead to a higher-quality video at the expense of slower inference.
parameters.seed | integer | Seed for the random number generator.
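With the huggingface_hub client from the example above, these payload fields map onto keyword arguments of text_to_video. The sketch below assumes the argument names mirror the table and uses purely illustrative values; check the signature in your installed version of huggingface_hub:

video = client.text_to_video(
    "A young man walking on the street",
    model="Wan-AI/Wan2.1-T2V-14B",
    num_frames=81,             # illustrative value: number of frames to generate
    guidance_scale=5.0,        # illustrative value: prompt adherence vs. artifacts
    negative_prompt=["blurry", "low quality"],  # illustrative negative prompts
    num_inference_steps=30,    # illustrative value: quality vs. speed trade-off
    seed=42,                   # fix the seed for reproducible outputs
)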
Some options can be configured by passing headers to the Inference API. Here are the available headers:
Headers | Type | Description
---|---|---
authorization | string | Authentication header in the form 'Bearer hf_****', where hf_**** is a personal user access token with Inference API permission. You can generate one from your settings page.
x-use-cache | boolean, default to true | There is a cache layer on the Inference API to speed up requests that have already been seen. Most models can use those results, as they are deterministic (the outputs will be the same anyway). However, if you use a nondeterministic model, you can set this header to false to bypass the cache and force a genuinely new query. Read more about caching here.
x-wait-for-model | boolean, default to false | If the model is not ready, wait for it instead of receiving a 503 error. This limits the number of requests required to get your inference done. It is advised to only set this flag to true after receiving a 503 error, as it confines hanging in your application to known places. Read more about model availability here.
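As a sketch, these headers can be supplied when constructing the client; huggingface_hub's InferenceClient accepts a headers argument, though you should verify this against your installed version:

client = InferenceClient(
    provider="fal-ai",
    api_key="hf_xxxxxxxxxxxxxxxxxxxxxxxx",
    headers={
        "x-use-cache": "false",      # bypass the cache for nondeterministic generation
        "x-wait-for-model": "true",  # wait for the model instead of failing with a 503
    },
)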
For more information about Inference API headers, check out the parameters guide.
Response
Body | Type | Description
---|---|---
video | unknown | The generated video returned as raw bytes in the payload.