Model(s) Release Checklist
The Hugging Face Hub is the go-to platform for sharing machine learning models. A well-executed release can boost your model’s visibility and impact. This section covers essential steps for a concise, informative, and user-friendly model release.
⏳ Preparing Your Model for Release
Writing a Comprehensive Model Card
A well-crafted model card (the `README.md` file in your repository) is essential for discoverability, reproducibility, and effective sharing. Your model card should include:
Metadata Configuration: The metadata section at the top of your model card (in YAML format) is crucial for discoverability and proper categorization. Be sure to include:
```yaml
---
pipeline_tag: text-generation    # Specify the task
library_name: transformers       # Specify the library
language:
- en                             # List languages for your model
license: apache-2.0              # Specify a license
datasets:
- username/dataset               # List datasets used for training
base_model: username/base-model  # If applicable
---
```
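If you prefer to manage this metadata from code, the `huggingface_hub` library provides `ModelCard` and `ModelCardData` helpers. Below is a minimal sketch, assuming placeholder repository and dataset names:

```python
from huggingface_hub import ModelCard, ModelCardData

# Structured metadata that becomes the YAML header of README.md
card_data = ModelCardData(
    language="en",
    license="apache-2.0",
    library_name="transformers",
    pipeline_tag="text-generation",
    datasets=["username/dataset"],     # placeholder dataset id
    base_model="username/base-model",  # placeholder base model id
)

# Assemble the card: YAML header followed by free-form Markdown sections
content = (
    "---\n"
    f"{card_data.to_yaml()}\n"
    "---\n\n"
    "# My Model\n\n"
    "A short description of what the model does and how to use it.\n"
)
card = ModelCard(content)
card.push_to_hub("username/my-model")  # placeholder repo id
```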
Detailed Model Description: Provide a clear explanation of what your model does, its architecture, and its intended use cases. This helps users quickly understand if your model fits their needs.
Usage Examples: Provide clear, actionable code snippets that demonstrate how to use your model for inference, fine-tuning, or other common tasks. These examples should be ready to copy and run with minimal modifications.
Technical Specifications: Include information about training parameters, hardware requirements, and any other technical details that would help users understand how to effectively use your model.
Performance Metrics: Share comprehensive benchmarks and evaluation results. Include both quantitative metrics and qualitative examples to give users a complete picture of your model’s capabilities and limitations.
Limitations and Biases: Transparently document any known limitations, biases, or ethical considerations associated with your model. This helps users make informed decisions about whether and how to use your model.
Enhancing Model Discoverability and Usability
To maximize your model’s reach and usability:
Library Integration: If possible, add support for one of the many libraries integrated with the Hugging Face Hub (such as Transformers or Diffusers). This integration significantly increases your model’s accessibility and provides users with code snippets for working with your model.
For example, to specify that your model works with the Transformers library:
```yaml
---
library_name: transformers
---
```
You can also create your own model library or add Hub support to an existing library or codebase.
Bonus: a recognised library also allows you to track downloads of your model over time.
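For example, if your checkpoint was trained with Transformers, uploading it with the library's own `push_to_hub` helpers produces a repository layout the Hub recognizes automatically. A sketch, assuming a local causal-LM checkpoint (the local path and repo name are placeholders):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the trained checkpoint from disk (path is a placeholder)
model = AutoModelForCausalLM.from_pretrained("./my-local-checkpoint")
tokenizer = AutoTokenizer.from_pretrained("./my-local-checkpoint")

# Upload weights, config, and tokenizer files to the Hub repository
model.push_to_hub("username/my-model")
tokenizer.push_to_hub("username/my-model")
```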
Pipeline Tag Selection: Choose the correct pipeline tag that accurately reflects your model’s primary task. This tag determines how your model appears in search results and which widgets are displayed on your model page.
Examples of common pipeline tags:
- `text-generation` - For language models that generate text
- `text-to-image` - For text-to-image generation models
- `image-text-to-text` - For vision-language models (VLMs) that generate text
- `text-to-speech` - For models that generate audio from text
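To see how the tag affects discovery, you can filter the Hub by task with `huggingface_hub`; a small sketch, assuming the `task` filter matches the declared pipeline tag:

```python
from huggingface_hub import HfApi

api = HfApi()

# Browse the most downloaded models that declare the `text-generation` pipeline tag
for model in api.list_models(task="text-generation", sort="downloads", direction=-1, limit=5):
    print(model.id, model.pipeline_tag)
```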
Research Papers: If your model has associated research papers, you can cite them in your model card and they will be linked automatically. This provides academic context, allows users to dive deeper into the theoretical foundations of your work, and increases citations.
```md
## References

* [Model Paper](https://arxiv.org/abs/xxxx.xxxxx)
```
Collections: If you’re releasing multiple related models or variants, organize them into a collection. Collections help users discover related models and understand the relationships between different versions or variants.
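Collections can be created in the UI or from code. A sketch using the `huggingface_hub` collections API, with placeholder names:

```python
from huggingface_hub import create_collection, add_collection_item

# Create a collection and attach the related model repos (all ids are placeholders)
collection = create_collection(
    title="My Model Family",
    description="Base, instruct, and quantized variants of my model",
)
add_collection_item(collection.slug, item_id="username/my-model", item_type="model")
add_collection_item(collection.slug, item_id="username/my-model-instruct", item_type="model")
```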
Demos: Create a Hugging Face Space with an interactive demo of your model. This allows users to try your model directly without writing any code, significantly lowering the barrier to adoption. You can also link the model from the Space so that the demo appears in a dedicated section of the model page.
```md
## Demo

Try this model directly in your browser: [Space Demo](https://huggingface.co/spaces/username/model-demo)
```
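A Space demo can be as small as a single Gradio script. The sketch below (repo id is a placeholder) wraps the hosted model's default widget, assuming the model is served by Hugging Face inference:

```python
import gradio as gr

# Build a ready-made demo UI backed by the hosted model
demo = gr.load("models/username/my-model")
demo.launch()
```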
Quantized Versions: Consider uploading quantized versions of your model (e.g., in GGUF or DDUF formats) to improve accessibility for users with limited computational resources. Link these versions using the `base_model` metadata field on the quantized model cards. You can also clearly document performance differences between the original and quantized versions.
```yaml
---
base_model: username/original-model
base_model_relation: quantized
---
```
Linking Datasets on the Model Page: List the datasets used for training in your `README.md` metadata so they are displayed directly on your model page.
```yaml
---
datasets:
- username/dataset
- username/dataset-2
---
```
New Model Version: If your model is an update of an existing one, you can specify it in the metadata of the older version's model card. This will display a banner on the older model's page linking directly to the updated version.
```yaml
---
new_version: username/updated-model
---
```
Visual Examples: For image or video generation models, include examples directly on your model page using the `<Gallery>` card component. Visual examples provide immediate insight into your model's capabilities.
```md
<Gallery>
</Gallery>
```
Carbon Emissions: If possible, specify the carbon emissions associated with training your model. This information helps environmentally conscious users and organizations make informed decisions.
```yaml
---
co2_eq_emissions:
  emissions: 123.45
  source: "CodeCarbon"
  training_type: "pre-training"
  geographical_location: "US-East"
  hardware_used: "8xA100 GPUs"
---
```
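One way to obtain such a figure is the CodeCarbon package; a sketch where `train()` stands in for your actual training loop:

```python
from codecarbon import EmissionsTracker

# Track emissions around the training run, then report the value in the metadata above
tracker = EmissionsTracker()
tracker.start()
train()  # placeholder for your actual training loop
emissions_kg = tracker.stop()  # returns CO2-equivalent emissions in kg
print(f"Training emitted ~{emissions_kg:.2f} kg CO2-eq")
```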
Access Control and Visibility
Visibility Settings: Once everything is finalized and you're ready to share your model with the world, switch your model to public visibility in your model settings. Before doing so, double-check all documentation and code examples to ensure they're accurate and complete.
Gated Access: If your model requires controlled access, use the gated access feature and clearly specify the conditions users must meet to gain access. This is particularly important for models with potential dual-use concerns or commercial restrictions.
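Both settings can also be changed from code. A sketch with `huggingface_hub`, assuming a recent library version where `update_repo_settings` accepts both `private` and `gated` (repo id is a placeholder):

```python
from huggingface_hub import HfApi

api = HfApi()

# Flip the repository from private to public once everything is verified
api.update_repo_settings("username/my-model", private=False)

# Or require users to request access first (manual approval by the repo owners)
api.update_repo_settings("username/my-model", gated="manual")
```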
🏁 After Releasing Your Model
A successful model release extends beyond the initial publication. To maximize impact and maintain quality:
Maintenance and Community Engagement
Verify Functionality: After release, verify that all provided code snippets work correctly by testing them in a clean environment. This ensures users can successfully implement your model without frustration.
For example, if your model is a Transformers-compatible LLM, you can try the following code snippet:
```python
from transformers import pipeline

# This should work without errors
pipe = pipeline("text-generation", model="your-username/your-model")
result = pipe("Your test prompt")
```
Share Share Share: Most people discover models on social media or in internal channels such as your company's Slack or email threads, so don't hesitate to share links to your models. Adding links from your website or GitHub projects is another good way to distribute them. The more people visit and like your model, the higher it climbs in the Hugging Face trending sections, leading to even more visibility!
Community Interaction: Engage with users in the Community Tab by answering questions, addressing feedback, and resolving issues quickly. Clarify confusion, adopt useful suggestions, and close off-topic discussions or pull requests to keep the space focused.
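You can keep an eye on new threads from code as well; a sketch using the `huggingface_hub` discussions API (repo id is a placeholder):

```python
from huggingface_hub import get_repo_discussions

# List discussions and pull requests opened on the model repo
for discussion in get_repo_discussions("username/my-model"):
    kind = "PR" if discussion.is_pull_request else "discussion"
    print(f"#{discussion.num} [{kind}] {discussion.title} ({discussion.status})")
```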
Tracking Usage and Impact
Usage Metrics: Monitor downloads and likes to track your model’s popularity and adoption. You can access total download metrics in your model settings.
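These numbers are also exposed through the Hub API; a sketch with `huggingface_hub` (repo id is a placeholder, and `downloads` reflects the rolling count returned by the API):

```python
from huggingface_hub import HfApi

api = HfApi()
info = api.model_info("username/my-model")  # placeholder repo id
print(f"Downloads: {info.downloads}")
print(f"Likes: {info.likes}")
```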
Monitor Contributions: Regularly check your model tree to discover contributions made by the community. These contributions can provide valuable insights and potential collaboration opportunities.
Enterprise Features
The Hugging Face Enterprise subscription offers additional capabilities:
Access Control: Set resource groups to control access for specific teams or users, ensuring appropriate permissions across your organization.
Storage Region: Select the data storage region (US/EU) for your model files to comply with regional data regulations and requirements.
Advanced Analytics: Use Enterprise Analytics features to gain deeper insights into usage patterns and adoption metrics.
Extended Storage: Access additional private storage capacity to host more models and larger artifacts as your model portfolio grows.
By following these comprehensive guidelines and examples, you’ll ensure your model release on Hugging Face is clear, impactful, and valuable. This will maximize the value of your work for the AI community and increase its visibility. Looking forward to your contributions!