@dataclass
class EvalResult:
"""
Flattened representation of individual evaluation results found in model-index of Model Cards.
For more information on the model-index spec, see https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1.
Args:
task_type (`str`):
The task identifier. Example: "image-classification".
dataset_type (`str`):
The dataset identifier. Example: "common_voice". Use dataset id from https://hf.co/datasets.
dataset_name (`str`):
A pretty name for the dataset. Example: "Common Voice (French)".
metric_type (`str`):
The metric identifier. Example: "wer". Use metric id from https://hf.co/metrics.
metric_value (`Any`):
The metric value. Example: 0.9 or "20.0 ± 1.2".
task_name (`str`, *optional*):
A pretty name for the task. Example: "Speech Recognition".
dataset_config (`str`, *optional*):
The name of the dataset configuration used in `load_dataset()`.
Example: fr in `load_dataset("common_voice", "fr")`. See the `datasets` docs for more info:
https://hf.co/docs/datasets/package_reference/loading_methods#datasets.load_dataset.name
dataset_split (`str`, *optional*):
The split used in `load_dataset()`. Example: "test".
dataset_revision (`str`, *optional*):
The revision (AKA Git Sha) of the dataset used in `load_dataset()`.
Example: 5503434ddd753f426f4b38109466949a1217c2bb
dataset_args (`Dict[str, Any]`, *optional*):
The arguments passed during `Metric.compute()`. Example for `bleu`: `{"max_order": 4}`
metric_name (`str`, *optional*):
A pretty name for the metric. Example: "Test WER".
metric_config (`str`, *optional*):
The name of the metric configuration used in `load_metric()`.
Example: bleurt-large-512 in `load_metric("bleurt", "bleurt-large-512")`.
See the `datasets` docs for more info: https://huggingface.co/docs/datasets/v2.1.0/en/loading#load-configurations
metric_args (`Dict[str, Any]`, *optional*):
The arguments passed during `Metric.compute()`. Example for `bleu`: `{"max_order": 4}`.
verified (`bool`, *optional*):
Indicates whether the metrics originate from Hugging Face's [evaluation service](https://huggingface.co/spaces/autoevaluate/model-evaluator) or not. Automatically computed by Hugging Face, do not set.
verify_token (`str`, *optional*):
A JSON Web Token that is used to verify whether the metrics originate from Hugging Face's [evaluation service](https://huggingface.co/spaces/autoevaluate/model-evaluator) or not.
source_name (`str`, *optional*):
The name of the source of the evaluation result. Example: "Open LLM Leaderboard".
source_url (`str`, *optional*):
The URL of the source of the evaluation result. Example: "https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard".
"""
# Required
# The task identifier
# Example: automatic-speech-recognition
task_type: str
# The dataset identifier
# Example: common_voice. Use dataset id from https://hf.co/datasets
dataset_type: str
# A pretty name for the dataset.
# Example: Common Voice (French)
dataset_name: str
# The metric identifier
# Example: wer. Use metric id from https://hf.co/metrics
metric_type: str
# Value of the metric.
# Example: 20.0 or "20.0 ± 1.2"
metric_value: Any
# Optional
# A pretty name for the task.
# Example: Speech Recognition
task_name: Optional[str] = None
# The name of the dataset configuration used in `load_dataset()`.
# Example: fr in `load_dataset("common_voice", "fr")`.
# See the `datasets` docs for more info:
# https://huggingface.co/docs/datasets/package_reference/loading_methods#datasets.load_dataset.name
dataset_config: Optional[str] = None
# The split used in `load_dataset()`.
# Example: test
dataset_split: Optional[str] = None
# The revision (AKA Git Sha) of the dataset used in `load_dataset()`.
# Example: 5503434ddd753f426f4b38109466949a1217c2bb
dataset_revision: Optional[str] = None
# The arguments passed during `Metric.compute()`.
# Example for `bleu`: max_order: 4
dataset_args: Optional[Dict[str, Any]] = None
# A pretty name for the metric.
# Example: Test WER
metric_name: Optional[str] = None
# The name of the metric configuration used in `load_metric()`.
# Example: bleurt-large-512 in `load_metric("bleurt", "bleurt-large-512")`.
# See the `datasets` docs for more info: https://huggingface.co/docs/datasets/v2.1.0/en/loading#load-configurations
metric_config: Optional[str] = None
# The arguments passed during `Metric.compute()`.
# Example for `bleu`: max_order: 4
metric_args: Optional[Dict[str, Any]] = None
# Indicates whether the metrics originate from Hugging Face's [evaluation service](https://huggingface.co/spaces/autoevaluate/model-evaluator) or not. Automatically computed by Hugging Face, do not set.
verified: Optional[bool] = None
# A JSON Web Token that is used to verify whether the metrics originate from Hugging Face's [evaluation service](https://huggingface.co/spaces/autoevaluate/model-evaluator) or not.
verify_token: Optional[str] = None
# The name of the source of the evaluation result.
# Example: Open LLM Leaderboard
source_name: Optional[str] = None
# The URL of the source of the evaluation result.
# Example: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard
source_url: Optional[str] = None
@property
def unique_identifier(self) -> tuple:
"""Returns a tuple that uniquely identifies this evaluation."""
return (
self.task_type,
self.dataset_type,
self.dataset_config,
self.dataset_split,
self.dataset_revision,
)
def is_equal_except_value(self, other: "EvalResult") -> bool:
"""
Return True if `self` and `other` describe exactly the same metric but with a
different value.
"""
for key in self.__dict__:
if key == "metric_value":
continue
# For metrics computed by Hugging Face's evaluation service, `verify_token` is derived from `metric_value`,
# so we exclude it here in the comparison.
if key != "verify_token" and getattr(self, key) != getattr(other, key):
return False
return True
def __post_init__(self) -> None:
if self.source_name is not None and self.source_url is None:
raise ValueError("If `source_name` is provided, `source_url` must also be provided.") | class_definition | 248 | 7,185 | 0 | /Users/nielsrogge/Documents/python_projecten/huggingface_hub/src/huggingface_hub/repocard_data.py | null | 0 |
class CardData:
"""Structure containing metadata from a RepoCard.
[`CardData`] is the parent class of [`ModelCardData`] and [`DatasetCardData`].
Metadata can be exported as a dictionary or YAML. Export can be customized to alter the representation of the data
(example: flatten evaluation results). `CardData` behaves as a dictionary (you can get, pop, and set values) but does not
inherit from `dict`, to allow this export step.
"""
def __init__(self, ignore_metadata_errors: bool = False, **kwargs):
self.__dict__.update(kwargs)
def to_dict(self):
"""Converts CardData to a dict.
Returns:
`dict`: CardData represented as a dictionary ready to be dumped to a YAML
block for inclusion in a README.md file.
"""
data_dict = copy.deepcopy(self.__dict__)
self._to_dict(data_dict)
return {key: value for key, value in data_dict.items() if value is not None}
def _to_dict(self, data_dict):
"""Use this method in child classes to alter the dict representation of the data. Alter the dict in-place.
Args:
data_dict (`dict`): The raw dict representation of the card data.
"""
pass
def to_yaml(self, line_break=None, original_order: Optional[List[str]] = None) -> str:
"""Dumps CardData to a YAML block for inclusion in a README.md file.
Args:
line_break (str, *optional*):
The line break to use when dumping to yaml.
original_order (`List[str]`, *optional*):
If provided, these keys are dumped first, in the given order, followed by any remaining keys.
Returns:
`str`: CardData represented as a YAML block.
"""
if original_order:
self.__dict__ = {
k: self.__dict__[k]
for k in original_order + list(set(self.__dict__.keys()) - set(original_order))
if k in self.__dict__
}
return yaml_dump(self.to_dict(), sort_keys=False, line_break=line_break).strip()
def __repr__(self):
return repr(self.__dict__)
def __str__(self):
return self.to_yaml()
def get(self, key: str, default: Any = None) -> Any:
"""Get value for a given metadata key."""
return self.__dict__.get(key, default)
def pop(self, key: str, default: Any = None) -> Any:
"""Pop value for a given metadata key."""
return self.__dict__.pop(key, default)
def __getitem__(self, key: str) -> Any:
"""Get value for a given metadata key."""
return self.__dict__[key]
def __setitem__(self, key: str, value: Any) -> None:
"""Set value for a given metadata key."""
self.__dict__[key] = value
def __contains__(self, key: str) -> bool:
"""Check if a given metadata key is set."""
return key in self.__dict__
def __len__(self) -> int:
"""Return the number of metadata keys set."""
return len(self.__dict__)
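A brief, hedged sketch of the dict-like behavior described above (get/pop/set without inheriting from `dict`), assuming `CardData` is importable from `huggingface_hub`:

```python
from huggingface_hub import CardData

data = CardData(license="mit", tags=["demo"])
assert data["license"] == "mit"   # __getitem__
data["language"] = "en"           # __setitem__
assert "tags" in data             # __contains__
assert len(data) == 3             # __len__
print(data.to_yaml())             # YAML block with `None` values filtered out
```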
class ModelCardData(CardData):
"""Model Card Metadata that is used by Hugging Face Hub when included at the top of your README.md
Args:
base_model (`str` or `List[str]`, *optional*):
The identifier of the base model from which the model derives. This is applicable for example if your model is a
fine-tune or adapter of an existing model. The value must be the ID of a model on the Hub (or a list of IDs
if your model derives from multiple models). Defaults to None.
datasets (`Union[str, List[str]]`, *optional*):
Dataset or list of datasets that were used to train this model. Should be a dataset ID
found on https://hf.co/datasets. Defaults to None.
eval_results (`Union[List[EvalResult], EvalResult]`, *optional*):
List of `huggingface_hub.EvalResult` that define evaluation results of the model. If provided,
`model_name` is used as the name on PapersWithCode's leaderboards. Defaults to `None`.
language (`Union[str, List[str]]`, *optional*):
Language of model's training data or metadata. It must be an ISO 639-1, 639-2 or
639-3 code (two/three letters), or a special value like "code", "multilingual". Defaults to `None`.
library_name (`str`, *optional*):
Name of library used by this model. Example: keras or any library from
https://github.com/huggingface/huggingface.js/blob/main/packages/tasks/src/model-libraries.ts.
Defaults to None.
license (`str`, *optional*):
License of this model. Example: apache-2.0 or any license from
https://huggingface.co/docs/hub/repositories-licenses. Defaults to None.
license_name (`str`, *optional*):
Name of the license of this model. Defaults to None. To be used in conjunction with `license_link`.
Common licenses (Apache-2.0, MIT, CC-BY-SA-4.0) do not need a name. In that case, use `license` instead.
license_link (`str`, *optional*):
Link to the license of this model. Defaults to None. To be used in conjunction with `license_name`.
Common licenses (Apache-2.0, MIT, CC-BY-SA-4.0) do not need a link. In that case, use `license` instead.
metrics (`List[str]`, *optional*):
List of metrics used to evaluate this model. Should be a metric name that can be found
at https://hf.co/metrics. Example: 'accuracy'. Defaults to None.
model_name (`str`, *optional*):
A name for this model. It is used along with
`eval_results` to construct the `model-index` within the card's metadata. The name
you supply here is what will be used on PapersWithCode's leaderboards. If None is provided
then the repo name is used as a default. Defaults to None.
pipeline_tag (`str`, *optional*):
The pipeline tag associated with the model. Example: "text-classification".
tags (`List[str]`, *optional*):
List of tags to add to your model that can be used when filtering on the Hugging
Face Hub. Defaults to None.
ignore_metadata_errors (`bool`, *optional*, defaults to `False`):
If True, errors while parsing the metadata section will be ignored. Some information might be lost during
the process. Use it at your own risk.
kwargs (`dict`, *optional*):
Additional metadata that will be added to the model card. Defaults to None.
Example:
```python
>>> from huggingface_hub import ModelCardData
>>> card_data = ModelCardData(
... language="en",
... license="mit",
... library_name="timm",
... tags=['image-classification', 'resnet'],
... )
>>> card_data.to_dict()
{'language': 'en', 'license': 'mit', 'library_name': 'timm', 'tags': ['image-classification', 'resnet']}
```
"""
def __init__(
self,
*,
base_model: Optional[Union[str, List[str]]] = None,
datasets: Optional[Union[str, List[str]]] = None,
eval_results: Optional[List[EvalResult]] = None,
language: Optional[Union[str, List[str]]] = None,
library_name: Optional[str] = None,
license: Optional[str] = None,
license_name: Optional[str] = None,
license_link: Optional[str] = None,
metrics: Optional[List[str]] = None,
model_name: Optional[str] = None,
pipeline_tag: Optional[str] = None,
tags: Optional[List[str]] = None,
ignore_metadata_errors: bool = False,
**kwargs,
):
self.base_model = base_model
self.datasets = datasets
self.eval_results = eval_results
self.language = language
self.library_name = library_name
self.license = license
self.license_name = license_name
self.license_link = license_link
self.metrics = metrics
self.model_name = model_name
self.pipeline_tag = pipeline_tag
self.tags = _to_unique_list(tags)
model_index = kwargs.pop("model-index", None)
if model_index:
try:
model_name, eval_results = model_index_to_eval_results(model_index)
self.model_name = model_name
self.eval_results = eval_results
except (KeyError, TypeError) as error:
if ignore_metadata_errors:
logger.warning("Invalid model-index. Not loading eval results into CardData.")
else:
raise ValueError(
f"Invalid `model_index` in metadata cannot be parsed: {error.__class__} {error}. Pass"
" `ignore_metadata_errors=True` to ignore this error while loading a Model Card. Warning:"
" some information will be lost. Use it at your own risk."
)
super().__init__(**kwargs)
if self.eval_results:
if isinstance(self.eval_results, EvalResult):
self.eval_results = [self.eval_results]
if self.model_name is None:
raise ValueError("Passing `eval_results` requires `model_name` to be set.")
def _to_dict(self, data_dict):
"""Format the internal data dict. In this case, we convert eval results to a valid model index"""
if self.eval_results is not None:
data_dict["model-index"] = eval_results_to_model_index(self.model_name, self.eval_results)
del data_dict["eval_results"], data_dict["model_name"]
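As a sketch of the `_to_dict` hook above: when `eval_results` and `model_name` are set, the exported dictionary carries a `model-index` entry instead of the raw attributes. Values here are illustrative only:

```python
from huggingface_hub import EvalResult, ModelCardData

card_data = ModelCardData(
    model_name="my-cool-model",  # required whenever eval_results is passed
    eval_results=[
        EvalResult(
            task_type="image-classification",
            dataset_type="beans",
            dataset_name="Beans",
            metric_type="accuracy",
            metric_value=0.9,
        )
    ],
)
exported = card_data.to_dict()
assert "model-index" in exported       # built by eval_results_to_model_index
assert "eval_results" not in exported  # removed in _to_dict
```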
class DatasetCardData(CardData):
"""Dataset Card Metadata that is used by Hugging Face Hub when included at the top of your README.md
Args:
language (`List[str]`, *optional*):
Language of dataset's data or metadata. It must be an ISO 639-1, 639-2 or
639-3 code (two/three letters), or a special value like "code", "multilingual".
license (`Union[str, List[str]]`, *optional*):
License(s) of this dataset. Example: apache-2.0 or any license from
https://huggingface.co/docs/hub/repositories-licenses.
annotations_creators (`Union[str, List[str]]`, *optional*):
How the annotations for the dataset were created.
Options are: 'found', 'crowdsourced', 'expert-generated', 'machine-generated', 'no-annotation', 'other'.
language_creators (`Union[str, List[str]]`, *optional*):
How the text-based data in the dataset was created.
Options are: 'found', 'crowdsourced', 'expert-generated', 'machine-generated', 'other'.
multilinguality (`Union[str, List[str]]`, *optional*):
Whether the dataset is multilingual.
Options are: 'monolingual', 'multilingual', 'translation', 'other'.
size_categories (`Union[str, List[str]]`, *optional*):
The number of examples in the dataset. Options are: 'n<1K', '1K<n<10K', '10K<n<100K',
'100K<n<1M', '1M<n<10M', '10M<n<100M', '100M<n<1B', '1B<n<10B', '10B<n<100B', '100B<n<1T', 'n>1T', and 'other'.
source_datasets (`List[str]`, *optional*):
Indicates whether the dataset is an original dataset or extended from another existing dataset.
Options are: 'original' and 'extended'.
task_categories (`Union[str, List[str]]`, *optional*):
What categories of task does the dataset support?
task_ids (`Union[str, List[str]]`, *optional*):
What specific tasks does the dataset support?
paperswithcode_id (`str`, *optional*):
ID of the dataset on PapersWithCode.
pretty_name (`str`, *optional*):
A more human-readable name for the dataset. (ex. "Cats vs. Dogs")
train_eval_index (`Dict`, *optional*):
A dictionary that describes the necessary spec for doing evaluation on the Hub.
If not provided, it will be gathered from the 'train-eval-index' key of the kwargs.
config_names (`Union[str, List[str]]`, *optional*):
A list of the available dataset configs for the dataset.
"""
def __init__(
self,
*,
language: Optional[Union[str, List[str]]] = None,
license: Optional[Union[str, List[str]]] = None,
annotations_creators: Optional[Union[str, List[str]]] = None,
language_creators: Optional[Union[str, List[str]]] = None,
multilinguality: Optional[Union[str, List[str]]] = None,
size_categories: Optional[Union[str, List[str]]] = None,
source_datasets: Optional[List[str]] = None,
task_categories: Optional[Union[str, List[str]]] = None,
task_ids: Optional[Union[str, List[str]]] = None,
paperswithcode_id: Optional[str] = None,
pretty_name: Optional[str] = None,
train_eval_index: Optional[Dict] = None,
config_names: Optional[Union[str, List[str]]] = None,
ignore_metadata_errors: bool = False,
**kwargs,
):
self.annotations_creators = annotations_creators
self.language_creators = language_creators
self.language = language
self.license = license
self.multilinguality = multilinguality
self.size_categories = size_categories
self.source_datasets = source_datasets
self.task_categories = task_categories
self.task_ids = task_ids
self.paperswithcode_id = paperswithcode_id
self.pretty_name = pretty_name
self.config_names = config_names
# TODO - maybe handle this similarly to EvalResult?
self.train_eval_index = train_eval_index or kwargs.pop("train-eval-index", None)
super().__init__(**kwargs)
def _to_dict(self, data_dict):
data_dict["train-eval-index"] = data_dict.pop("train_eval_index") | class_definition | 16,714 | 20,971 | 0 | /Users/nielsrogge/Documents/python_projecten/huggingface_hub/src/huggingface_hub/repocard_data.py | null | 3 |
class SpaceCardData(CardData):
"""Space Card Metadata that is used by Hugging Face Hub when included at the top of your README.md
To get an exhaustive reference of Spaces configuration, please visit https://huggingface.co/docs/hub/spaces-config-reference#spaces-configuration-reference.
Args:
title (`str`, *optional*):
Title of the Space.
sdk (`str`, *optional*):
SDK of the Space (one of `gradio`, `streamlit`, `docker`, or `static`).
sdk_version (`str`, *optional*):
Version of the SDK in use (for the Gradio/Streamlit SDKs).
python_version (`str`, *optional*):
Python version used in the Space (for the Gradio/Streamlit SDKs).
app_file (`str`, *optional*):
Path to your main application file (which contains either gradio or streamlit Python code, or static HTML code).
Path is relative to the root of the repository.
app_port (`int`, *optional*):
Port on which your application is running. Used only if sdk is `docker`.
license (`str`, *optional*):
License of this Space. Example: apache-2.0 or any license from
https://huggingface.co/docs/hub/repositories-licenses.
duplicated_from (`str`, *optional*):
ID of the original Space if this is a duplicated Space.
models (`List[str]`, *optional*):
List of models related to this Space. Should be a model ID found on https://hf.co/models.
datasets (`List[str]`, *optional*):
List of datasets related to this Space. Should be a dataset ID found on https://hf.co/datasets.
tags (`List[str]`, *optional*):
List of tags to add to your Space that can be used when filtering on the Hub.
ignore_metadata_errors (`bool`, *optional*, defaults to `False`):
If True, errors while parsing the metadata section will be ignored. Some information might be lost during
the process. Use it at your own risk.
kwargs (`dict`, *optional*):
Additional metadata that will be added to the space card.
Example:
```python
>>> from huggingface_hub import SpaceCardData
>>> card_data = SpaceCardData(
... title="Dreambooth Training",
... license="mit",
... sdk="gradio",
... duplicated_from="multimodalart/dreambooth-training"
... )
>>> card_data.to_dict()
{'title': 'Dreambooth Training', 'sdk': 'gradio', 'license': 'mit', 'duplicated_from': 'multimodalart/dreambooth-training'}
```
"""
def __init__(
self,
*,
title: Optional[str] = None,
sdk: Optional[str] = None,
sdk_version: Optional[str] = None,
python_version: Optional[str] = None,
app_file: Optional[str] = None,
app_port: Optional[int] = None,
license: Optional[str] = None,
duplicated_from: Optional[str] = None,
models: Optional[List[str]] = None,
datasets: Optional[List[str]] = None,
tags: Optional[List[str]] = None,
ignore_metadata_errors: bool = False,
**kwargs,
):
self.title = title
self.sdk = sdk
self.sdk_version = sdk_version
self.python_version = python_version
self.app_file = app_file
self.app_port = app_port
self.license = license
self.duplicated_from = duplicated_from
self.models = models
self.datasets = datasets
self.tags = _to_unique_list(tags)
super().__init__(**kwargs)
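Complementing the Gradio example in the docstring, a hedged sketch of a Docker Space configuration, where `app_port` becomes relevant:

```python
from huggingface_hub import SpaceCardData

card_data = SpaceCardData(
    title="My API",
    sdk="docker",
    app_port=7860,        # only used when sdk == "docker"
    license="apache-2.0",
)
print(card_data.to_yaml())
```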
class BaseModel: # type: ignore [no-redef]
def __init__(self, *args, **kwargs) -> None:
raise ImportError(
"You must have `pydantic` installed to use `WebhookPayload`. This is an optional dependency that"
" should be installed separately. Please run `pip install --upgrade pydantic` and retry."
)
class ObjectId(BaseModel):
id: str
class WebhookPayloadUrl(BaseModel):
web: str
api: Optional[str] = None
class WebhookPayloadMovedTo(BaseModel):
name: str
owner: ObjectId
class WebhookPayloadWebhook(ObjectId):
version: SupportedWebhookVersion
class WebhookPayloadEvent(BaseModel):
action: WebhookEvent_T
scope: str
class WebhookPayloadDiscussionChanges(BaseModel):
base: str
mergeCommitId: Optional[str] = None
class WebhookPayloadComment(ObjectId):
author: ObjectId
hidden: bool
content: Optional[str] = None
url: WebhookPayloadUrl
class WebhookPayloadDiscussion(ObjectId):
num: int
author: ObjectId
url: WebhookPayloadUrl
title: str
isPullRequest: bool
status: DiscussionStatus_T
changes: Optional[WebhookPayloadDiscussionChanges] = None
pinned: Optional[bool] = None
class WebhookPayloadRepo(ObjectId):
owner: ObjectId
head_sha: Optional[str] = None
name: str
private: bool
subdomain: Optional[str] = None
tags: Optional[List[str]] = None
type: Literal["dataset", "model", "space"]
url: WebhookPayloadUrl
class WebhookPayloadUpdatedRef(BaseModel):
ref: str
oldSha: Optional[str] = None
newSha: Optional[str] = None
class WebhookPayload(BaseModel):
event: WebhookPayloadEvent
repo: WebhookPayloadRepo
discussion: Optional[WebhookPayloadDiscussion] = None
comment: Optional[WebhookPayloadComment] = None
webhook: WebhookPayloadWebhook
movedTo: Optional[WebhookPayloadMovedTo] = None
updatedRefs: Optional[List[WebhookPayloadUpdatedRef]] = None
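A hedged sketch (assuming `pydantic` is installed, since the models above inherit from its `BaseModel`) of parsing a raw webhook payload; nested dictionaries are coerced into the corresponding sub-models. The identifiers below are made-up example values:

```python
from huggingface_hub import WebhookPayload

raw = {
    "event": {"action": "create", "scope": "repo"},
    "repo": {
        "id": "6366c000a2abcdf2fd69a080",             # made-up ObjectId
        "name": "user/my-model",
        "owner": {"id": "61d2f90c3c2083e1c08af22d"},  # made-up ObjectId
        "private": False,
        "type": "model",
        "url": {"web": "https://huggingface.co/user/my-model"},
    },
    "webhook": {"id": "6390e855e30d9209411de93b", "version": 3},
}
payload = WebhookPayload(**raw)
assert payload.repo.type == "model"
assert payload.discussion is None  # optional fields default to None
```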
@dataclass
class MixinInfo:
model_card_template: str
model_card_data: ModelCardData
repo_url: Optional[str] = None
docs_url: Optional[str] = None
class ModelHubMixin:
"""
A generic mixin to integrate ANY machine learning framework with the Hub.
To integrate your framework, your model class must inherit from this class. Custom logic for saving/loading models
has to be implemented by overriding [`_from_pretrained`] and [`_save_pretrained`]. [`PyTorchModelHubMixin`] is a good example
of mixin integration with the Hub. Check out our [integration guide](../guides/integrations) for more instructions.
When inheriting from [`ModelHubMixin`], you can define class-level attributes. These attributes are not passed to
`__init__` but to the class definition itself. This is useful to define metadata about the library integrating
[`ModelHubMixin`].
For more details on how to integrate the mixin with your library, checkout the [integration guide](../guides/integrations).
Args:
repo_url (`str`, *optional*):
URL of the library repository. Used to generate model card.
docs_url (`str`, *optional*):
URL of the library documentation. Used to generate model card.
model_card_template (`str`, *optional*):
Template of the model card. Used to generate model card. Defaults to a generic template.
language (`str` or `List[str]`, *optional*):
Language supported by the library. Used to generate model card.
library_name (`str`, *optional*):
Name of the library integrating ModelHubMixin. Used to generate model card.
license (`str`, *optional*):
License of the library integrating ModelHubMixin. Used to generate model card.
E.g: "apache-2.0"
license_name (`str`, *optional*):
Name of the license of the library integrating ModelHubMixin. Used to generate model card.
Only used if `license` is set to `other`.
E.g: "coqui-public-model-license".
license_link (`str`, *optional*):
URL to the license of the library integrating ModelHubMixin. Used to generate model card.
Only used if `license` is set to `other` and `license_name` is set.
E.g: "https://coqui.ai/cpml".
pipeline_tag (`str`, *optional*):
Tag of the pipeline. Used to generate model card. E.g. "text-classification".
tags (`List[str]`, *optional*):
Tags to be added to the model card. Used to generate model card. E.g. ["x-custom-tag", "arxiv:2304.12244"]
coders (`Dict[Type, Tuple[Callable, Callable]]`, *optional*):
Dictionary of custom types and their encoders/decoders. Used to encode/decode arguments that are not
jsonable by default. E.g dataclasses, argparse.Namespace, OmegaConf, etc.
Example:
```python
>>> from huggingface_hub import ModelHubMixin
# Inherit from ModelHubMixin
>>> class MyCustomModel(
... ModelHubMixin,
... library_name="my-library",
... tags=["x-custom-tag", "arxiv:2304.12244"],
... repo_url="https://github.com/huggingface/my-cool-library",
... docs_url="https://huggingface.co/docs/my-cool-library",
... # ^ optional metadata to generate model card
... ):
... def __init__(self, size: int = 512, device: str = "cpu"):
... # define how to initialize your model
... super().__init__()
... ...
...
... def _save_pretrained(self, save_directory: Path) -> None:
... # define how to serialize your model
... ...
...
... @classmethod
... def from_pretrained(
... cls: Type[T],
... pretrained_model_name_or_path: Union[str, Path],
... *,
... force_download: bool = False,
... resume_download: Optional[bool] = None,
... proxies: Optional[Dict] = None,
... token: Optional[Union[str, bool]] = None,
... cache_dir: Optional[Union[str, Path]] = None,
... local_files_only: bool = False,
... revision: Optional[str] = None,
... **model_kwargs,
... ) -> T:
... # define how to deserialize your model
... ...
>>> model = MyCustomModel(size=256, device="gpu")
# Save model weights to local directory
>>> model.save_pretrained("my-awesome-model")
# Push model weights to the Hub
>>> model.push_to_hub("my-awesome-model")
# Download and initialize weights from the Hub
>>> reloaded_model = MyCustomModel.from_pretrained("username/my-awesome-model")
>>> reloaded_model.size
256
# Model card has been correctly populated
>>> from huggingface_hub import ModelCard
>>> card = ModelCard.load("username/my-awesome-model")
>>> card.data.tags
["x-custom-tag", "pytorch_model_hub_mixin", "model_hub_mixin"]
>>> card.data.library_name
"my-library"
```
"""
_hub_mixin_config: Optional[Union[dict, "DataclassInstance"]] = None
# ^ optional config attribute automatically set in `from_pretrained`
_hub_mixin_info: MixinInfo
# ^ information about the library integrating ModelHubMixin (used to generate model card)
_hub_mixin_inject_config: bool # whether `_from_pretrained` expects `config` or not
_hub_mixin_init_parameters: Dict[str, inspect.Parameter] # __init__ parameters
_hub_mixin_jsonable_default_values: Dict[str, Any] # default values for __init__ parameters
_hub_mixin_jsonable_custom_types: Tuple[Type, ...] # custom types that can be encoded/decoded
_hub_mixin_coders: Dict[Type, CODER_T] # encoders/decoders for custom types
# ^ internal values to handle config
def __init_subclass__(
cls,
*,
# Generic info for model card
repo_url: Optional[str] = None,
docs_url: Optional[str] = None,
# Model card template
model_card_template: str = DEFAULT_MODEL_CARD,
# Model card metadata
language: Optional[List[str]] = None,
library_name: Optional[str] = None,
license: Optional[str] = None,
license_name: Optional[str] = None,
license_link: Optional[str] = None,
pipeline_tag: Optional[str] = None,
tags: Optional[List[str]] = None,
# How to encode/decode arguments with custom type into a JSON config?
coders: Optional[
Dict[Type, CODER_T]
# Key is a type.
# Value is a tuple (encoder, decoder).
# Example: {MyCustomType: (lambda x: x.value, lambda data: MyCustomType(data))}
] = None,
) -> None:
"""Inspect __init__ signature only once when subclassing + handle modelcard."""
super().__init_subclass__()
# Will be reused when creating modelcard
tags = tags or []
tags.append("model_hub_mixin")
# Initialize MixinInfo if not existent
info = MixinInfo(model_card_template=model_card_template, model_card_data=ModelCardData())
# If parent class has a MixinInfo, inherit from it as a copy
if hasattr(cls, "_hub_mixin_info"):
# Inherit model card template from parent class if not explicitly set
if model_card_template == DEFAULT_MODEL_CARD:
info.model_card_template = cls._hub_mixin_info.model_card_template
# Inherit from parent model card data
info.model_card_data = ModelCardData(**cls._hub_mixin_info.model_card_data.to_dict())
# Inherit other info
info.docs_url = cls._hub_mixin_info.docs_url
info.repo_url = cls._hub_mixin_info.repo_url
cls._hub_mixin_info = info
# Update MixinInfo with metadata
if model_card_template is not None and model_card_template != DEFAULT_MODEL_CARD:
info.model_card_template = model_card_template
if repo_url is not None:
info.repo_url = repo_url
if docs_url is not None:
info.docs_url = docs_url
if language is not None:
info.model_card_data.language = language
if library_name is not None:
info.model_card_data.library_name = library_name
if license is not None:
info.model_card_data.license = license
if license_name is not None:
info.model_card_data.license_name = license_name
if license_link is not None:
info.model_card_data.license_link = license_link
if pipeline_tag is not None:
info.model_card_data.pipeline_tag = pipeline_tag
if tags is not None:
if info.model_card_data.tags is not None:
info.model_card_data.tags.extend(tags)
else:
info.model_card_data.tags = tags
info.model_card_data.tags = sorted(set(info.model_card_data.tags))
# Handle encoders/decoders for args
cls._hub_mixin_coders = coders or {}
cls._hub_mixin_jsonable_custom_types = tuple(cls._hub_mixin_coders.keys())
# Inspect __init__ signature to handle config
cls._hub_mixin_init_parameters = dict(inspect.signature(cls.__init__).parameters)
cls._hub_mixin_jsonable_default_values = {
param.name: cls._encode_arg(param.default)
for param in cls._hub_mixin_init_parameters.values()
if param.default is not inspect.Parameter.empty and cls._is_jsonable(param.default)
}
cls._hub_mixin_inject_config = "config" in inspect.signature(cls._from_pretrained).parameters
def __new__(cls: Type[T], *args, **kwargs) -> T:
"""Create a new instance of the class and handle config.
3 cases:
- If `self._hub_mixin_config` is already set, do nothing.
- If `config` is passed as a dataclass, set it as `self._hub_mixin_config`.
- Otherwise, build `self._hub_mixin_config` from default values and passed values.
"""
instance = super().__new__(cls)
# If `config` is already set, return early
if instance._hub_mixin_config is not None:
return instance
# Infer passed values
passed_values = {
**{
key: value
for key, value in zip(
# [1:] to skip `self` parameter
list(cls._hub_mixin_init_parameters)[1:],
args,
)
},
**kwargs,
}
# If config passed as dataclass => set it and return early
if is_dataclass(passed_values.get("config")):
instance._hub_mixin_config = passed_values["config"]
return instance
# Otherwise, build config from default + passed values
init_config = {
# default values
**cls._hub_mixin_jsonable_default_values,
# passed values
**{
key: cls._encode_arg(value) # Encode custom types as jsonable value
for key, value in passed_values.items()
if instance._is_jsonable(value) # Only if jsonable or we have a custom encoder
},
}
passed_config = init_config.pop("config", {})
# Populate `init_config` with provided config
if isinstance(passed_config, dict):
init_config.update(passed_config)
# Set `config` attribute and return
if init_config != {}:
instance._hub_mixin_config = init_config
return instance
@classmethod
def _is_jsonable(cls, value: Any) -> bool:
"""Check if a value is JSON serializable."""
if isinstance(value, cls._hub_mixin_jsonable_custom_types):
return True
return is_jsonable(value)
@classmethod
def _encode_arg(cls, arg: Any) -> Any:
"""Encode an argument into a JSON serializable format."""
for type_, (encoder, _) in cls._hub_mixin_coders.items():
if isinstance(arg, type_):
if arg is None:
return None
return encoder(arg)
return arg
@classmethod
def _decode_arg(cls, expected_type: Type[ARGS_T], value: Any) -> Optional[ARGS_T]:
"""Decode a JSON serializable value into an argument."""
if is_simple_optional_type(expected_type):
if value is None:
return None
expected_type = unwrap_simple_optional_type(expected_type)
# Dataclass => handle it
if is_dataclass(expected_type):
return _load_dataclass(expected_type, value) # type: ignore[return-value]
# Otherwise => check custom decoders
for type_, (_, decoder) in cls._hub_mixin_coders.items():
if inspect.isclass(expected_type) and issubclass(expected_type, type_):
return decoder(value)
# Otherwise => don't decode
return value
def save_pretrained(
self,
save_directory: Union[str, Path],
*,
config: Optional[Union[dict, "DataclassInstance"]] = None,
repo_id: Optional[str] = None,
push_to_hub: bool = False,
model_card_kwargs: Optional[Dict[str, Any]] = None,
**push_to_hub_kwargs,
) -> Optional[str]:
"""
Save weights in local directory.
Args:
save_directory (`str` or `Path`):
Path to directory in which the model weights and configuration will be saved.
config (`dict` or `DataclassInstance`, *optional*):
Model configuration specified as a key/value dictionary or a dataclass instance.
push_to_hub (`bool`, *optional*, defaults to `False`):
Whether or not to push your model to the Huggingface Hub after saving it.
repo_id (`str`, *optional*):
ID of your repository on the Hub. Used only if `push_to_hub=True`. Will default to the folder name if
not provided.
model_card_kwargs (`Dict[str, Any]`, *optional*):
Additional arguments passed to the model card template to customize the model card.
push_to_hub_kwargs:
Additional key word arguments passed along to the [`~ModelHubMixin.push_to_hub`] method.
Returns:
`str` or `None`: url of the commit on the Hub if `push_to_hub=True`, `None` otherwise.
"""
save_directory = Path(save_directory)
save_directory.mkdir(parents=True, exist_ok=True)
# Remove config.json if already exists. After `_save_pretrained` we don't want to overwrite config.json
# as it might have been saved by the custom `_save_pretrained` already. However we do want to overwrite
# an existing config.json if it was not saved by `_save_pretrained`.
config_path = save_directory / constants.CONFIG_NAME
config_path.unlink(missing_ok=True)
# save model weights/files (framework-specific)
self._save_pretrained(save_directory)
# save config (if provided and if not serialized yet in `_save_pretrained`)
if config is None:
config = self._hub_mixin_config
if config is not None:
if is_dataclass(config):
config = asdict(config) # type: ignore[arg-type]
if not config_path.exists():
config_str = json.dumps(config, sort_keys=True, indent=2)
config_path.write_text(config_str)
# save model card
model_card_path = save_directory / "README.md"
model_card_kwargs = model_card_kwargs if model_card_kwargs is not None else {}
if not model_card_path.exists(): # do not overwrite if already exists
self.generate_model_card(**model_card_kwargs).save(save_directory / "README.md")
# push to the Hub if required
if push_to_hub:
kwargs = push_to_hub_kwargs.copy() # soft-copy to avoid mutating input
if config is not None: # kwarg for `push_to_hub`
kwargs["config"] = config
if repo_id is None:
repo_id = save_directory.name # Defaults to `save_directory` name
return self.push_to_hub(repo_id=repo_id, model_card_kwargs=model_card_kwargs, **kwargs)
return None
def _save_pretrained(self, save_directory: Path) -> None:
"""
Overwrite this method in subclass to define how to save your model.
Check out our [integration guide](../guides/integrations) for instructions.
Args:
save_directory (`str` or `Path`):
Path to directory in which the model weights and configuration will be saved.
"""
raise NotImplementedError
@classmethod
@validate_hf_hub_args
def from_pretrained(
cls: Type[T],
pretrained_model_name_or_path: Union[str, Path],
*,
force_download: bool = False,
resume_download: Optional[bool] = None,
proxies: Optional[Dict] = None,
token: Optional[Union[str, bool]] = None,
cache_dir: Optional[Union[str, Path]] = None,
local_files_only: bool = False,
revision: Optional[str] = None,
**model_kwargs,
) -> T:
"""
Download a model from the Huggingface Hub and instantiate it.
Args:
pretrained_model_name_or_path (`str`, `Path`):
- Either the `model_id` (string) of a model hosted on the Hub, e.g. `bigscience/bloom`.
- Or a path to a `directory` containing model weights saved using
[`~transformers.PreTrainedModel.save_pretrained`], e.g., `../path/to/my_model_directory/`.
revision (`str`, *optional*):
Revision of the model on the Hub. Can be a branch name, a git tag or any commit id.
Defaults to the latest commit on `main` branch.
force_download (`bool`, *optional*, defaults to `False`):
Whether to force (re-)downloading the model weights and configuration files from the Hub, overriding
the existing cache.
proxies (`Dict[str, str]`, *optional*):
A dictionary of proxy servers to use by protocol or endpoint, e.g., `{'http': 'foo.bar:3128',
'http://hostname': 'foo.bar:4012'}`. The proxies are used on every request.
token (`str` or `bool`, *optional*):
The token to use as HTTP bearer authorization for remote files. By default, it will use the token
cached when running `huggingface-cli login`.
cache_dir (`str`, `Path`, *optional*):
Path to the folder where cached files are stored.
local_files_only (`bool`, *optional*, defaults to `False`):
If `True`, avoid downloading the file and return the path to the local cached file if it exists.
model_kwargs (`Dict`, *optional*):
Additional kwargs to pass to the model during initialization.
"""
model_id = str(pretrained_model_name_or_path)
config_file: Optional[str] = None
if os.path.isdir(model_id):
if constants.CONFIG_NAME in os.listdir(model_id):
config_file = os.path.join(model_id, constants.CONFIG_NAME)
else:
logger.warning(f"{constants.CONFIG_NAME} not found in {Path(model_id).resolve()}")
else:
try:
config_file = hf_hub_download(
repo_id=model_id,
filename=constants.CONFIG_NAME,
revision=revision,
cache_dir=cache_dir,
force_download=force_download,
proxies=proxies,
resume_download=resume_download,
token=token,
local_files_only=local_files_only,
)
except HfHubHTTPError as e:
logger.info(f"{constants.CONFIG_NAME} not found on the HuggingFace Hub: {str(e)}")
# Read config
config = None
if config_file is not None:
with open(config_file, "r", encoding="utf-8") as f:
config = json.load(f)
# Decode custom types in config
for key, value in config.items():
if key in cls._hub_mixin_init_parameters:
expected_type = cls._hub_mixin_init_parameters[key].annotation
if expected_type is not inspect.Parameter.empty:
config[key] = cls._decode_arg(expected_type, value)
# Populate model_kwargs from config
for param in cls._hub_mixin_init_parameters.values():
if param.name not in model_kwargs and param.name in config:
model_kwargs[param.name] = config[param.name]
# Check if `config` argument was passed at init
if "config" in cls._hub_mixin_init_parameters and "config" not in model_kwargs:
# Decode `config` argument if it was passed
config_annotation = cls._hub_mixin_init_parameters["config"].annotation
config = cls._decode_arg(config_annotation, config)
# Forward config to model initialization
model_kwargs["config"] = config
# Inject config if `**kwargs` are expected
if is_dataclass(cls):
for key in cls.__dataclass_fields__:
if key not in model_kwargs and key in config:
model_kwargs[key] = config[key]
elif any(param.kind == inspect.Parameter.VAR_KEYWORD for param in cls._hub_mixin_init_parameters.values()):
for key, value in config.items():
if key not in model_kwargs:
model_kwargs[key] = value
# Finally, also inject if `_from_pretrained` expects it
if cls._hub_mixin_inject_config and "config" not in model_kwargs:
model_kwargs["config"] = config
instance = cls._from_pretrained(
model_id=str(model_id),
revision=revision,
cache_dir=cache_dir,
force_download=force_download,
proxies=proxies,
resume_download=resume_download,
local_files_only=local_files_only,
token=token,
**model_kwargs,
)
# Implicitly set the config as instance attribute if not already set by the class
# This way `config` will be available when calling `save_pretrained` or `push_to_hub`.
if config is not None and (getattr(instance, "_hub_mixin_config", None) in (None, {})):
instance._hub_mixin_config = config
return instance
@classmethod
def _from_pretrained(
cls: Type[T],
*,
model_id: str,
revision: Optional[str],
cache_dir: Optional[Union[str, Path]],
force_download: bool,
proxies: Optional[Dict],
resume_download: Optional[bool],
local_files_only: bool,
token: Optional[Union[str, bool]],
**model_kwargs,
) -> T:
"""Overwrite this method in subclass to define how to load your model from pretrained.
Use [`hf_hub_download`] or [`snapshot_download`] to download files from the Hub before loading them. Most
args taken as input can be directly passed to those 2 methods. If needed, you can add more arguments to this
method using "model_kwargs". For example [`PyTorchModelHubMixin._from_pretrained`] takes as input a `map_location`
parameter to set on which device the model should be loaded.
Check out our [integration guide](../guides/integrations) for more instructions.
Args:
model_id (`str`):
ID of the model to load from the Huggingface Hub (e.g. `bigscience/bloom`).
revision (`str`, *optional*):
Revision of the model on the Hub. Can be a branch name, a git tag or any commit id. Defaults to the
latest commit on `main` branch.
force_download (`bool`, *optional*, defaults to `False`):
Whether to force (re-)downloading the model weights and configuration files from the Hub, overriding
the existing cache.
proxies (`Dict[str, str]`, *optional*):
A dictionary of proxy servers to use by protocol or endpoint (e.g., `{'http': 'foo.bar:3128',
'http://hostname': 'foo.bar:4012'}`).
token (`str` or `bool`, *optional*):
The token to use as HTTP bearer authorization for remote files. By default, it will use the token
cached when running `huggingface-cli login`.
cache_dir (`str`, `Path`, *optional*):
Path to the folder where cached files are stored.
local_files_only (`bool`, *optional*, defaults to `False`):
If `True`, avoid downloading the file and return the path to the local cached file if it exists.
model_kwargs:
Additional keyword arguments passed along to the [`~ModelHubMixin._from_pretrained`] method.
"""
raise NotImplementedError
@validate_hf_hub_args
def push_to_hub(
self,
repo_id: str,
*,
config: Optional[Union[dict, "DataclassInstance"]] = None,
commit_message: str = "Push model using huggingface_hub.",
private: Optional[bool] = None,
token: Optional[str] = None,
branch: Optional[str] = None,
create_pr: Optional[bool] = None,
allow_patterns: Optional[Union[List[str], str]] = None,
ignore_patterns: Optional[Union[List[str], str]] = None,
delete_patterns: Optional[Union[List[str], str]] = None,
model_card_kwargs: Optional[Dict[str, Any]] = None,
) -> str:
"""
Upload model checkpoint to the Hub.
Use `allow_patterns` and `ignore_patterns` to precisely filter which files should be pushed to the hub. Use
`delete_patterns` to delete existing remote files in the same commit. See [`upload_folder`] reference for more
details.
Args:
repo_id (`str`):
ID of the repository to push to (example: `"username/my-model"`).
config (`dict` or `DataclassInstance`, *optional*):
Model configuration specified as a key/value dictionary or a dataclass instance.
commit_message (`str`, *optional*):
Message to commit while pushing.
private (`bool`, *optional*):
Whether the repository created should be private.
If `None` (default), the repo will be public unless the organization's default is private.
token (`str`, *optional*):
The token to use as HTTP bearer authorization for remote files. By default, it will use the token
cached when running `huggingface-cli login`.
branch (`str`, *optional*):
The git branch on which to push the model. This defaults to `"main"`.
create_pr (`boolean`, *optional*):
Whether or not to create a Pull Request from `branch` with that commit. Defaults to `False`.
allow_patterns (`List[str]` or `str`, *optional*):
If provided, only files matching at least one pattern are pushed.
ignore_patterns (`List[str]` or `str`, *optional*):
If provided, files matching any of the patterns are not pushed.
delete_patterns (`List[str]` or `str`, *optional*):
If provided, remote files matching any of the patterns will be deleted from the repo.
model_card_kwargs (`Dict[str, Any]`, *optional*):
Additional arguments passed to the model card template to customize the model card.
Returns:
The url of the commit of your model in the given repository.
"""
api = HfApi(token=token)
repo_id = api.create_repo(repo_id=repo_id, private=private, exist_ok=True).repo_id
# Push the files to the repo in a single commit
with SoftTemporaryDirectory() as tmp:
saved_path = Path(tmp) / repo_id
self.save_pretrained(saved_path, config=config, model_card_kwargs=model_card_kwargs)
return api.upload_folder(
repo_id=repo_id,
repo_type="model",
folder_path=saved_path,
commit_message=commit_message,
revision=branch,
create_pr=create_pr,
allow_patterns=allow_patterns,
ignore_patterns=ignore_patterns,
delete_patterns=delete_patterns,
)
def generate_model_card(self, *args, **kwargs) -> ModelCard:
card = ModelCard.from_template(
card_data=self._hub_mixin_info.model_card_data,
template_str=self._hub_mixin_info.model_card_template,
repo_url=self._hub_mixin_info.repo_url,
docs_url=self._hub_mixin_info.docs_url,
**kwargs,
)
return card
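To illustrate the `coders` class argument described above, a hedged sketch using `argparse.Namespace` (one of the examples named in the docstring) as a custom type that is not JSON-serializable by default:

```python
from argparse import Namespace
from pathlib import Path

from huggingface_hub import ModelHubMixin

class MyModel(
    ModelHubMixin,
    coders={
        Namespace: (
            lambda x: vars(x),               # encoder: Namespace -> dict
            lambda data: Namespace(**data),  # decoder: dict -> Namespace
        )
    },
):
    def __init__(self, opts: Namespace = Namespace(lr=1e-3)):
        self.opts = opts

    def _save_pretrained(self, save_directory: Path) -> None:
        ...  # framework-specific serialization goes here

# `opts` is encoded into config.json on save and decoded back on load,
# because its annotated type (`Namespace`) is registered in `coders`.
```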
class PyTorchModelHubMixin(ModelHubMixin):
"""
Implementation of [`ModelHubMixin`] to provide model Hub upload/download capabilities to PyTorch models. The model
is set in evaluation mode by default using `model.eval()` (dropout modules are deactivated). To train the model,
you should first set it back in training mode with `model.train()`.
See [`ModelHubMixin`] for more details on how to use the mixin.
Example:
```python
>>> import torch
>>> import torch.nn as nn
>>> from huggingface_hub import PyTorchModelHubMixin
>>> class MyModel(
... nn.Module,
... PyTorchModelHubMixin,
... library_name="keras-nlp",
... repo_url="https://github.com/keras-team/keras-nlp",
... docs_url="https://keras.io/keras_nlp/",
... # ^ optional metadata to generate model card
... ):
... def __init__(self, hidden_size: int = 512, vocab_size: int = 30000, output_size: int = 4):
... super().__init__()
... self.param = nn.Parameter(torch.rand(hidden_size, vocab_size))
... self.linear = nn.Linear(output_size, vocab_size)
... def forward(self, x):
... return self.linear(x + self.param)
>>> model = MyModel(hidden_size=256)
# Save model weights to local directory
>>> model.save_pretrained("my-awesome-model")
# Push model weights to the Hub
>>> model.push_to_hub("my-awesome-model")
# Download and initialize weights from the Hub
>>> model = MyModel.from_pretrained("username/my-awesome-model")
>>> model.hidden_size
256
```
"""
def __init_subclass__(cls, *args, tags: Optional[List[str]] = None, **kwargs) -> None:
tags = tags or []
tags.append("pytorch_model_hub_mixin")
kwargs["tags"] = tags
return super().__init_subclass__(*args, **kwargs)
def _save_pretrained(self, save_directory: Path) -> None:
"""Save weights from a Pytorch model to a local directory."""
model_to_save = self.module if hasattr(self, "module") else self # type: ignore
save_model_as_safetensor(model_to_save, str(save_directory / constants.SAFETENSORS_SINGLE_FILE))
@classmethod
def _from_pretrained(
cls,
*,
model_id: str,
revision: Optional[str],
cache_dir: Optional[Union[str, Path]],
force_download: bool,
proxies: Optional[Dict],
resume_download: Optional[bool],
local_files_only: bool,
token: Union[str, bool, None],
map_location: str = "cpu",
strict: bool = False,
**model_kwargs,
):
"""Load Pytorch pretrained weights and return the loaded model."""
model = cls(**model_kwargs)
if os.path.isdir(model_id):
print("Loading weights from local directory")
model_file = os.path.join(model_id, constants.SAFETENSORS_SINGLE_FILE)
return cls._load_as_safetensor(model, model_file, map_location, strict)
else:
try:
model_file = hf_hub_download(
repo_id=model_id,
filename=constants.SAFETENSORS_SINGLE_FILE,
revision=revision,
cache_dir=cache_dir,
force_download=force_download,
proxies=proxies,
resume_download=resume_download,
token=token,
local_files_only=local_files_only,
)
return cls._load_as_safetensor(model, model_file, map_location, strict)
except EntryNotFoundError:
model_file = hf_hub_download(
repo_id=model_id,
filename=constants.PYTORCH_WEIGHTS_NAME,
revision=revision,
cache_dir=cache_dir,
force_download=force_download,
proxies=proxies,
resume_download=resume_download,
token=token,
local_files_only=local_files_only,
)
return cls._load_as_pickle(model, model_file, map_location, strict)
@classmethod
def _load_as_pickle(cls, model: T, model_file: str, map_location: str, strict: bool) -> T:
state_dict = torch.load(model_file, map_location=torch.device(map_location), weights_only=True)
model.load_state_dict(state_dict, strict=strict) # type: ignore
model.eval() # type: ignore
return model
@classmethod
def _load_as_safetensor(cls, model: T, model_file: str, map_location: str, strict: bool) -> T:
if packaging.version.parse(safetensors.__version__) < packaging.version.parse("0.4.3"): # type: ignore [attr-defined]
load_model_as_safetensor(model, model_file, strict=strict) # type: ignore [arg-type]
if map_location != "cpu":
logger.warning(
"Loading model weights on other devices than 'cpu' is not supported natively in your version of safetensors."
" This means that the model is loaded on 'cpu' first and then copied to the device."
" This leads to a slower loading time."
" Please update safetensors to version 0.4.3 or above for improved performance."
)
model.to(map_location) # type: ignore [attr-defined]
else:
safetensors.torch.load_model(model, model_file, strict=strict, device=map_location) # type: ignore [arg-type]
return model
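As a usage note: `map_location` and `strict` are the extra keyword arguments consumed by `_from_pretrained` above, so they can be forwarded through `from_pretrained`. `MyModel` and the repo id are the hypothetical ones from the docstring example:

```python
# Load weights directly onto GPU and enforce a strict state_dict match.
model = MyModel.from_pretrained(
    "username/my-awesome-model",
    map_location="cuda",
    strict=True,
)
```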
class WebhooksServer:
"""
The [`WebhooksServer`] class lets you create an instance of a Gradio app that can receive Huggingface webhooks.
These webhooks can be registered using the [`~WebhooksServer.add_webhook`] decorator. Webhook endpoints are added to
the app as a POST endpoint to the FastAPI router. Once all the webhooks are registered, the `launch` method has to be
called to start the app.
It is recommended to accept [`WebhookPayload`] as the first argument of the webhook function. It is a Pydantic
model that contains all the information about the webhook event. The data will be parsed automatically for you.
Check out the [webhooks guide](../guides/webhooks_server) for a step-by-step tutorial on how to setup your
WebhooksServer and deploy it on a Space.
<Tip warning={true}>
`WebhooksServer` is experimental. Its API is subject to change in the future.
</Tip>
<Tip warning={true}>
You must have `gradio` installed to use `WebhooksServer` (`pip install --upgrade gradio`).
</Tip>
Args:
ui (`gradio.Blocks`, optional):
A Gradio UI instance to be used as the Space landing page. If `None`, a UI displaying instructions
about the configured webhooks is created.
webhook_secret (`str`, optional):
A secret key to verify incoming webhook requests. You can set this value to any secret you want as long as
you also configure it in your [webhooks settings panel](https://huggingface.co/settings/webhooks). You
can also set this value as the `WEBHOOK_SECRET` environment variable. If no secret is provided, the
webhook endpoints are opened without any security.
Example:
```python
import gradio as gr
from huggingface_hub import WebhooksServer, WebhookPayload
with gr.Blocks() as ui:
...
app = WebhooksServer(ui=ui, webhook_secret="my_secret_key")
@app.add_webhook("/say_hello")
async def hello(payload: WebhookPayload):
return {"message": "hello"}
app.launch()
```
"""
def __new__(cls, *args, **kwargs) -> "WebhooksServer":
if not is_gradio_available():
raise ImportError(
"You must have `gradio` installed to use `WebhooksServer`. Please run `pip install --upgrade gradio`"
" first."
)
if not is_fastapi_available():
raise ImportError(
"You must have `fastapi` installed to use `WebhooksServer`. Please run `pip install --upgrade fastapi`"
" first."
)
return super().__new__(cls)
def __init__(
self,
ui: Optional["gr.Blocks"] = None,
webhook_secret: Optional[str] = None,
) -> None:
self._ui = ui
self.webhook_secret = webhook_secret or os.getenv("WEBHOOK_SECRET")
self.registered_webhooks: Dict[str, Callable] = {}
_warn_on_empty_secret(self.webhook_secret)
def add_webhook(self, path: Optional[str] = None) -> Callable:
"""
Decorator to add a webhook to the [`WebhooksServer`] server.
Args:
path (`str`, optional):
The URL path to register the webhook function. If not provided, the function name will be used as the
path. In any case, all webhooks are registered under `/webhooks`.
Raises:
ValueError: If the provided path is already registered as a webhook.
Example:
```python
from huggingface_hub import WebhooksServer, WebhookPayload
app = WebhooksServer()
@app.add_webhook
async def trigger_training(payload: WebhookPayload):
if payload.repo.type == "dataset" and payload.event.action == "update":
# Trigger a training job if a dataset is updated
...
app.launch()
```
"""
# Usage: directly as decorator. Example: `@app.add_webhook`
if callable(path):
# If path is a function, it means it was used as a decorator without arguments
return self.add_webhook()(path)
# Usage: provide a path. Example: `@app.add_webhook(...)`
@wraps(FastAPI.post)
def _inner_post(*args, **kwargs):
func = args[0]
abs_path = f"/webhooks/{(path or func.__name__).strip('/')}"
if abs_path in self.registered_webhooks:
raise ValueError(f"Webhook {abs_path} already exists.")
self.registered_webhooks[abs_path] = func
return _inner_post
def launch(self, prevent_thread_lock: bool = False, **launch_kwargs: Any) -> None:
"""Launch the Gradio app and register webhooks to the underlying FastAPI server.
Input parameters are forwarded to Gradio when launching the app.
"""
ui = self._ui or self._get_default_ui()
# Start Gradio App
# - as non-blocking so that webhooks can be added afterwards
        # - as shared if launched locally (to debug webhooks)
launch_kwargs.setdefault("share", _is_local)
self.fastapi_app, _, _ = ui.launch(prevent_thread_lock=True, **launch_kwargs)
# Register webhooks to FastAPI app
for path, func in self.registered_webhooks.items():
# Add secret check if required
if self.webhook_secret is not None:
func = _wrap_webhook_to_check_secret(func, webhook_secret=self.webhook_secret)
# Add route to FastAPI app
self.fastapi_app.post(path)(func)
# Print instructions and block main thread
space_host = os.environ.get("SPACE_HOST")
url = "https://" + space_host if space_host is not None else (ui.share_url or ui.local_url)
url = url.strip("/")
message = "\nWebhooks are correctly setup and ready to use:"
message += "\n" + "\n".join(f" - POST {url}{webhook}" for webhook in self.registered_webhooks)
message += "\nGo to https://huggingface.co/settings/webhooks to setup your webhooks."
print(message)
if not prevent_thread_lock:
ui.block_thread()
def _get_default_ui(self) -> "gr.Blocks":
"""Default UI if not provided (lists webhooks and provides basic instructions)."""
import gradio as gr
with gr.Blocks() as ui:
gr.Markdown("# This is an app to process 🤗 Webhooks")
gr.Markdown(
"Webhooks are a foundation for MLOps-related features. They allow you to listen for new changes on"
" specific repos or to all repos belonging to particular set of users/organizations (not just your"
" repos, but any repo). Check out this [guide](https://huggingface.co/docs/hub/webhooks) to get to"
" know more about webhooks on the Huggingface Hub."
)
gr.Markdown(
f"{len(self.registered_webhooks)} webhook(s) are registered:"
+ "\n\n"
+ "\n ".join(
f"- [{webhook_path}]({_get_webhook_doc_url(webhook.__name__, webhook_path)})"
for webhook_path, webhook in self.registered_webhooks.items()
)
)
gr.Markdown(
"Go to https://huggingface.co/settings/webhooks to setup your webhooks."
+ "\nYou app is running locally. Please look at the logs to check the full URL you need to set."
if _is_local
else (
"\nThis app is running on a Space. You can find the corresponding URL in the options menu"
" (top-right) > 'Embed the Space'. The URL looks like 'https://{username}-{repo_name}.hf.space'."
)
)
return ui | class_definition | 1,356 | 9,272 | 0 | /Users/nielsrogge/Documents/python_projecten/huggingface_hub/src/huggingface_hub/_webhooks_server.py | null | 20 |
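For illustration, here is a minimal, self-contained sketch that combines the pieces documented above: a custom endpoint path, a webhook secret, and a typed payload. The repo/event checks mirror the `add_webhook` docstring example; the path, secret value, and handler name are placeholders.

```python
from huggingface_hub import WebhooksServer, WebhookPayload

app = WebhooksServer(webhook_secret="my_secret_key")  # or set the WEBHOOK_SECRET env var

@app.add_webhook("/dataset_updated")  # registered under /webhooks/dataset_updated
async def on_dataset_update(payload: WebhookPayload):
    # Only react to updates on dataset repos (same check as in the docstring example)
    if payload.repo.type == "dataset" and payload.event.action == "update":
        return {"processed": True}
    return {"processed": False}

app.launch()
```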
class SpaceStage(str, Enum):
"""
    Enumeration of possible stages of a Space on the Hub.
Value can be compared to a string:
```py
assert SpaceStage.BUILDING == "BUILDING"
```
Taken from https://github.com/huggingface/moon-landing/blob/main/server/repo_types/SpaceInfo.ts#L61 (private url).
"""
# Copied from moon-landing > server > repo_types > SpaceInfo.ts (private repo)
NO_APP_FILE = "NO_APP_FILE"
CONFIG_ERROR = "CONFIG_ERROR"
BUILDING = "BUILDING"
BUILD_ERROR = "BUILD_ERROR"
RUNNING = "RUNNING"
RUNNING_BUILDING = "RUNNING_BUILDING"
RUNTIME_ERROR = "RUNTIME_ERROR"
DELETING = "DELETING"
STOPPED = "STOPPED"
PAUSED = "PAUSED" | class_definition | 786 | 1,492 | 0 | /Users/nielsrogge/Documents/python_projecten/huggingface_hub/src/huggingface_hub/_space_api.py | null | 21 |
class SpaceHardware(str, Enum):
"""
    Enumeration of hardware options available to run your Space on the Hub.
Value can be compared to a string:
```py
assert SpaceHardware.CPU_BASIC == "cpu-basic"
```
Taken from https://github.com/huggingface/moon-landing/blob/main/server/repo_types/SpaceInfo.ts#L73 (private url).
"""
CPU_BASIC = "cpu-basic"
CPU_UPGRADE = "cpu-upgrade"
T4_SMALL = "t4-small"
T4_MEDIUM = "t4-medium"
L4X1 = "l4x1"
L4X4 = "l4x4"
ZERO_A10G = "zero-a10g"
A10G_SMALL = "a10g-small"
A10G_LARGE = "a10g-large"
A10G_LARGEX2 = "a10g-largex2"
A10G_LARGEX4 = "a10g-largex4"
A100_LARGE = "a100-large"
V5E_1X1 = "v5e-1x1"
V5E_2X2 = "v5e-2x2"
V5E_2X4 = "v5e-2x4" | class_definition | 1,495 | 2,248 | 0 | /Users/nielsrogge/Documents/python_projecten/huggingface_hub/src/huggingface_hub/_space_api.py | null | 22 |
class SpaceStorage(str, Enum):
"""
Enumeration of persistent storage available for your Space on the Hub.
Value can be compared to a string:
```py
assert SpaceStorage.SMALL == "small"
```
Taken from https://github.com/huggingface/moon-landing/blob/main/server/repo_types/SpaceHardwareFlavor.ts#L24 (private url).
"""
SMALL = "small"
MEDIUM = "medium"
LARGE = "large" | class_definition | 2,251 | 2,664 | 0 | /Users/nielsrogge/Documents/python_projecten/huggingface_hub/src/huggingface_hub/_space_api.py | null | 23 |
class SpaceRuntime:
"""
Contains information about the current runtime of a Space.
Args:
stage (`str`):
Current stage of the space. Example: RUNNING.
hardware (`str` or `None`):
Current hardware of the space. Example: "cpu-basic". Can be `None` if Space
is `BUILDING` for the first time.
requested_hardware (`str` or `None`):
            Requested hardware. Can be different from `hardware`, especially if the request
            has just been made. Example: "t4-medium". Can be `None` if no hardware has
been requested yet.
sleep_time (`int` or `None`):
Number of seconds the Space will be kept alive after the last request. By default (if value is `None`), the
            Space will never go to sleep if it's running on upgraded hardware, while it will go to sleep after 48
            hours on the free 'cpu-basic' hardware. For more details, see https://huggingface.co/docs/hub/spaces-gpus#sleep-time.
raw (`dict`):
Raw response from the server. Contains more information about the Space
runtime like number of replicas, number of cpu, memory size,...
"""
stage: SpaceStage
hardware: Optional[SpaceHardware]
requested_hardware: Optional[SpaceHardware]
sleep_time: Optional[int]
storage: Optional[SpaceStorage]
raw: Dict
def __init__(self, data: Dict) -> None:
self.stage = data["stage"]
self.hardware = data.get("hardware", {}).get("current")
self.requested_hardware = data.get("hardware", {}).get("requested")
self.sleep_time = data.get("gcTimeout")
self.storage = data.get("storage")
self.raw = data | class_definition | 2,678 | 4,403 | 0 | /Users/nielsrogge/Documents/python_projecten/huggingface_hub/src/huggingface_hub/_space_api.py | null | 24 |
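A short sketch of reading a Space's runtime, assuming the `HfApi.get_space_runtime` helper and a hypothetical `user/my-space` repo:

```python
from huggingface_hub import HfApi, SpaceStage

runtime = HfApi().get_space_runtime("user/my-space")  # hypothetical repo id
if runtime.stage == SpaceStage.RUNNING:
    print(f"Running on {runtime.hardware} (requested: {runtime.requested_hardware})")
elif runtime.stage == SpaceStage.PAUSED:
    print("Space is paused.")
# `runtime.raw` keeps the full server response (replicas, cpu, memory, ...)
```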
class SpaceVariable:
"""
Contains information about the current variables of a Space.
Args:
key (`str`):
Variable key. Example: `"MODEL_REPO_ID"`
value (`str`):
Variable value. Example: `"the_model_repo_id"`.
description (`str` or None):
Description of the variable. Example: `"Model Repo ID of the implemented model"`.
        updated_at (`datetime` or `None`):
datetime of the last update of the variable (if the variable has been updated at least once).
"""
key: str
value: str
description: Optional[str]
updated_at: Optional[datetime]
def __init__(self, key: str, values: Dict) -> None:
self.key = key
self.value = values["value"]
self.description = values.get("description")
updated_at = values.get("updatedAt")
self.updated_at = parse_datetime(updated_at) if updated_at is not None else None | class_definition | 4,417 | 5,362 | 0 | /Users/nielsrogge/Documents/python_projecten/huggingface_hub/src/huggingface_hub/_space_api.py | null | 25 |
class KerasModelHubMixin(ModelHubMixin):
"""
Implementation of [`ModelHubMixin`] to provide model Hub upload/download
capabilities to Keras models.
```python
>>> import tensorflow as tf
>>> from huggingface_hub import KerasModelHubMixin
>>> class MyModel(tf.keras.Model, KerasModelHubMixin):
... def __init__(self, **kwargs):
... super().__init__()
... self.config = kwargs.pop("config", None)
... self.dummy_inputs = ...
... self.layer = ...
... def call(self, *args):
... return ...
>>> # Initialize and compile the model as you normally would
>>> model = MyModel()
>>> model.compile(...)
>>> # Build the graph by training it or passing dummy inputs
>>> _ = model(model.dummy_inputs)
>>> # Save model weights to local directory
>>> model.save_pretrained("my-awesome-model")
>>> # Push model weights to the Hub
>>> model.push_to_hub("my-awesome-model")
>>> # Download and initialize weights from the Hub
>>> model = MyModel.from_pretrained("username/super-cool-model")
```
"""
def _save_pretrained(self, save_directory):
save_pretrained_keras(self, save_directory)
@classmethod
def _from_pretrained(
cls,
model_id,
revision,
cache_dir,
force_download,
proxies,
resume_download,
local_files_only,
token,
config: Optional[Dict[str, Any]] = None,
**model_kwargs,
):
"""Here we just call [`from_pretrained_keras`] function so both the mixin and
functional APIs stay in sync.
TODO - Some args above aren't used since we are calling
snapshot_download instead of hf_hub_download.
"""
if keras is None:
raise ImportError("Called a TensorFlow-specific function but could not import it.")
# Root is either a local filepath matching model_id or a cached snapshot
if not os.path.isdir(model_id):
storage_folder = snapshot_download(
repo_id=model_id,
revision=revision,
cache_dir=cache_dir,
library_name="keras",
library_version=get_tf_version(),
)
else:
storage_folder = model_id
# TODO: change this in a future PR. We are not returning a KerasModelHubMixin instance here...
model = keras.models.load_model(storage_folder)
# For now, we add a new attribute, config, to store the config loaded from the hub/a local dir.
model.config = config
return model | class_definition | 16,893 | 19,573 | 0 | /Users/nielsrogge/Documents/python_projecten/huggingface_hub/src/huggingface_hub/keras_mixin.py | null | 26 |
class _FileToUpload:
"""Temporary dataclass to store info about files to upload. Not meant to be used directly."""
local_path: Path
path_in_repo: str
size_limit: int
last_modified: float | class_definition | 461 | 668 | 0 | /Users/nielsrogge/Documents/python_projecten/huggingface_hub/src/huggingface_hub/_commit_scheduler.py | null | 27 |
class CommitScheduler:
"""
Scheduler to upload a local folder to the Hub at regular intervals (e.g. push to hub every 5 minutes).
The recommended way to use the scheduler is to use it as a context manager. This ensures that the scheduler is
properly stopped and the last commit is triggered when the script ends. The scheduler can also be stopped manually
with the `stop` method. Checkout the [upload guide](https://huggingface.co/docs/huggingface_hub/guides/upload#scheduled-uploads)
to learn more about how to use it.
Args:
repo_id (`str`):
The id of the repo to commit to.
folder_path (`str` or `Path`):
Path to the local folder to upload regularly.
every (`int` or `float`, *optional*):
The number of minutes between each commit. Defaults to 5 minutes.
path_in_repo (`str`, *optional*):
Relative path of the directory in the repo, for example: `"checkpoints/"`. Defaults to the root folder
of the repository.
repo_type (`str`, *optional*):
The type of the repo to commit to. Defaults to `model`.
revision (`str`, *optional*):
The revision of the repo to commit to. Defaults to `main`.
private (`bool`, *optional*):
Whether to make the repo private. If `None` (default), the repo will be public unless the organization's default is private. This value is ignored if the repo already exists.
token (`str`, *optional*):
The token to use to commit to the repo. Defaults to the token saved on the machine.
allow_patterns (`List[str]` or `str`, *optional*):
If provided, only files matching at least one pattern are uploaded.
ignore_patterns (`List[str]` or `str`, *optional*):
If provided, files matching any of the patterns are not uploaded.
squash_history (`bool`, *optional*):
Whether to squash the history of the repo after each commit. Defaults to `False`. Squashing commits is
useful to avoid degraded performances on the repo when it grows too large.
hf_api (`HfApi`, *optional*):
The [`HfApi`] client to use to commit to the Hub. Can be set with custom settings (user agent, token,...).
Example:
```py
>>> from pathlib import Path
>>> from huggingface_hub import CommitScheduler
# Scheduler uploads every 10 minutes
>>> csv_path = Path("watched_folder/data.csv")
>>> CommitScheduler(repo_id="test_scheduler", repo_type="dataset", folder_path=csv_path.parent, every=10)
>>> with csv_path.open("a") as f:
... f.write("first line")
# Some time later (...)
>>> with csv_path.open("a") as f:
... f.write("second line")
```
Example using a context manager:
```py
>>> from pathlib import Path
>>> from huggingface_hub import CommitScheduler
>>> with CommitScheduler(repo_id="test_scheduler", repo_type="dataset", folder_path="watched_folder", every=10) as scheduler:
... csv_path = Path("watched_folder/data.csv")
... with csv_path.open("a") as f:
... f.write("first line")
... (...)
... with csv_path.open("a") as f:
... f.write("second line")
    # Scheduler is now stopped and the last commit has been triggered
```
"""
def __init__(
self,
*,
repo_id: str,
folder_path: Union[str, Path],
every: Union[int, float] = 5,
path_in_repo: Optional[str] = None,
repo_type: Optional[str] = None,
revision: Optional[str] = None,
private: Optional[bool] = None,
token: Optional[str] = None,
allow_patterns: Optional[Union[List[str], str]] = None,
ignore_patterns: Optional[Union[List[str], str]] = None,
squash_history: bool = False,
hf_api: Optional["HfApi"] = None,
) -> None:
self.api = hf_api or HfApi(token=token)
# Folder
self.folder_path = Path(folder_path).expanduser().resolve()
self.path_in_repo = path_in_repo or ""
self.allow_patterns = allow_patterns
if ignore_patterns is None:
ignore_patterns = []
elif isinstance(ignore_patterns, str):
ignore_patterns = [ignore_patterns]
self.ignore_patterns = ignore_patterns + DEFAULT_IGNORE_PATTERNS
if self.folder_path.is_file():
raise ValueError(f"'folder_path' must be a directory, not a file: '{self.folder_path}'.")
self.folder_path.mkdir(parents=True, exist_ok=True)
# Repository
repo_url = self.api.create_repo(repo_id=repo_id, private=private, repo_type=repo_type, exist_ok=True)
self.repo_id = repo_url.repo_id
self.repo_type = repo_type
self.revision = revision
self.token = token
# Keep track of already uploaded files
self.last_uploaded: Dict[Path, float] = {} # key is local path, value is timestamp
# Scheduler
if not every > 0:
raise ValueError(f"'every' must be a positive integer, not '{every}'.")
self.lock = Lock()
self.every = every
self.squash_history = squash_history
logger.info(f"Scheduled job to push '{self.folder_path}' to '{self.repo_id}' every {self.every} minutes.")
self._scheduler_thread = Thread(target=self._run_scheduler, daemon=True)
self._scheduler_thread.start()
atexit.register(self._push_to_hub)
self.__stopped = False
def stop(self) -> None:
"""Stop the scheduler.
        A stopped scheduler cannot be restarted. Mostly for testing purposes.
"""
self.__stopped = True
def __enter__(self) -> "CommitScheduler":
return self
def __exit__(self, exc_type, exc_value, traceback) -> None:
# Upload last changes before exiting
self.trigger().result()
self.stop()
return
def _run_scheduler(self) -> None:
"""Dumb thread waiting between each scheduled push to Hub."""
while True:
self.last_future = self.trigger()
time.sleep(self.every * 60)
if self.__stopped:
break
def trigger(self) -> Future:
"""Trigger a `push_to_hub` and return a future.
This method is automatically called every `every` minutes. You can also call it manually to trigger a commit
immediately, without waiting for the next scheduled commit.
"""
return self.api.run_as_future(self._push_to_hub)
def _push_to_hub(self) -> Optional[CommitInfo]:
if self.__stopped: # If stopped, already scheduled commits are ignored
return None
logger.info("(Background) scheduled commit triggered.")
try:
value = self.push_to_hub()
if self.squash_history:
logger.info("(Background) squashing repo history.")
self.api.super_squash_history(repo_id=self.repo_id, repo_type=self.repo_type, branch=self.revision)
return value
except Exception as e:
logger.error(f"Error while pushing to Hub: {e}") # Depending on the setup, error might be silenced
raise
def push_to_hub(self) -> Optional[CommitInfo]:
"""
Push folder to the Hub and return the commit info.
<Tip warning={true}>
This method is not meant to be called directly. It is run in the background by the scheduler, respecting a
queue mechanism to avoid concurrent commits. Making a direct call to the method might lead to concurrency
issues.
</Tip>
The default behavior of `push_to_hub` is to assume an append-only folder. It lists all files in the folder and
uploads only changed files. If no changes are found, the method returns without committing anything. If you want
to change this behavior, you can inherit from [`CommitScheduler`] and override this method. This can be useful
for example to compress data together in a single file before committing. For more details and examples, check
out our [integration guide](https://huggingface.co/docs/huggingface_hub/main/en/guides/upload#scheduled-uploads).
"""
# Check files to upload (with lock)
with self.lock:
logger.debug("Listing files to upload for scheduled commit.")
# List files from folder (taken from `_prepare_upload_folder_additions`)
relpath_to_abspath = {
path.relative_to(self.folder_path).as_posix(): path
for path in sorted(self.folder_path.glob("**/*")) # sorted to be deterministic
if path.is_file()
}
prefix = f"{self.path_in_repo.strip('/')}/" if self.path_in_repo else ""
# Filter with pattern + filter out unchanged files + retrieve current file size
files_to_upload: List[_FileToUpload] = []
for relpath in filter_repo_objects(
relpath_to_abspath.keys(), allow_patterns=self.allow_patterns, ignore_patterns=self.ignore_patterns
):
local_path = relpath_to_abspath[relpath]
stat = local_path.stat()
if self.last_uploaded.get(local_path) is None or self.last_uploaded[local_path] != stat.st_mtime:
files_to_upload.append(
_FileToUpload(
local_path=local_path,
path_in_repo=prefix + relpath,
size_limit=stat.st_size,
last_modified=stat.st_mtime,
)
)
# Return if nothing to upload
if len(files_to_upload) == 0:
logger.debug("Dropping schedule commit: no changed file to upload.")
return None
# Convert `_FileToUpload` as `CommitOperationAdd` (=> compute file shas + limit to file size)
logger.debug("Removing unchanged files since previous scheduled commit.")
add_operations = [
CommitOperationAdd(
                # Cap the file to its current size, even if the user appends data to it while a scheduled commit is happening
path_or_fileobj=PartialFileIO(file_to_upload.local_path, size_limit=file_to_upload.size_limit),
path_in_repo=file_to_upload.path_in_repo,
)
for file_to_upload in files_to_upload
]
# Upload files (append mode expected - no need for lock)
logger.debug("Uploading files for scheduled commit.")
commit_info = self.api.create_commit(
repo_id=self.repo_id,
repo_type=self.repo_type,
operations=add_operations,
commit_message="Scheduled Commit",
revision=self.revision,
)
# Successful commit: keep track of the latest "last_modified" for each file
for file in files_to_upload:
self.last_uploaded[file.local_path] = file.last_modified
return commit_info | class_definition | 671 | 11,802 | 0 | /Users/nielsrogge/Documents/python_projecten/huggingface_hub/src/huggingface_hub/_commit_scheduler.py | null | 28 |
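As suggested in the `push_to_hub` docstring above, the append-only behavior can be replaced by subclassing. A minimal sketch, assuming you want to upload the whole watched folder as a single zip archive per commit (the archive name and commit message are arbitrary):

```python
import shutil
import tempfile
from pathlib import Path

from huggingface_hub import CommitScheduler

class ZipScheduler(CommitScheduler):
    """Sketch: archive the watched folder into one zip file per scheduled commit."""

    def push_to_hub(self):
        with self.lock:  # prevent concurrent scheduled commits
            with tempfile.TemporaryDirectory() as tmp:
                archive = shutil.make_archive(str(Path(tmp) / "data"), "zip", self.folder_path)
                return self.api.upload_file(
                    repo_id=self.repo_id,
                    repo_type=self.repo_type,
                    revision=self.revision,
                    path_or_fileobj=archive,
                    path_in_repo="data.zip",
                    commit_message="Scheduled Commit (zipped)",
                )
```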
class PartialFileIO(BytesIO):
"""A file-like object that reads only the first part of a file.
Useful to upload a file to the Hub when the user might still be appending data to it. Only the first part of the
file is uploaded (i.e. the part that was available when the filesystem was first scanned).
In practice, only used internally by the CommitScheduler to regularly push a folder to the Hub with minimal
disturbance for the user. The object is passed to `CommitOperationAdd`.
Only supports `read`, `tell` and `seek` methods.
Args:
file_path (`str` or `Path`):
Path to the file to read.
size_limit (`int`):
The maximum number of bytes to read from the file. If the file is larger than this, only the first part
will be read (and uploaded).
"""
def __init__(self, file_path: Union[str, Path], size_limit: int) -> None:
self._file_path = Path(file_path)
self._file = self._file_path.open("rb")
self._size_limit = min(size_limit, os.fstat(self._file.fileno()).st_size)
def __del__(self) -> None:
self._file.close()
return super().__del__()
def __repr__(self) -> str:
return f"<PartialFileIO file_path={self._file_path} size_limit={self._size_limit}>"
def __len__(self) -> int:
return self._size_limit
def __getattribute__(self, name: str):
if name.startswith("_") or name in ("read", "tell", "seek"): # only 3 public methods supported
return super().__getattribute__(name)
raise NotImplementedError(f"PartialFileIO does not support '{name}'.")
def tell(self) -> int:
"""Return the current file position."""
return self._file.tell()
def seek(self, __offset: int, __whence: int = SEEK_SET) -> int:
"""Change the stream position to the given offset.
Behavior is the same as a regular file, except that the position is capped to the size limit.
"""
if __whence == SEEK_END:
# SEEK_END => set from the truncated end
__offset = len(self) + __offset
__whence = SEEK_SET
pos = self._file.seek(__offset, __whence)
if pos > self._size_limit:
return self._file.seek(self._size_limit)
return pos
def read(self, __size: Optional[int] = -1) -> bytes:
"""Read at most `__size` bytes from the file.
Behavior is the same as a regular file, except that it is capped to the size limit.
"""
current = self._file.tell()
if __size is None or __size < 0:
# Read until file limit
truncated_size = self._size_limit - current
else:
# Read until file limit or __size
truncated_size = min(__size, self._size_limit - current)
return self._file.read(truncated_size) | class_definition | 11,805 | 14,678 | 0 | /Users/nielsrogge/Documents/python_projecten/huggingface_hub/src/huggingface_hub/_commit_scheduler.py | null | 29 |
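The capping behavior can be checked directly. Note that `PartialFileIO` lives in the private `_commit_scheduler` module (per the file path above), so importing it is shown for illustration only:

```python
from huggingface_hub._commit_scheduler import PartialFileIO

with open("data.log", "wb") as f:
    f.write(b"0123456789")

partial = PartialFileIO("data.log", size_limit=4)
assert len(partial) == 4
assert partial.read() == b"0123"  # reads are capped at the size limit
assert partial.read() == b""      # already at the truncated end
partial.seek(0)
assert partial.read(2) == b"01"
```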
class LocalDownloadFilePaths:
"""
Paths to the files related to a download process in a local dir.
Returned by [`get_local_download_paths`].
Attributes:
file_path (`Path`):
Path where the file will be saved.
lock_path (`Path`):
Path to the lock file used to ensure atomicity when reading/writing metadata.
metadata_path (`Path`):
Path to the metadata file.
"""
file_path: Path
lock_path: Path
metadata_path: Path
def incomplete_path(self, etag: str) -> Path:
"""Return the path where a file will be temporarily downloaded before being moved to `file_path`."""
return self.metadata_path.with_suffix(f".{etag}.incomplete") | class_definition | 1,891 | 2,627 | 0 | /Users/nielsrogge/Documents/python_projecten/huggingface_hub/src/huggingface_hub/_local_folder.py | null | 30 |
class LocalUploadFilePaths:
"""
Paths to the files related to an upload process in a local dir.
Returned by [`get_local_upload_paths`].
Attributes:
path_in_repo (`str`):
Path of the file in the repo.
file_path (`Path`):
Path where the file will be saved.
lock_path (`Path`):
Path to the lock file used to ensure atomicity when reading/writing metadata.
metadata_path (`Path`):
Path to the metadata file.
"""
path_in_repo: str
file_path: Path
lock_path: Path
metadata_path: Path | class_definition | 2,654 | 3,250 | 0 | /Users/nielsrogge/Documents/python_projecten/huggingface_hub/src/huggingface_hub/_local_folder.py | null | 31 |
class LocalDownloadFileMetadata:
"""
Metadata about a file in the local directory related to a download process.
Attributes:
filename (`str`):
Path of the file in the repo.
commit_hash (`str`):
Commit hash of the file in the repo.
etag (`str`):
ETag of the file in the repo. Used to check if the file has changed.
For LFS files, this is the sha256 of the file. For regular files, it corresponds to the git hash.
timestamp (`int`):
Unix timestamp of when the metadata was saved i.e. when the metadata was accurate.
"""
filename: str
commit_hash: str
etag: str
timestamp: float | class_definition | 3,264 | 3,965 | 0 | /Users/nielsrogge/Documents/python_projecten/huggingface_hub/src/huggingface_hub/_local_folder.py | null | 32 |
class LocalUploadFileMetadata:
"""
Metadata about a file in the local directory related to an upload process.
"""
size: int
# Default values correspond to "we don't know yet"
timestamp: Optional[float] = None
should_ignore: Optional[bool] = None
sha256: Optional[str] = None
upload_mode: Optional[str] = None
is_uploaded: bool = False
is_committed: bool = False
def save(self, paths: LocalUploadFilePaths) -> None:
"""Save the metadata to disk."""
with WeakFileLock(paths.lock_path):
with paths.metadata_path.open("w") as f:
new_timestamp = time.time()
f.write(str(new_timestamp) + "\n")
f.write(str(self.size)) # never None
f.write("\n")
if self.should_ignore is not None:
f.write(str(int(self.should_ignore)))
f.write("\n")
if self.sha256 is not None:
f.write(self.sha256)
f.write("\n")
if self.upload_mode is not None:
f.write(self.upload_mode)
f.write("\n")
f.write(str(int(self.is_uploaded)) + "\n")
f.write(str(int(self.is_committed)) + "\n")
self.timestamp = new_timestamp | class_definition | 3,979 | 5,308 | 0 | /Users/nielsrogge/Documents/python_projecten/huggingface_hub/src/huggingface_hub/_local_folder.py | null | 33 |
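For reference, a hypothetical reader for the format written by `save`. It assumes each optional field still occupies its own line (left empty when the value is unknown), i.e. a fixed seven-line layout; the library keeps its own parsing logic, so this is only a sketch:

```python
from pathlib import Path

def read_upload_metadata_sketch(metadata_path: Path) -> dict:
    # Assumed fixed layout: timestamp, size, should_ignore, sha256, upload_mode,
    # is_uploaded, is_committed (optional fields may be empty lines).
    lines = metadata_path.read_text().split("\n")
    return {
        "timestamp": float(lines[0]),
        "size": int(lines[1]),
        "should_ignore": bool(int(lines[2])) if lines[2] else None,
        "sha256": lines[3] or None,
        "upload_mode": lines[4] or None,
        "is_uploaded": bool(int(lines[5])),
        "is_committed": bool(int(lines[6])),
    }
```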
class UploadInfo:
"""
Dataclass holding required information to determine whether a blob
    should be uploaded to the Hub using the LFS protocol or the regular protocol.
Args:
sha256 (`bytes`):
SHA256 hash of the blob
size (`int`):
Size in bytes of the blob
sample (`bytes`):
First 512 bytes of the blob
"""
sha256: bytes
size: int
sample: bytes
@classmethod
def from_path(cls, path: str):
size = getsize(path)
with io.open(path, "rb") as file:
sample = file.peek(512)[:512]
sha = sha_fileobj(file)
return cls(size=size, sha256=sha, sample=sample)
@classmethod
def from_bytes(cls, data: bytes):
sha = sha256(data).digest()
return cls(size=len(data), sample=data[:512], sha256=sha)
@classmethod
def from_fileobj(cls, fileobj: BinaryIO):
sample = fileobj.read(512)
fileobj.seek(0, io.SEEK_SET)
sha = sha_fileobj(fileobj)
size = fileobj.tell()
fileobj.seek(0, io.SEEK_SET)
return cls(size=size, sha256=sha, sample=sample) | class_definition | 1,632 | 2,779 | 0 | /Users/nielsrogge/Documents/python_projecten/huggingface_hub/src/huggingface_hub/lfs.py | null | 34 |
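A quick check of the constructors' invariants (note that `sha256` is the raw 32-byte digest, not a hex string):

```python
import io
from huggingface_hub.lfs import UploadInfo

info = UploadInfo.from_bytes(b"hello world")
assert info.size == 11
assert info.sample == b"hello world"  # first 512 bytes: here, the whole payload
assert len(info.sha256) == 32         # raw SHA-256 digest

# The same values are produced from an in-memory file object
info2 = UploadInfo.from_fileobj(io.BytesIO(b"hello world"))
assert (info2.size, info2.sha256) == (info.size, info.sha256)
```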
class PayloadPartT(TypedDict):
partNumber: int
etag: str | class_definition | 5,640 | 5,704 | 0 | /Users/nielsrogge/Documents/python_projecten/huggingface_hub/src/huggingface_hub/lfs.py | null | 35 |
class CompletionPayloadT(TypedDict):
"""Payload that will be sent to the Hub when uploading multi-part."""
oid: str
parts: List[PayloadPartT] | class_definition | 5,707 | 5,861 | 0 | /Users/nielsrogge/Documents/python_projecten/huggingface_hub/src/huggingface_hub/lfs.py | null | 36 |
class HfFileMetadata:
"""Data structure containing information about a file versioned on the Hub.
Returned by [`get_hf_file_metadata`] based on a URL.
Args:
commit_hash (`str`, *optional*):
The commit_hash related to the file.
etag (`str`, *optional*):
Etag of the file on the server.
location (`str`):
Location where to download the file. Can be a Hub url or not (CDN).
        size (`int`, *optional*):
Size of the file. In case of an LFS file, contains the size of the actual
LFS file, not the pointer.
"""
commit_hash: Optional[str]
etag: Optional[str]
location: str
size: Optional[int] | class_definition | 5,777 | 6,475 | 0 | /Users/nielsrogge/Documents/python_projecten/huggingface_hub/src/huggingface_hub/file_download.py | null | 37 |
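A usage sketch with the public `hf_hub_url` + `get_hf_file_metadata` helpers (the `gpt2` repo and `config.json` file are just examples):

```python
from huggingface_hub import get_hf_file_metadata, hf_hub_url

url = hf_hub_url(repo_id="gpt2", filename="config.json")
metadata = get_hf_file_metadata(url)
print(metadata.commit_hash)          # commit the file resolves to
print(metadata.etag, metadata.size)  # etag + size in bytes
print(metadata.location)             # final download location (Hub or CDN)
```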
class Discussion:
"""
A Discussion or Pull Request on the Hub.
This dataclass is not intended to be instantiated directly.
Attributes:
title (`str`):
The title of the Discussion / Pull Request
status (`str`):
The status of the Discussion / Pull Request.
It must be one of:
* `"open"`
* `"closed"`
* `"merged"` (only for Pull Requests )
* `"draft"` (only for Pull Requests )
num (`int`):
The number of the Discussion / Pull Request.
repo_id (`str`):
The id (`"{namespace}/{repo_name}"`) of the repo on which
            the Discussion / Pull Request was opened.
repo_type (`str`):
The type of the repo on which the Discussion / Pull Request was open.
Possible values are: `"model"`, `"dataset"`, `"space"`.
author (`str`):
The username of the Discussion / Pull Request author.
Can be `"deleted"` if the user has been deleted since.
is_pull_request (`bool`):
Whether or not this is a Pull Request.
created_at (`datetime`):
The `datetime` of creation of the Discussion / Pull Request.
endpoint (`str`):
Endpoint of the Hub. Default is https://huggingface.co.
git_reference (`str`, *optional*):
(property) Git reference to which changes can be pushed if this is a Pull Request, `None` otherwise.
url (`str`):
(property) URL of the discussion on the Hub.
"""
title: str
status: DiscussionStatus
num: int
repo_id: str
repo_type: str
author: str
is_pull_request: bool
created_at: datetime
endpoint: str
@property
def git_reference(self) -> Optional[str]:
"""
        If this is a Pull Request, returns the git reference to which changes can be pushed.
Returns `None` otherwise.
"""
if self.is_pull_request:
return f"refs/pr/{self.num}"
return None
@property
def url(self) -> str:
"""Returns the URL of the discussion on the Hub."""
if self.repo_type is None or self.repo_type == constants.REPO_TYPE_MODEL:
return f"{self.endpoint}/{self.repo_id}/discussions/{self.num}"
return f"{self.endpoint}/{self.repo_type}s/{self.repo_id}/discussions/{self.num}" | class_definition | 530 | 2,958 | 0 | /Users/nielsrogge/Documents/python_projecten/huggingface_hub/src/huggingface_hub/community.py | null | 38 |
class DiscussionWithDetails(Discussion):
"""
Subclass of [`Discussion`].
Attributes:
title (`str`):
The title of the Discussion / Pull Request
status (`str`):
The status of the Discussion / Pull Request.
It can be one of:
* `"open"`
* `"closed"`
* `"merged"` (only for Pull Requests )
* `"draft"` (only for Pull Requests )
num (`int`):
The number of the Discussion / Pull Request.
repo_id (`str`):
The id (`"{namespace}/{repo_name}"`) of the repo on which
            the Discussion / Pull Request was opened.
repo_type (`str`):
The type of the repo on which the Discussion / Pull Request was open.
Possible values are: `"model"`, `"dataset"`, `"space"`.
author (`str`):
The username of the Discussion / Pull Request author.
Can be `"deleted"` if the user has been deleted since.
is_pull_request (`bool`):
Whether or not this is a Pull Request.
created_at (`datetime`):
The `datetime` of creation of the Discussion / Pull Request.
        events (`list` of [`DiscussionEvent`]):
            The list of [`DiscussionEvent`]s in this Discussion or Pull Request.
conflicting_files (`Union[List[str], bool, None]`, *optional*):
A list of conflicting files if this is a Pull Request.
`None` if `self.is_pull_request` is `False`.
`True` if there are conflicting files but the list can't be retrieved.
target_branch (`str`, *optional*):
The branch into which changes are to be merged if this is a
            Pull Request. `None` if `self.is_pull_request` is `False`.
merge_commit_oid (`str`, *optional*):
            If this is a merged Pull Request, this is set to the OID / SHA of
the merge commit, `None` otherwise.
diff (`str`, *optional*):
            The git diff if this is a Pull Request, `None` otherwise.
endpoint (`str`):
Endpoint of the Hub. Default is https://huggingface.co.
git_reference (`str`, *optional*):
(property) Git reference to which changes can be pushed if this is a Pull Request, `None` otherwise.
url (`str`):
(property) URL of the discussion on the Hub.
"""
events: List["DiscussionEvent"]
conflicting_files: Union[List[str], bool, None]
target_branch: Optional[str]
merge_commit_oid: Optional[str]
diff: Optional[str] | class_definition | 2,972 | 5,564 | 0 | /Users/nielsrogge/Documents/python_projecten/huggingface_hub/src/huggingface_hub/community.py | null | 39 |
class DiscussionEvent:
"""
An event in a Discussion or Pull Request.
Use concrete classes:
* [`DiscussionComment`]
* [`DiscussionStatusChange`]
* [`DiscussionCommit`]
* [`DiscussionTitleChange`]
Attributes:
id (`str`):
            The ID of the event. A hexadecimal string.
type (`str`):
The type of the event.
created_at (`datetime`):
A [`datetime`](https://docs.python.org/3/library/datetime.html?highlight=datetime#datetime.datetime)
object holding the creation timestamp for the event.
author (`str`):
The username of the Discussion / Pull Request author.
Can be `"deleted"` if the user has been deleted since.
"""
id: str
type: str
created_at: datetime
author: str
_event: dict
"""Stores the original event data, in case we need to access it later.""" | class_definition | 5,578 | 6,507 | 0 | /Users/nielsrogge/Documents/python_projecten/huggingface_hub/src/huggingface_hub/community.py | null | 40 |
class DiscussionComment(DiscussionEvent):
"""A comment in a Discussion / Pull Request.
Subclass of [`DiscussionEvent`].
Attributes:
id (`str`):
            The ID of the event. A hexadecimal string.
type (`str`):
The type of the event.
created_at (`datetime`):
A [`datetime`](https://docs.python.org/3/library/datetime.html?highlight=datetime#datetime.datetime)
object holding the creation timestamp for the event.
author (`str`):
The username of the Discussion / Pull Request author.
Can be `"deleted"` if the user has been deleted since.
content (`str`):
The raw markdown content of the comment. Mentions, links and images are not rendered.
edited (`bool`):
Whether or not this comment has been edited.
hidden (`bool`):
Whether or not this comment has been hidden.
"""
content: str
edited: bool
hidden: bool
@property
def rendered(self) -> str:
"""The rendered comment, as a HTML string"""
return self._event["data"]["latest"]["html"]
@property
def last_edited_at(self) -> datetime:
"""The last edit time, as a `datetime` object."""
return parse_datetime(self._event["data"]["latest"]["updatedAt"])
@property
def last_edited_by(self) -> str:
"""The last edit time, as a `datetime` object."""
return self._event["data"]["latest"].get("author", {}).get("name", "deleted")
@property
def edit_history(self) -> List[dict]:
"""The edit history of the comment"""
return self._event["data"]["history"]
@property
def number_of_edits(self) -> int:
return len(self.edit_history) | class_definition | 6,521 | 8,292 | 0 | /Users/nielsrogge/Documents/python_projecten/huggingface_hub/src/huggingface_hub/community.py | null | 41 |
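Events come back as a mixed list on [`DiscussionWithDetails`]; filtering by concrete type is the usual pattern. A sketch assuming the `HfApi.get_discussion_details` helper and an arbitrary repo/discussion number:

```python
from huggingface_hub import HfApi
from huggingface_hub.community import DiscussionComment

details = HfApi().get_discussion_details(repo_id="gpt2", discussion_num=1)
for event in details.events:
    if isinstance(event, DiscussionComment):
        suffix = " (edited)" if event.edited else ""
        print(f"{event.author}{suffix}: {event.content[:80]}")
```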
class DiscussionStatusChange(DiscussionEvent):
"""A change of status in a Discussion / Pull Request.
Subclass of [`DiscussionEvent`].
Attributes:
id (`str`):
            The ID of the event. A hexadecimal string.
type (`str`):
The type of the event.
created_at (`datetime`):
A [`datetime`](https://docs.python.org/3/library/datetime.html?highlight=datetime#datetime.datetime)
object holding the creation timestamp for the event.
author (`str`):
The username of the Discussion / Pull Request author.
Can be `"deleted"` if the user has been deleted since.
new_status (`str`):
The status of the Discussion / Pull Request after the change.
It can be one of:
* `"open"`
* `"closed"`
* `"merged"` (only for Pull Requests )
"""
new_status: str | class_definition | 8,306 | 9,238 | 0 | /Users/nielsrogge/Documents/python_projecten/huggingface_hub/src/huggingface_hub/community.py | null | 42 |
class DiscussionCommit(DiscussionEvent):
"""A commit in a Pull Request.
Subclass of [`DiscussionEvent`].
Attributes:
id (`str`):
            The ID of the event. A hexadecimal string.
type (`str`):
The type of the event.
created_at (`datetime`):
A [`datetime`](https://docs.python.org/3/library/datetime.html?highlight=datetime#datetime.datetime)
object holding the creation timestamp for the event.
author (`str`):
The username of the Discussion / Pull Request author.
Can be `"deleted"` if the user has been deleted since.
summary (`str`):
The summary of the commit.
oid (`str`):
The OID / SHA of the commit, as a hexadecimal string.
"""
summary: str
oid: str | class_definition | 9,252 | 10,073 | 0 | /Users/nielsrogge/Documents/python_projecten/huggingface_hub/src/huggingface_hub/community.py | null | 43 |
class DiscussionTitleChange(DiscussionEvent):
"""A rename event in a Discussion / Pull Request.
Subclass of [`DiscussionEvent`].
Attributes:
id (`str`):
            The ID of the event. A hexadecimal string.
type (`str`):
The type of the event.
created_at (`datetime`):
A [`datetime`](https://docs.python.org/3/library/datetime.html?highlight=datetime#datetime.datetime)
object holding the creation timestamp for the event.
author (`str`):
The username of the Discussion / Pull Request author.
Can be `"deleted"` if the user has been deleted since.
old_title (`str`):
The previous title for the Discussion / Pull Request.
new_title (`str`):
The new title.
"""
old_title: str
new_title: str | class_definition | 10,087 | 10,936 | 0 | /Users/nielsrogge/Documents/python_projecten/huggingface_hub/src/huggingface_hub/community.py | null | 44 |
class InferenceApi:
"""Client to configure requests and make calls to the HuggingFace Inference API.
Example:
```python
>>> from huggingface_hub.inference_api import InferenceApi
>>> # Mask-fill example
>>> inference = InferenceApi("bert-base-uncased")
>>> inference(inputs="The goal of life is [MASK].")
[{'sequence': 'the goal of life is life.', 'score': 0.10933292657136917, 'token': 2166, 'token_str': 'life'}]
>>> # Question Answering example
>>> inference = InferenceApi("deepset/roberta-base-squad2")
>>> inputs = {
... "question": "What's my name?",
... "context": "My name is Clara and I live in Berkeley.",
... }
>>> inference(inputs)
{'score': 0.9326569437980652, 'start': 11, 'end': 16, 'answer': 'Clara'}
>>> # Zero-shot example
>>> inference = InferenceApi("typeform/distilbert-base-uncased-mnli")
>>> inputs = "Hi, I recently bought a device from your company but it is not working as advertised and I would like to get reimbursed!"
>>> params = {"candidate_labels": ["refund", "legal", "faq"]}
>>> inference(inputs, params)
{'sequence': 'Hi, I recently bought a device from your company but it is not working as advertised and I would like to get reimbursed!', 'labels': ['refund', 'faq', 'legal'], 'scores': [0.9378499388694763, 0.04914155602455139, 0.013008488342165947]}
>>> # Overriding configured task
>>> inference = InferenceApi("bert-base-uncased", task="feature-extraction")
>>> # Text-to-image
>>> inference = InferenceApi("stabilityai/stable-diffusion-2-1")
>>> inference("cat")
<PIL.PngImagePlugin.PngImageFile image (...)>
>>> # Return as raw response to parse the output yourself
>>> inference = InferenceApi("mio/amadeus")
>>> response = inference("hello world", raw_response=True)
>>> response.headers
{"Content-Type": "audio/flac", ...}
>>> response.content # raw bytes from server
b'(...)'
```
"""
@validate_hf_hub_args
@_deprecate_method(
version="1.0",
message=(
"`InferenceApi` client is deprecated in favor of the more feature-complete `InferenceClient`. Check out"
" this guide to learn how to convert your script to use it:"
" https://huggingface.co/docs/huggingface_hub/guides/inference#legacy-inferenceapi-client."
),
)
def __init__(
self,
repo_id: str,
task: Optional[str] = None,
token: Optional[str] = None,
gpu: bool = False,
):
"""Inits headers and API call information.
Args:
repo_id (``str``):
Id of repository (e.g. `user/bert-base-uncased`).
task (``str``, `optional`, defaults ``None``):
Whether to force a task instead of using task specified in the
repository.
token (`str`, `optional`):
The API token to use as HTTP bearer authorization. This is not
the authentication token. You can find the token in
https://huggingface.co/settings/token. Alternatively, you can
find both your organizations and personal API tokens using
`HfApi().whoami(token)`.
gpu (`bool`, `optional`, defaults `False`):
                Whether to use GPU instead of CPU for inference (requires at least a
                Startup plan).
"""
self.options = {"wait_for_model": True, "use_gpu": gpu}
self.headers = build_hf_headers(token=token)
# Configure task
model_info = HfApi(token=token).model_info(repo_id=repo_id)
if not model_info.pipeline_tag and not task:
raise ValueError(
"Task not specified in the repository. Please add it to the model card"
" using pipeline_tag"
" (https://huggingface.co/docs#how-is-a-models-type-of-inference-api-and-widget-determined)"
)
if task and task != model_info.pipeline_tag:
if task not in ALL_TASKS:
raise ValueError(f"Invalid task {task}. Make sure it's valid.")
logger.warning(
"You're using a different task than the one specified in the"
" repository. Be sure to know what you're doing :)"
)
self.task = task
else:
assert model_info.pipeline_tag is not None, "Pipeline tag cannot be None"
self.task = model_info.pipeline_tag
self.api_url = f"{constants.INFERENCE_ENDPOINT}/pipeline/{self.task}/{repo_id}"
def __repr__(self):
# Do not add headers to repr to avoid leaking token.
return f"InferenceAPI(api_url='{self.api_url}', task='{self.task}', options={self.options})"
def __call__(
self,
inputs: Optional[Union[str, Dict, List[str], List[List[str]]]] = None,
params: Optional[Dict] = None,
data: Optional[bytes] = None,
raw_response: bool = False,
) -> Any:
"""Make a call to the Inference API.
Args:
inputs (`str` or `Dict` or `List[str]` or `List[List[str]]`, *optional*):
Inputs for the prediction.
params (`Dict`, *optional*):
Additional parameters for the models. Will be sent as `parameters` in the
payload.
data (`bytes`, *optional*):
Bytes content of the request. In this case, leave `inputs` and `params` empty.
raw_response (`bool`, defaults to `False`):
If `True`, the raw `Response` object is returned. You can parse its content
as preferred. By default, the content is parsed into a more practical format
(json dictionary or PIL Image for example).
"""
# Build payload
payload: Dict[str, Any] = {
"options": self.options,
}
if inputs:
payload["inputs"] = inputs
if params:
payload["parameters"] = params
# Make API call
response = get_session().post(self.api_url, headers=self.headers, json=payload, data=data)
# Let the user handle the response
if raw_response:
return response
# By default, parse the response for the user.
content_type = response.headers.get("Content-Type") or ""
if content_type.startswith("image"):
if not is_pillow_available():
raise ImportError(
f"Task '{self.task}' returned as image but Pillow is not installed."
" Please install it (`pip install Pillow`) or pass"
" `raw_response=True` to get the raw `Response` object and parse"
" the image by yourself."
)
from PIL import Image
return Image.open(io.BytesIO(response.content))
elif content_type == "application/json":
return response.json()
else:
raise NotImplementedError(
f"{content_type} output type is not implemented yet. You can pass"
" `raw_response=True` to get the raw `Response` object and parse the"
" output by yourself."
) | class_definition | 1,026 | 8,322 | 0 | /Users/nielsrogge/Documents/python_projecten/huggingface_hub/src/huggingface_hub/inference_api.py | null | 45 |
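As the deprecation message above suggests, new code should use [`InferenceClient`]. A hedged sketch of the equivalent mask-fill call from the first example (model id kept from the docstring):

```python
from huggingface_hub import InferenceClient

client = InferenceClient(model="bert-base-uncased")
for prediction in client.fill_mask("The goal of life is [MASK]."):
    print(prediction)
```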
class InferenceEndpointStatus(str, Enum):
PENDING = "pending"
INITIALIZING = "initializing"
UPDATING = "updating"
UPDATE_FAILED = "updateFailed"
RUNNING = "running"
PAUSED = "paused"
FAILED = "failed"
SCALED_TO_ZERO = "scaledToZero" | class_definition | 516 | 780 | 0 | /Users/nielsrogge/Documents/python_projecten/huggingface_hub/src/huggingface_hub/_inference_endpoints.py | null | 46 |
class InferenceEndpointType(str, Enum):
PUBlIC = "public"
PROTECTED = "protected"
PRIVATE = "private" | class_definition | 783 | 896 | 0 | /Users/nielsrogge/Documents/python_projecten/huggingface_hub/src/huggingface_hub/_inference_endpoints.py | null | 47 |
class InferenceEndpoint:
"""
Contains information about a deployed Inference Endpoint.
Args:
name (`str`):
The unique name of the Inference Endpoint.
namespace (`str`):
The namespace where the Inference Endpoint is located.
repository (`str`):
The name of the model repository deployed on this Inference Endpoint.
status ([`InferenceEndpointStatus`]):
The current status of the Inference Endpoint.
url (`str`, *optional*):
The URL of the Inference Endpoint, if available. Only a deployed Inference Endpoint will have a URL.
framework (`str`):
The machine learning framework used for the model.
revision (`str`):
The specific model revision deployed on the Inference Endpoint.
task (`str`):
The task associated with the deployed model.
created_at (`datetime.datetime`):
The timestamp when the Inference Endpoint was created.
updated_at (`datetime.datetime`):
The timestamp of the last update of the Inference Endpoint.
type ([`InferenceEndpointType`]):
The type of the Inference Endpoint (public, protected, private).
raw (`Dict`):
The raw dictionary data returned from the API.
token (`str` or `bool`, *optional*):
Authentication token for the Inference Endpoint, if set when requesting the API. Will default to the
locally saved token if not provided. Pass `token=False` if you don't want to send your token to the server.
Example:
```python
>>> from huggingface_hub import get_inference_endpoint
>>> endpoint = get_inference_endpoint("my-text-to-image")
>>> endpoint
InferenceEndpoint(name='my-text-to-image', ...)
# Get status
>>> endpoint.status
'running'
>>> endpoint.url
'https://my-text-to-image.region.vendor.endpoints.huggingface.cloud'
# Run inference
>>> endpoint.client.text_to_image(...)
# Pause endpoint to save $$$
>>> endpoint.pause()
# ...
# Resume and wait for deployment
>>> endpoint.resume()
>>> endpoint.wait()
>>> endpoint.client.text_to_image(...)
```
"""
# Field in __repr__
name: str = field(init=False)
namespace: str
repository: str = field(init=False)
status: InferenceEndpointStatus = field(init=False)
url: Optional[str] = field(init=False)
# Other fields
framework: str = field(repr=False, init=False)
revision: str = field(repr=False, init=False)
task: str = field(repr=False, init=False)
created_at: datetime = field(repr=False, init=False)
updated_at: datetime = field(repr=False, init=False)
type: InferenceEndpointType = field(repr=False, init=False)
# Raw dict from the API
raw: Dict = field(repr=False)
# Internal fields
_token: Union[str, bool, None] = field(repr=False, compare=False)
_api: "HfApi" = field(repr=False, compare=False)
@classmethod
def from_raw(
cls, raw: Dict, namespace: str, token: Union[str, bool, None] = None, api: Optional["HfApi"] = None
) -> "InferenceEndpoint":
"""Initialize object from raw dictionary."""
if api is None:
from .hf_api import HfApi
api = HfApi()
if token is None:
token = api.token
# All other fields are populated in __post_init__
return cls(raw=raw, namespace=namespace, _token=token, _api=api)
def __post_init__(self) -> None:
"""Populate fields from raw dictionary."""
self._populate_from_raw()
@property
def client(self) -> InferenceClient:
"""Returns a client to make predictions on this Inference Endpoint.
Returns:
[`InferenceClient`]: an inference client pointing to the deployed endpoint.
Raises:
[`InferenceEndpointError`]: If the Inference Endpoint is not yet deployed.
"""
if self.url is None:
raise InferenceEndpointError(
"Cannot create a client for this Inference Endpoint as it is not yet deployed. "
"Please wait for the Inference Endpoint to be deployed using `endpoint.wait()` and try again."
)
return InferenceClient(model=self.url, token=self._token)
@property
def async_client(self) -> AsyncInferenceClient:
"""Returns a client to make predictions on this Inference Endpoint.
Returns:
[`AsyncInferenceClient`]: an asyncio-compatible inference client pointing to the deployed endpoint.
Raises:
[`InferenceEndpointError`]: If the Inference Endpoint is not yet deployed.
"""
if self.url is None:
raise InferenceEndpointError(
"Cannot create a client for this Inference Endpoint as it is not yet deployed. "
"Please wait for the Inference Endpoint to be deployed using `endpoint.wait()` and try again."
)
return AsyncInferenceClient(model=self.url, token=self._token)
def wait(self, timeout: Optional[int] = None, refresh_every: int = 5) -> "InferenceEndpoint":
"""Wait for the Inference Endpoint to be deployed.
        Information from the server is fetched every `refresh_every` seconds. If the Inference Endpoint is not deployed
        after `timeout` seconds, an [`InferenceEndpointTimeoutError`] will be raised. The [`InferenceEndpoint`] will be
        mutated in place with the latest data.
Args:
timeout (`int`, *optional*):
The maximum time to wait for the Inference Endpoint to be deployed, in seconds. If `None`, will wait
indefinitely.
refresh_every (`int`, *optional*):
The time to wait between each fetch of the Inference Endpoint status, in seconds. Defaults to 5s.
Returns:
[`InferenceEndpoint`]: the same Inference Endpoint, mutated in place with the latest data.
Raises:
[`InferenceEndpointError`]
If the Inference Endpoint ended up in a failed state.
[`InferenceEndpointTimeoutError`]
If the Inference Endpoint is not deployed after `timeout` seconds.
"""
if timeout is not None and timeout < 0:
raise ValueError("`timeout` cannot be negative.")
if refresh_every <= 0:
raise ValueError("`refresh_every` must be positive.")
start = time.time()
while True:
if self.url is not None:
# Means the URL is provisioned => check if the endpoint is reachable
response = get_session().get(self.url, headers=self._api._build_hf_headers(token=self._token))
if response.status_code == 200:
logger.info("Inference Endpoint is ready to be used.")
return self
if self.status == InferenceEndpointStatus.FAILED:
raise InferenceEndpointError(
f"Inference Endpoint {self.name} failed to deploy. Please check the logs for more information."
)
if timeout is not None:
if time.time() - start > timeout:
raise InferenceEndpointTimeoutError("Timeout while waiting for Inference Endpoint to be deployed.")
logger.info(f"Inference Endpoint is not deployed yet ({self.status}). Waiting {refresh_every}s...")
time.sleep(refresh_every)
self.fetch()
def fetch(self) -> "InferenceEndpoint":
"""Fetch latest information about the Inference Endpoint.
Returns:
[`InferenceEndpoint`]: the same Inference Endpoint, mutated in place with the latest data.
"""
obj = self._api.get_inference_endpoint(name=self.name, namespace=self.namespace, token=self._token) # type: ignore [arg-type]
self.raw = obj.raw
self._populate_from_raw()
return self
def update(
self,
*,
# Compute update
accelerator: Optional[str] = None,
instance_size: Optional[str] = None,
instance_type: Optional[str] = None,
min_replica: Optional[int] = None,
max_replica: Optional[int] = None,
scale_to_zero_timeout: Optional[int] = None,
# Model update
repository: Optional[str] = None,
framework: Optional[str] = None,
revision: Optional[str] = None,
task: Optional[str] = None,
custom_image: Optional[Dict] = None,
secrets: Optional[Dict[str, str]] = None,
) -> "InferenceEndpoint":
"""Update the Inference Endpoint.
This method allows the update of either the compute configuration, the deployed model, or both. All arguments are
optional but at least one must be provided.
This is an alias for [`HfApi.update_inference_endpoint`]. The current object is mutated in place with the
latest data from the server.
Args:
accelerator (`str`, *optional*):
The hardware accelerator to be used for inference (e.g. `"cpu"`).
instance_size (`str`, *optional*):
The size or type of the instance to be used for hosting the model (e.g. `"x4"`).
instance_type (`str`, *optional*):
The cloud instance type where the Inference Endpoint will be deployed (e.g. `"intel-icl"`).
min_replica (`int`, *optional*):
The minimum number of replicas (instances) to keep running for the Inference Endpoint.
max_replica (`int`, *optional*):
The maximum number of replicas (instances) to scale to for the Inference Endpoint.
scale_to_zero_timeout (`int`, *optional*):
The duration in minutes before an inactive endpoint is scaled to zero.
repository (`str`, *optional*):
The name of the model repository associated with the Inference Endpoint (e.g. `"gpt2"`).
framework (`str`, *optional*):
The machine learning framework used for the model (e.g. `"custom"`).
revision (`str`, *optional*):
The specific model revision to deploy on the Inference Endpoint (e.g. `"6c0e6080953db56375760c0471a8c5f2929baf11"`).
task (`str`, *optional*):
The task on which to deploy the model (e.g. `"text-classification"`).
custom_image (`Dict`, *optional*):
A custom Docker image to use for the Inference Endpoint. This is useful if you want to deploy an
Inference Endpoint running on the `text-generation-inference` (TGI) framework (see examples).
secrets (`Dict[str, str]`, *optional*):
Secret values to inject in the container environment.
Returns:
[`InferenceEndpoint`]: the same Inference Endpoint, mutated in place with the latest data.
"""
# Make API call
obj = self._api.update_inference_endpoint(
name=self.name,
namespace=self.namespace,
accelerator=accelerator,
instance_size=instance_size,
instance_type=instance_type,
min_replica=min_replica,
max_replica=max_replica,
scale_to_zero_timeout=scale_to_zero_timeout,
repository=repository,
framework=framework,
revision=revision,
task=task,
custom_image=custom_image,
secrets=secrets,
token=self._token, # type: ignore [arg-type]
)
# Mutate current object
self.raw = obj.raw
self._populate_from_raw()
return self
def pause(self) -> "InferenceEndpoint":
"""Pause the Inference Endpoint.
A paused Inference Endpoint will not be charged. It can be resumed at any time using [`InferenceEndpoint.resume`].
        This is different from scaling the Inference Endpoint to zero with [`InferenceEndpoint.scale_to_zero`], which
would be automatically restarted when a request is made to it.
This is an alias for [`HfApi.pause_inference_endpoint`]. The current object is mutated in place with the
latest data from the server.
Returns:
[`InferenceEndpoint`]: the same Inference Endpoint, mutated in place with the latest data.
"""
obj = self._api.pause_inference_endpoint(name=self.name, namespace=self.namespace, token=self._token) # type: ignore [arg-type]
self.raw = obj.raw
self._populate_from_raw()
return self
def resume(self, running_ok: bool = True) -> "InferenceEndpoint":
"""Resume the Inference Endpoint.
This is an alias for [`HfApi.resume_inference_endpoint`]. The current object is mutated in place with the
latest data from the server.
Args:
running_ok (`bool`, *optional*):
If `True`, the method will not raise an error if the Inference Endpoint is already running. Defaults to
`True`.
Returns:
[`InferenceEndpoint`]: the same Inference Endpoint, mutated in place with the latest data.
"""
obj = self._api.resume_inference_endpoint(
name=self.name, namespace=self.namespace, running_ok=running_ok, token=self._token
) # type: ignore [arg-type]
self.raw = obj.raw
self._populate_from_raw()
return self
def scale_to_zero(self) -> "InferenceEndpoint":
"""Scale Inference Endpoint to zero.
        An Inference Endpoint scaled to zero will not be charged. It will be resumed on the next request to it, with a
        cold start delay. This is different from pausing the Inference Endpoint with [`InferenceEndpoint.pause`], which
would require a manual resume with [`InferenceEndpoint.resume`].
This is an alias for [`HfApi.scale_to_zero_inference_endpoint`]. The current object is mutated in place with the
latest data from the server.
Returns:
[`InferenceEndpoint`]: the same Inference Endpoint, mutated in place with the latest data.
"""
obj = self._api.scale_to_zero_inference_endpoint(name=self.name, namespace=self.namespace, token=self._token) # type: ignore [arg-type]
self.raw = obj.raw
self._populate_from_raw()
return self
def delete(self) -> None:
"""Delete the Inference Endpoint.
This operation is not reversible. If you don't want to be charged for an Inference Endpoint, it is preferable
to pause it with [`InferenceEndpoint.pause`] or scale it to zero with [`InferenceEndpoint.scale_to_zero`].
This is an alias for [`HfApi.delete_inference_endpoint`].
"""
self._api.delete_inference_endpoint(name=self.name, namespace=self.namespace, token=self._token) # type: ignore [arg-type]
def _populate_from_raw(self) -> None:
"""Populate fields from raw dictionary.
Called in __post_init__ + each time the Inference Endpoint is updated.
"""
# Repr fields
self.name = self.raw["name"]
self.repository = self.raw["model"]["repository"]
self.status = self.raw["status"]["state"]
self.url = self.raw["status"].get("url")
# Other fields
self.framework = self.raw["model"]["framework"]
self.revision = self.raw["model"]["revision"]
self.task = self.raw["model"]["task"]
self.created_at = parse_datetime(self.raw["status"]["createdAt"])
self.updated_at = parse_datetime(self.raw["status"]["updatedAt"])
self.type = self.raw["type"]
| class_definition | 910 | 16,749 | 0 | /Users/nielsrogge/Documents/python_projecten/huggingface_hub/src/huggingface_hub/_inference_endpoints.py | null | 48 |
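# Hedged usage sketch for the InferenceEndpoint methods above. Assumes an
# existing Inference Endpoint named "my-endpoint" (a placeholder) and a valid
# saved token; `get_inference_endpoint` is the public helper that returns an
# InferenceEndpoint instance.
from huggingface_hub import get_inference_endpoint

endpoint = get_inference_endpoint("my-endpoint")

# A paused endpoint is not billed and stays down until resumed explicitly.
endpoint.pause()
endpoint.resume(running_ok=True)  # don't raise if it is already running

# An endpoint scaled to zero is not billed either, but it restarts by itself
# (with a cold start delay) on the next request; no manual resume needed.
endpoint.scale_to_zero()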
class LastCommitInfo(dict):
oid: str
title: str
date: datetime
def __post_init__(self): # hack to make LastCommitInfo backward compatible
self.update(asdict(self))
| class_definition | 9,657 | 9,846 | 0 | /Users/nielsrogge/Documents/python_projecten/huggingface_hub/src/huggingface_hub/hf_api.py | null | 49 |
class BlobLfsInfo(dict):
size: int
sha256: str
pointer_size: int
def __post_init__(self): # hack to make BlobLfsInfo backward compatible
self.update(asdict(self))
| class_definition | 9,860 | 10,048 | 0 | /Users/nielsrogge/Documents/python_projecten/huggingface_hub/src/huggingface_hub/hf_api.py | null | 50 |
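# Sketch of what the __post_init__ "hack" above buys: these records subclass
# dict, so fields are readable both as attributes and as keys (backward
# compatibility with older dict-based returns). This uses the installed
# library class, where the @dataclass decorator (stripped by the
# class-definition extraction) is applied.
from huggingface_hub.hf_api import BlobLfsInfo

info = BlobLfsInfo(size=1024, sha256="abc123", pointer_size=126)
assert info.size == 1024 and info["size"] == 1024  # both access styles work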
class BlobSecurityInfo(dict):
safe: bool # duplicate information with "status" field, keeping it for backward compatibility
status: str
av_scan: Optional[Dict]
pickle_import_scan: Optional[Dict]
def __post_init__(self): # hack to make BlobSecurityInfo backward compatible
self.update(asdict(self))
| class_definition | 10,062 | 10,390 | 0 | /Users/nielsrogge/Documents/python_projecten/huggingface_hub/src/huggingface_hub/hf_api.py | null | 51 |
class TransformersInfo(dict):
auto_model: str
custom_class: Optional[str] = None
# possible `pipeline_tag` values: https://github.com/huggingface/huggingface.js/blob/3ee32554b8620644a6287e786b2a83bf5caf559c/packages/tasks/src/pipelines.ts#L72
pipeline_tag: Optional[str] = None
processor: Optional[str] = None
def __post_init__(self): # hack to make TransformersInfo backward compatible
self.update(asdict(self))
| class_definition | 10,404 | 10,850 | 0 | /Users/nielsrogge/Documents/python_projecten/huggingface_hub/src/huggingface_hub/hf_api.py | null | 52 |
class SafeTensorsInfo(dict):
parameters: Dict[str, int]
total: int
def __post_init__(self): # hack to make SafeTensorsInfo backward compatible
self.update(asdict(self))
| class_definition | 10,864 | 11,054 | 0 | /Users/nielsrogge/Documents/python_projecten/huggingface_hub/src/huggingface_hub/hf_api.py | null | 53 |
class CommitInfo(str):
"""Data structure containing information about a newly created commit.
Returned by any method that creates a commit on the Hub: [`create_commit`], [`upload_file`], [`upload_folder`],
[`delete_file`], [`delete_folder`]. It inherits from `str` for backward compatibility but using methods specific
to `str` is deprecated.
Attributes:
commit_url (`str`):
Url where to find the commit.
commit_message (`str`):
The summary (first line) of the commit that has been created.
commit_description (`str`):
Description of the commit that has been created. Can be empty.
oid (`str`):
Commit hash id. Example: `"91c54ad1727ee830252e457677f467be0bfd8a57"`.
pr_url (`str`, *optional*):
Url to the PR that has been created, if any. Populated when `create_pr=True`
is passed.
pr_revision (`str`, *optional*):
Revision of the PR that has been created, if any. Populated when
`create_pr=True` is passed. Example: `"refs/pr/1"`.
pr_num (`int`, *optional*):
Number of the PR discussion that has been created, if any. Populated when
`create_pr=True` is passed. Can be passed as `discussion_num` in
[`get_discussion_details`]. Example: `1`.
repo_url (`RepoUrl`):
Repo URL of the commit containing info like repo_id, repo_type, etc.
_url (`str`, *optional*):
Legacy url for `str` compatibility. Can be the url to the uploaded file on the Hub (if returned by
[`upload_file`]), to the uploaded folder on the Hub (if returned by [`upload_folder`]) or to the commit on
the Hub (if returned by [`create_commit`]). Defaults to `commit_url`. It is deprecated to use this
attribute. Please use `commit_url` instead.
"""
commit_url: str
commit_message: str
commit_description: str
oid: str
pr_url: Optional[str] = None
# Computed from `commit_url` in `__post_init__`
repo_url: RepoUrl = field(init=False)
# Computed from `pr_url` in `__post_init__`
pr_revision: Optional[str] = field(init=False)
pr_num: Optional[int] = field(init=False)
# legacy url for `str` compatibility (ex: url to uploaded file, url to uploaded folder, url to PR, etc.)
_url: str = field(repr=False, default=None) # type: ignore # defaults to `commit_url`
def __new__(cls, *args, commit_url: str, _url: Optional[str] = None, **kwargs):
return str.__new__(cls, _url or commit_url)
def __post_init__(self):
"""Populate pr-related fields after initialization.
See https://docs.python.org/3.10/library/dataclasses.html#post-init-processing.
"""
# Repo info
self.repo_url = RepoUrl(self.commit_url.split("/commit/")[0])
# PR info
if self.pr_url is not None:
self.pr_revision = _parse_revision_from_pr_url(self.pr_url)
self.pr_num = int(self.pr_revision.split("/")[-1])
else:
self.pr_revision = None
self.pr_num = None
| class_definition | 11,068 | 14,227 | 0 | /Users/nielsrogge/Documents/python_projecten/huggingface_hub/src/huggingface_hub/hf_api.py | null | 54 |
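# Illustrative construction of CommitInfo with made-up values, showing the
# post-init parsing above: repo_url derives from commit_url, and
# pr_revision/pr_num derive from pr_url when a PR was created.
from huggingface_hub import CommitInfo

info = CommitInfo(
    commit_url="https://huggingface.co/Wauplin/dummy_model/commit/91c54ad",
    commit_message="Upload weights",
    commit_description="",
    oid="91c54ad",
    pr_url="https://huggingface.co/Wauplin/dummy_model/discussions/1",
)
assert info.repo_url == "https://huggingface.co/Wauplin/dummy_model"
assert info.pr_revision == "refs/pr/1" and info.pr_num == 1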
class AccessRequest:
"""Data structure containing information about a user access request.
Attributes:
username (`str`):
Username of the user who requested access.
fullname (`str`):
Fullname of the user who requested access.
email (`Optional[str]`):
Email of the user who requested access.
Can only be `None` in the /accepted list if the user was granted access manually.
timestamp (`datetime`):
Timestamp of the request.
status (`Literal["pending", "accepted", "rejected"]`):
Status of the request. Can be one of `["pending", "accepted", "rejected"]`.
fields (`Dict[str, Any]`, *optional*):
Additional fields filled by the user in the gate form.
"""
username: str
fullname: str
email: Optional[str]
timestamp: datetime
status: Literal["pending", "accepted", "rejected"]
# Additional fields filled by the user in the gate form
fields: Optional[Dict[str, Any]] = None
| class_definition | 14,241 | 15,282 | 0 | /Users/nielsrogge/Documents/python_projecten/huggingface_hub/src/huggingface_hub/hf_api.py | null | 55 |
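# Hedged sketch: AccessRequest objects are returned by the gated-repo
# management methods (available in recent huggingface_hub versions). The repo
# id is a placeholder and admin rights on the gated repo are required.
from huggingface_hub import HfApi

api = HfApi()
for request in api.list_pending_access_requests("my-org/my-gated-model"):
    print(request.username, request.email, request.timestamp, request.status)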
class WebhookWatchedItem:
"""Data structure containing information about the items watched by a webhook.
Attributes:
type (`Literal["dataset", "model", "org", "space", "user"]`):
Type of the item to be watched. Can be one of `["dataset", "model", "org", "space", "user"]`.
name (`str`):
Name of the item to be watched. Can be the username, organization name, model name, dataset name or space name.
"""
type: Literal["dataset", "model", "org", "space", "user"]
name: str
| class_definition | 15,296 | 15,828 | 0 | /Users/nielsrogge/Documents/python_projecten/huggingface_hub/src/huggingface_hub/hf_api.py | null | 56 |
class WebhookInfo:
"""Data structure containing information about a webhook.
Attributes:
id (`str`):
ID of the webhook.
url (`str`):
URL of the webhook.
watched (`List[WebhookWatchedItem]`):
List of items watched by the webhook, see [`WebhookWatchedItem`].
domains (`List[WEBHOOK_DOMAIN_T]`):
List of domains the webhook is watching. Can be one of `["repo", "discussions"]`.
secret (`str`, *optional*):
Secret of the webhook.
disabled (`bool`):
Whether the webhook is disabled or not.
"""
id: str
url: str
watched: List[WebhookWatchedItem]
domains: List[constants.WEBHOOK_DOMAIN_T]
secret: Optional[str]
disabled: bool
| class_definition | 15,842 | 16,618 | 0 | /Users/nielsrogge/Documents/python_projecten/huggingface_hub/src/huggingface_hub/hf_api.py | null | 57 |
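# Hedged sketch tying WebhookWatchedItem and WebhookInfo together: creating a
# webhook returns a WebhookInfo. URL and secret are placeholders; the webhooks
# API requires a recent huggingface_hub version.
from huggingface_hub import HfApi
from huggingface_hub.hf_api import WebhookWatchedItem

api = HfApi()
webhook = api.create_webhook(
    url="https://example.com/webhook-listener",
    watched=[WebhookWatchedItem(type="user", name="julien-c")],
    domains=["repo"],
    secret="my-secret",
)
print(webhook.id, webhook.disabled)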
class RepoUrl(str):
"""Subclass of `str` describing a repo URL on the Hub.
`RepoUrl` is returned by `HfApi.create_repo`. It inherits from `str` for backward
compatibility. At initialization, the URL is parsed to populate properties:
- endpoint (`str`)
- namespace (`Optional[str]`)
- repo_name (`str`)
- repo_id (`str`)
- repo_type (`Literal["model", "dataset", "space"]`)
- url (`str`)
Args:
url (`Any`):
String value of the repo url.
endpoint (`str`, *optional*):
Endpoint of the Hub. Defaults to <https://huggingface.co>.
Example:
```py
>>> RepoUrl('https://huggingface.co/gpt2')
RepoUrl('https://huggingface.co/gpt2', endpoint='https://huggingface.co', repo_type='model', repo_id='gpt2')
>>> RepoUrl('https://hub-ci.huggingface.co/datasets/dummy_user/dummy_dataset', endpoint='https://hub-ci.huggingface.co')
RepoUrl('https://hub-ci.huggingface.co/datasets/dummy_user/dummy_dataset', endpoint='https://hub-ci.huggingface.co', repo_type='dataset', repo_id='dummy_user/dummy_dataset')
>>> RepoUrl('hf://datasets/my-user/my-dataset')
RepoUrl('hf://datasets/my-user/my-dataset', endpoint='https://huggingface.co', repo_type='dataset', repo_id='my-user/my-dataset')
>>> HfApi().create_repo("dummy_model")
RepoUrl('https://huggingface.co/Wauplin/dummy_model', endpoint='https://huggingface.co', repo_type='model', repo_id='Wauplin/dummy_model')
```
Raises:
[`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError)
If URL cannot be parsed.
[`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError)
If `repo_type` is unknown.
"""
def __new__(cls, url: Any, endpoint: Optional[str] = None):
url = fix_hf_endpoint_in_url(url, endpoint=endpoint)
return super(RepoUrl, cls).__new__(cls, url)
def __init__(self, url: Any, endpoint: Optional[str] = None) -> None:
super().__init__()
# Parse URL
self.endpoint = endpoint or constants.ENDPOINT
repo_type, namespace, repo_name = repo_type_and_id_from_hf_id(self, hub_url=self.endpoint)
# Populate fields
self.namespace = namespace
self.repo_name = repo_name
self.repo_id = repo_name if namespace is None else f"{namespace}/{repo_name}"
self.repo_type = repo_type or constants.REPO_TYPE_MODEL
self.url = str(self) # just in case it's needed
def __repr__(self) -> str:
return f"RepoUrl('{self}', endpoint='{self.endpoint}', repo_type='{self.repo_type}', repo_id='{self.repo_id}')" | class_definition | 16,621 | 19,267 | 0 | /Users/nielsrogge/Documents/python_projecten/huggingface_hub/src/huggingface_hub/hf_api.py | null | 58 |
class RepoSibling:
"""
Contains basic information about a repo file inside a repo on the Hub.
<Tip>
All attributes of this class are optional except `rfilename`. This is because only the file names are returned when
listing repositories on the Hub (with [`list_models`], [`list_datasets`] or [`list_spaces`]). If you need more
information like file size, blob id or lfs details, you must request them specifically from one repo at a time
(using [`model_info`], [`dataset_info`] or [`space_info`]) as it adds more constraints on the backend server to
retrieve these.
</Tip>
Attributes:
rfilename (str):
file name, relative to the repo root.
size (`int`, *optional*):
The file's size, in bytes. This attribute is defined when `files_metadata` argument of [`repo_info`] is set
to `True`. It's `None` otherwise.
blob_id (`str`, *optional*):
The file's git OID. This attribute is defined when `files_metadata` argument of [`repo_info`] is set to
`True`. It's `None` otherwise.
lfs (`BlobLfsInfo`, *optional*):
The file's LFS metadata. This attribute is defined when `files_metadata` argument of [`repo_info`] is set to
`True` and the file is stored with Git LFS. It's `None` otherwise.
"""
rfilename: str
size: Optional[int] = None
blob_id: Optional[str] = None
lfs: Optional[BlobLfsInfo] = None
| class_definition | 19,281 | 20,751 | 0 | /Users/nielsrogge/Documents/python_projecten/huggingface_hub/src/huggingface_hub/hf_api.py | null | 59 |
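# Sketch of the Tip above in code: sibling size/blob_id/lfs are only populated
# when files_metadata=True is passed to a *_info call. "gpt2" is simply a
# well-known public repo used as an example.
from huggingface_hub import HfApi

info = HfApi().model_info("gpt2", files_metadata=True)
for sibling in info.siblings or []:
    print(sibling.rfilename, sibling.size, sibling.lfs)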
class RepoFile:
"""
Contains information about a file on the Hub.
Attributes:
path (str):
file path relative to the repo root.
size (`int`):
The file's size, in bytes.
blob_id (`str`):
The file's git OID.
lfs (`BlobLfsInfo`):
The file's LFS metadata.
last_commit (`LastCommitInfo`, *optional*):
The file's last commit metadata. Only defined if [`list_repo_tree`] and [`get_paths_info`]
are called with `expand=True`.
security (`BlobSecurityInfo`, *optional*):
The file's security scan metadata. Only defined if [`list_repo_tree`] and [`get_paths_info`]
are called with `expand=True`.
"""
path: str
size: int
blob_id: str
lfs: Optional[BlobLfsInfo] = None
last_commit: Optional[LastCommitInfo] = None
security: Optional[BlobSecurityInfo] = None
def __init__(self, **kwargs):
self.path = kwargs.pop("path")
self.size = kwargs.pop("size")
self.blob_id = kwargs.pop("oid")
lfs = kwargs.pop("lfs", None)
if lfs is not None:
lfs = BlobLfsInfo(size=lfs["size"], sha256=lfs["oid"], pointer_size=lfs["pointerSize"])
self.lfs = lfs
last_commit = kwargs.pop("lastCommit", None) or kwargs.pop("last_commit", None)
if last_commit is not None:
last_commit = LastCommitInfo(
oid=last_commit["id"], title=last_commit["title"], date=parse_datetime(last_commit["date"])
)
self.last_commit = last_commit
security = kwargs.pop("securityFileStatus", None)
if security is not None:
safe = security["status"] == "safe"
security = BlobSecurityInfo(
safe=safe,
status=security["status"],
av_scan=security["avScan"],
pickle_import_scan=security["pickleImportScan"],
)
self.security = security
# backwards compatibility
self.rfilename = self.path
self.lastCommit = self.last_commit
| class_definition | 20,765 | 22,883 | 0 | /Users/nielsrogge/Documents/python_projecten/huggingface_hub/src/huggingface_hub/hf_api.py | null | 60 |
class RepoFolder:
"""
Contains information about a folder on the Hub.
Attributes:
path (str):
folder path relative to the repo root.
tree_id (`str`):
The folder's git OID.
last_commit (`LastCommitInfo`, *optional*):
The folder's last commit metadata. Only defined if [`list_repo_tree`] and [`get_paths_info`]
are called with `expand=True`.
"""
path: str
tree_id: str
last_commit: Optional[LastCommitInfo] = None
def __init__(self, **kwargs):
self.path = kwargs.pop("path")
self.tree_id = kwargs.pop("oid")
last_commit = kwargs.pop("lastCommit", None) or kwargs.pop("last_commit", None)
if last_commit is not None:
last_commit = LastCommitInfo(
oid=last_commit["id"], title=last_commit["title"], date=parse_datetime(last_commit["date"])
)
self.last_commit = last_commit
| class_definition | 22,897 | 23,852 | 0 | /Users/nielsrogge/Documents/python_projecten/huggingface_hub/src/huggingface_hub/hf_api.py | null | 61 |
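# Sketch of where RepoFile and RepoFolder instances come from: list_repo_tree
# yields both types, and expand=True fills last_commit (and security for
# files) as the attribute docs above state.
from huggingface_hub import HfApi
from huggingface_hub.hf_api import RepoFile

for entry in HfApi().list_repo_tree("gpt2", expand=True):
    kind = "file" if isinstance(entry, RepoFile) else "folder"
    print(kind, entry.path)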
class ModelInfo:
"""
Contains information about a model on the Hub.
<Tip>
Most attributes of this class are optional. This is because the data returned by the Hub depends on the query made.
In general, the more specific the query, the more information is returned. On the contrary, when listing models
using [`list_models`] only a subset of the attributes are returned.
</Tip>
Attributes:
id (`str`):
ID of model.
author (`str`, *optional*):
Author of the model.
sha (`str`, *optional*):
Repo SHA at this particular revision.
created_at (`datetime`, *optional*):
Date of creation of the repo on the Hub. Note that the lowest value is `2022-03-02T23:29:04.000Z`,
corresponding to the date when we began to store creation dates.
last_modified (`datetime`, *optional*):
Date of last commit to the repo.
private (`bool`):
Is the repo private.
disabled (`bool`, *optional*):
Is the repo disabled.
downloads (`int`):
Number of downloads of the model over the last 30 days.
downloads_all_time (`int`):
Cumulative number of downloads of the model since its creation.
gated (`Literal["auto", "manual", False]`, *optional*):
Is the repo gated.
If so, whether there is manual or automatic approval.
gguf (`Dict`, *optional*):
GGUF information of the model.
inference (`Literal["cold", "frozen", "warm"]`, *optional*):
Status of the model on the inference API.
Warm models are available for immediate use. Cold models will be loaded on first inference call.
Frozen models are not available in the Inference API.
likes (`int`):
Number of likes of the model.
library_name (`str`, *optional*):
Library associated with the model.
tags (`List[str]`):
List of tags of the model. Compared to `card_data.tags`, contains extra tags computed by the Hub
(e.g. supported libraries, model's arXiv).
pipeline_tag (`str`, *optional*):
Pipeline tag associated with the model.
mask_token (`str`, *optional*):
Mask token used by the model.
widget_data (`Any`, *optional*):
Widget data associated with the model.
model_index (`Dict`, *optional*):
Model index for evaluation.
config (`Dict`, *optional*):
Model configuration.
transformers_info (`TransformersInfo`, *optional*):
Transformers-specific info (auto class, processor, etc.) associated with the model.
trending_score (`int`, *optional*):
Trending score of the model.
card_data (`ModelCardData`, *optional*):
Model Card Metadata as a [`huggingface_hub.repocard_data.ModelCardData`] object.
siblings (`List[RepoSibling]`):
List of [`huggingface_hub.hf_api.RepoSibling`] objects that constitute the model.
spaces (`List[str]`, *optional*):
List of spaces using the model.
safetensors (`SafeTensorsInfo`, *optional*):
Model's safetensors information.
security_repo_status (`Dict`, *optional*):
Model's security scan status.
"""
id: str
author: Optional[str]
sha: Optional[str]
created_at: Optional[datetime]
last_modified: Optional[datetime]
private: Optional[bool]
disabled: Optional[bool]
downloads: Optional[int]
downloads_all_time: Optional[int]
gated: Optional[Literal["auto", "manual", False]]
gguf: Optional[Dict]
inference: Optional[Literal["warm", "cold", "frozen"]]
likes: Optional[int]
library_name: Optional[str]
tags: Optional[List[str]]
pipeline_tag: Optional[str]
mask_token: Optional[str]
card_data: Optional[ModelCardData]
widget_data: Optional[Any]
model_index: Optional[Dict]
config: Optional[Dict]
transformers_info: Optional[TransformersInfo]
trending_score: Optional[int]
siblings: Optional[List[RepoSibling]]
spaces: Optional[List[str]]
safetensors: Optional[SafeTensorsInfo]
security_repo_status: Optional[Dict]
def __init__(self, **kwargs):
self.id = kwargs.pop("id")
self.author = kwargs.pop("author", None)
self.sha = kwargs.pop("sha", None)
last_modified = kwargs.pop("lastModified", None) or kwargs.pop("last_modified", None)
self.last_modified = parse_datetime(last_modified) if last_modified else None
created_at = kwargs.pop("createdAt", None) or kwargs.pop("created_at", None)
self.created_at = parse_datetime(created_at) if created_at else None
self.private = kwargs.pop("private", None)
self.gated = kwargs.pop("gated", None)
self.disabled = kwargs.pop("disabled", None)
self.downloads = kwargs.pop("downloads", None)
self.downloads_all_time = kwargs.pop("downloadsAllTime", None)
self.likes = kwargs.pop("likes", None)
self.library_name = kwargs.pop("library_name", None)
self.gguf = kwargs.pop("gguf", None)
self.inference = kwargs.pop("inference", None)
self.tags = kwargs.pop("tags", None)
self.pipeline_tag = kwargs.pop("pipeline_tag", None)
self.mask_token = kwargs.pop("mask_token", None)
self.trending_score = kwargs.pop("trendingScore", None)
card_data = kwargs.pop("cardData", None) or kwargs.pop("card_data", None)
self.card_data = (
ModelCardData(**card_data, ignore_metadata_errors=True) if isinstance(card_data, dict) else card_data
)
self.widget_data = kwargs.pop("widgetData", None)
self.model_index = kwargs.pop("model-index", None) or kwargs.pop("model_index", None)
self.config = kwargs.pop("config", None)
transformers_info = kwargs.pop("transformersInfo", None) or kwargs.pop("transformers_info", None)
self.transformers_info = TransformersInfo(**transformers_info) if transformers_info else None
siblings = kwargs.pop("siblings", None)
self.siblings = (
[
RepoSibling(
rfilename=sibling["rfilename"],
size=sibling.get("size"),
blob_id=sibling.get("blobId"),
lfs=(
BlobLfsInfo(
size=sibling["lfs"]["size"],
sha256=sibling["lfs"]["sha256"],
pointer_size=sibling["lfs"]["pointerSize"],
)
if sibling.get("lfs")
else None
),
)
for sibling in siblings
]
if siblings is not None
else None
)
self.spaces = kwargs.pop("spaces", None)
safetensors = kwargs.pop("safetensors", None)
self.safetensors = (
SafeTensorsInfo(
parameters=safetensors["parameters"],
total=safetensors["total"],
)
if safetensors
else None
)
self.security_repo_status = kwargs.pop("securityRepoStatus", None)
# backwards compatibility
self.lastModified = self.last_modified
self.cardData = self.card_data
self.transformersInfo = self.transformers_info
self.__dict__.update(**kwargs)
| class_definition | 23,866 | 31,439 | 0 | /Users/nielsrogge/Documents/python_projecten/huggingface_hub/src/huggingface_hub/hf_api.py | null | 62 |
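# Illustrative sketch of the tolerant constructor above: it accepts the Hub's
# camelCase payload keys, mirrors them to snake_case attributes, and keeps
# legacy aliases (lastModified, cardData, ...). Values here are made up.
from huggingface_hub import ModelInfo

info = ModelInfo(
    id="gpt2",
    downloads=42,
    downloadsAllTime=1_000,
    createdAt="2022-03-02T23:29:04.000Z",
    tags=["pytorch", "text-generation"],
)
assert info.downloads_all_time == 1_000
assert info.created_at is not None and info.last_modified is None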
class DatasetInfo:
"""
Contains information about a dataset on the Hub.
<Tip>
Most attributes of this class are optional. This is because the data returned by the Hub depends on the query made.
In general, the more specific the query, the more information is returned. On the contrary, when listing datasets
using [`list_datasets`] only a subset of the attributes are returned.
</Tip>
Attributes:
id (`str`):
ID of dataset.
author (`str`, *optional*):
Author of the dataset.
sha (`str`, *optional*):
Repo SHA at this particular revision.
created_at (`datetime`, *optional*):
Date of creation of the repo on the Hub. Note that the lowest value is `2022-03-02T23:29:04.000Z`,
corresponding to the date when we began to store creation dates.
last_modified (`datetime`, *optional*):
Date of last commit to the repo.
private (`bool`):
Is the repo private.
disabled (`bool`, *optional*):
Is the repo disabled.
gated (`Literal["auto", "manual", False]`, *optional*):
Is the repo gated.
If so, whether there is manual or automatic approval.
downloads (`int`):
Number of downloads of the dataset over the last 30 days.
downloads_all_time (`int`):
Cumulative number of downloads of the dataset since its creation.
likes (`int`):
Number of likes of the dataset.
tags (`List[str]`):
List of tags of the dataset.
card_data (`DatasetCardData`, *optional*):
Model Card Metadata as a [`huggingface_hub.repocard_data.DatasetCardData`] object.
siblings (`List[RepoSibling]`):
List of [`huggingface_hub.hf_api.RepoSibling`] objects that constitute the dataset.
paperswithcode_id (`str`, *optional*):
Papers with code ID of the dataset.
trending_score (`int`, *optional*):
Trending score of the dataset.
"""
id: str
author: Optional[str]
sha: Optional[str]
created_at: Optional[datetime]
last_modified: Optional[datetime]
private: Optional[bool]
gated: Optional[Literal["auto", "manual", False]]
disabled: Optional[bool]
downloads: Optional[int]
downloads_all_time: Optional[int]
likes: Optional[int]
paperswithcode_id: Optional[str]
tags: Optional[List[str]]
trending_score: Optional[int]
card_data: Optional[DatasetCardData]
siblings: Optional[List[RepoSibling]]
def __init__(self, **kwargs):
self.id = kwargs.pop("id")
self.author = kwargs.pop("author", None)
self.sha = kwargs.pop("sha", None)
created_at = kwargs.pop("createdAt", None) or kwargs.pop("created_at", None)
self.created_at = parse_datetime(created_at) if created_at else None
last_modified = kwargs.pop("lastModified", None) or kwargs.pop("last_modified", None)
self.last_modified = parse_datetime(last_modified) if last_modified else None
self.private = kwargs.pop("private", None)
self.gated = kwargs.pop("gated", None)
self.disabled = kwargs.pop("disabled", None)
self.downloads = kwargs.pop("downloads", None)
self.downloads_all_time = kwargs.pop("downloadsAllTime", None)
self.likes = kwargs.pop("likes", None)
self.paperswithcode_id = kwargs.pop("paperswithcode_id", None)
self.tags = kwargs.pop("tags", None)
self.trending_score = kwargs.pop("trendingScore", None)
card_data = kwargs.pop("cardData", None) or kwargs.pop("card_data", None)
self.card_data = (
DatasetCardData(**card_data, ignore_metadata_errors=True) if isinstance(card_data, dict) else card_data
)
siblings = kwargs.pop("siblings", None)
self.siblings = (
[
RepoSibling(
rfilename=sibling["rfilename"],
size=sibling.get("size"),
blob_id=sibling.get("blobId"),
lfs=(
BlobLfsInfo(
size=sibling["lfs"]["size"],
sha256=sibling["lfs"]["sha256"],
pointer_size=sibling["lfs"]["pointerSize"],
)
if sibling.get("lfs")
else None
),
)
for sibling in siblings
]
if siblings is not None
else None
)
# backwards compatibility
self.lastModified = self.last_modified
self.cardData = self.card_data
self.__dict__.update(**kwargs)
| class_definition | 31,453 | 36,212 | 0 | /Users/nielsrogge/Documents/python_projecten/huggingface_hub/src/huggingface_hub/hf_api.py | null | 63 |
class SpaceInfo:
"""
Contains information about a Space on the Hub.
<Tip>
Most attributes of this class are optional. This is because the data returned by the Hub depends on the query made.
In general, the more specific the query, the more information is returned. On the contrary, when listing spaces
using [`list_spaces`] only a subset of the attributes are returned.
</Tip>
Attributes:
id (`str`):
ID of the Space.
author (`str`, *optional*):
Author of the Space.
sha (`str`, *optional*):
Repo SHA at this particular revision.
created_at (`datetime`, *optional*):
Date of creation of the repo on the Hub. Note that the lowest value is `2022-03-02T23:29:04.000Z`,
corresponding to the date when we began to store creation dates.
last_modified (`datetime`, *optional*):
Date of last commit to the repo.
private (`bool`):
Is the repo private.
gated (`Literal["auto", "manual", False]`, *optional*):
Is the repo gated.
If so, whether there is manual or automatic approval.
disabled (`bool`, *optional*):
Is the Space disabled.
host (`str`, *optional*):
Host URL of the Space.
subdomain (`str`, *optional*):
Subdomain of the Space.
likes (`int`):
Number of likes of the Space.
tags (`List[str]`):
List of tags of the Space.
siblings (`List[RepoSibling]`):
List of [`huggingface_hub.hf_api.RepoSibling`] objects that constitute the Space.
card_data (`SpaceCardData`, *optional*):
Space Card Metadata as a [`huggingface_hub.repocard_data.SpaceCardData`] object.
runtime (`SpaceRuntime`, *optional*):
Space runtime information as a [`huggingface_hub.hf_api.SpaceRuntime`] object.
sdk (`str`, *optional*):
SDK used by the Space.
models (`List[str]`, *optional*):
List of models used by the Space.
datasets (`List[str]`, *optional*):
List of datasets used by the Space.
trending_score (`int`, *optional*):
Trending score of the Space.
"""
id: str
author: Optional[str]
sha: Optional[str]
created_at: Optional[datetime]
last_modified: Optional[datetime]
private: Optional[bool]
gated: Optional[Literal["auto", "manual", False]]
disabled: Optional[bool]
host: Optional[str]
subdomain: Optional[str]
likes: Optional[int]
sdk: Optional[str]
tags: Optional[List[str]]
siblings: Optional[List[RepoSibling]]
trending_score: Optional[int]
card_data: Optional[SpaceCardData]
runtime: Optional[SpaceRuntime]
models: Optional[List[str]]
datasets: Optional[List[str]]
def __init__(self, **kwargs):
self.id = kwargs.pop("id")
self.author = kwargs.pop("author", None)
self.sha = kwargs.pop("sha", None)
created_at = kwargs.pop("createdAt", None) or kwargs.pop("created_at", None)
self.created_at = parse_datetime(created_at) if created_at else None
last_modified = kwargs.pop("lastModified", None) or kwargs.pop("last_modified", None)
self.last_modified = parse_datetime(last_modified) if last_modified else None
self.private = kwargs.pop("private", None)
self.gated = kwargs.pop("gated", None)
self.disabled = kwargs.pop("disabled", None)
self.host = kwargs.pop("host", None)
self.subdomain = kwargs.pop("subdomain", None)
self.likes = kwargs.pop("likes", None)
self.sdk = kwargs.pop("sdk", None)
self.tags = kwargs.pop("tags", None)
self.trending_score = kwargs.pop("trendingScore", None)
card_data = kwargs.pop("cardData", None) or kwargs.pop("card_data", None)
self.card_data = (
SpaceCardData(**card_data, ignore_metadata_errors=True) if isinstance(card_data, dict) else card_data
)
siblings = kwargs.pop("siblings", None)
self.siblings = (
[
RepoSibling(
rfilename=sibling["rfilename"],
size=sibling.get("size"),
blob_id=sibling.get("blobId"),
lfs=(
BlobLfsInfo(
size=sibling["lfs"]["size"],
sha256=sibling["lfs"]["sha256"],
pointer_size=sibling["lfs"]["pointerSize"],
)
if sibling.get("lfs")
else None
),
)
for sibling in siblings
]
if siblings is not None
else None
)
runtime = kwargs.pop("runtime", None)
self.runtime = SpaceRuntime(runtime) if runtime else None
self.models = kwargs.pop("models", None)
self.datasets = kwargs.pop("datasets", None)
# backwards compatibility
self.lastModified = self.last_modified
self.cardData = self.card_data
self.__dict__.update(**kwargs)
| class_definition | 36,226 | 41,445 | 0 | /Users/nielsrogge/Documents/python_projecten/huggingface_hub/src/huggingface_hub/hf_api.py | null | 64 |
class CollectionItem:
"""
Contains information about an item of a Collection (model, dataset, Space or paper).
Attributes:
item_object_id (`str`):
Unique ID of the item in the collection.
item_id (`str`):
ID of the underlying object on the Hub. Can be either a repo_id or a paper id
e.g. `"jbilcke-hf/ai-comic-factory"`, `"2307.09288"`.
item_type (`str`):
Type of the underlying object. Can be one of `"model"`, `"dataset"`, `"space"` or `"paper"`.
position (`int`):
Position of the item in the collection.
note (`str`, *optional*):
Note associated with the item, as plain text.
"""
item_object_id: str # id in database
item_id: str # repo_id or paper id
item_type: str
position: int
note: Optional[str] = None
def __init__(
self, _id: str, id: str, type: CollectionItemType_T, position: int, note: Optional[Dict] = None, **kwargs
) -> None:
self.item_object_id: str = _id # id in database
self.item_id: str = id # repo_id or paper id
self.item_type: CollectionItemType_T = type
self.position: int = position
self.note: Optional[str] = note["text"] if note is not None else None
| class_definition | 41,459 | 42,738 | 0 | /Users/nielsrogge/Documents/python_projecten/huggingface_hub/src/huggingface_hub/hf_api.py | null | 65 |
class Collection:
"""
Contains information about a Collection on the Hub.
Attributes:
slug (`str`):
Slug of the collection. E.g. `"TheBloke/recent-models-64f9a55bb3115b4f513ec026"`.
title (`str`):
Title of the collection. E.g. `"Recent models"`.
owner (`str`):
Owner of the collection. E.g. `"TheBloke"`.
items (`List[CollectionItem]`):
List of items in the collection.
last_updated (`datetime`):
Date of the last update of the collection.
position (`int`):
Position of the collection in the list of collections of the owner.
private (`bool`):
Whether the collection is private or not.
theme (`str`):
Theme of the collection. E.g. `"green"`.
upvotes (`int`):
Number of upvotes of the collection.
description (`str`, *optional*):
Description of the collection, as plain text.
url (`str`):
(property) URL of the collection on the Hub.
"""
slug: str
title: str
owner: str
items: List[CollectionItem]
last_updated: datetime
position: int
private: bool
theme: str
upvotes: int
description: Optional[str] = None
def __init__(self, **kwargs) -> None:
self.slug = kwargs.pop("slug")
self.title = kwargs.pop("title")
self.owner = kwargs.pop("owner")
self.items = [CollectionItem(**item) for item in kwargs.pop("items")]
self.last_updated = parse_datetime(kwargs.pop("lastUpdated"))
self.position = kwargs.pop("position")
self.private = kwargs.pop("private")
self.theme = kwargs.pop("theme")
self.upvotes = kwargs.pop("upvotes")
self.description = kwargs.pop("description", None)
endpoint = kwargs.pop("endpoint", None)
if endpoint is None:
endpoint = constants.ENDPOINT
self._url = f"{endpoint}/collections/{self.slug}"
@property
def url(self) -> str:
"""Returns the URL of the collection on the Hub."""
return self._url
| class_definition | 42,752 | 44,887 | 0 | /Users/nielsrogge/Documents/python_projecten/huggingface_hub/src/huggingface_hub/hf_api.py | null | 66 |
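# Hedged sketch: Collection and CollectionItem are normally built for you by
# HfApi.get_collection. The slug below is the example from the docstring above.
from huggingface_hub import HfApi

collection = HfApi().get_collection("TheBloke/recent-models-64f9a55bb3115b4f513ec026")
print(collection.title, collection.url)
for item in collection.items:
    print(item.position, item.item_type, item.item_id, item.note)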
class GitRefInfo:
"""
Contains information about a git reference for a repo on the Hub.
Attributes:
name (`str`):
Name of the reference (e.g. tag name or branch name).
ref (`str`):
Full git ref on the Hub (e.g. `"refs/heads/main"` or `"refs/tags/v1.0"`).
target_commit (`str`):
OID of the target commit for the ref (e.g. `"e7da7f221d5bf496a48136c0cd264e630fe9fcc8"`)
"""
name: str
ref: str
target_commit: str
| class_definition | 44,901 | 45,399 | 0 | /Users/nielsrogge/Documents/python_projecten/huggingface_hub/src/huggingface_hub/hf_api.py | null | 67 |
class GitRefs:
"""
Contains information about all git references for a repo on the Hub.
Object is returned by [`list_repo_refs`].
Attributes:
branches (`List[GitRefInfo]`):
A list of [`GitRefInfo`] containing information about branches on the repo.
converts (`List[GitRefInfo]`):
A list of [`GitRefInfo`] containing information about "convert" refs on the repo.
Converts are refs used (internally) to push preprocessed data in Dataset repos.
tags (`List[GitRefInfo]`):
A list of [`GitRefInfo`] containing information about tags on the repo.
pull_requests (`List[GitRefInfo]`, *optional*):
A list of [`GitRefInfo`] containing information about pull requests on the repo.
Only returned if `include_prs=True` is set.
"""
branches: List[GitRefInfo]
converts: List[GitRefInfo]
tags: List[GitRefInfo]
pull_requests: Optional[List[GitRefInfo]] = None
| class_definition | 45,413 | 46,399 | 0 | /Users/nielsrogge/Documents/python_projecten/huggingface_hub/src/huggingface_hub/hf_api.py | null | 68 |
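# Sketch: GitRefs is the return type of HfApi.list_repo_refs; pull request
# refs are only included when the include-PRs option documented above is set.
from huggingface_hub import HfApi

refs = HfApi().list_repo_refs("gpt2")
for branch in refs.branches:
    print(branch.name, branch.ref, branch.target_commit)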
class GitCommitInfo:
"""
Contains information about a git commit for a repo on the Hub. Check out [`list_repo_commits`] for more details.
Attributes:
commit_id (`str`):
OID of the commit (e.g. `"e7da7f221d5bf496a48136c0cd264e630fe9fcc8"`)
authors (`List[str]`):
List of authors of the commit.
created_at (`datetime`):
Datetime when the commit was created.
title (`str`):
Title of the commit. This is a free-text value entered by the authors.
message (`str`):
Description of the commit. This is a free-text value entered by the authors.
formatted_title (`str`):
Title of the commit formatted as HTML. Only returned if `formatted=True` is set.
formatted_message (`str`):
Description of the commit formatted as HTML. Only returned if `formatted=True` is set.
"""
commit_id: str
authors: List[str]
created_at: datetime
title: str
message: str
formatted_title: Optional[str]
formatted_message: Optional[str]
| class_definition | 46,413 | 47,503 | 0 | /Users/nielsrogge/Documents/python_projecten/huggingface_hub/src/huggingface_hub/hf_api.py | null | 69 |
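# Sketch: GitCommitInfo entries are produced by HfApi.list_repo_commits;
# formatted_title/formatted_message stay unset unless the formatted flag
# described above is requested.
from huggingface_hub import HfApi

for commit in HfApi().list_repo_commits("gpt2")[:3]:
    print(commit.commit_id[:8], commit.created_at, commit.title)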
class UserLikes:
"""
Contains information about a user's likes on the Hub.
Attributes:
user (`str`):
Name of the user for which we fetched the likes.
total (`int`):
Total number of likes.
datasets (`List[str]`):
List of datasets liked by the user (as repo_ids).
models (`List[str]`):
List of models liked by the user (as repo_ids).
spaces (`List[str]`):
List of spaces liked by the user (as repo_ids).
"""
# Metadata
user: str
total: int
# User likes
datasets: List[str]
models: List[str]
spaces: List[str]
| class_definition | 47,517 | 48,168 | 0 | /Users/nielsrogge/Documents/python_projecten/huggingface_hub/src/huggingface_hub/hf_api.py | null | 70 |
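# Sketch: UserLikes is returned by HfApi.list_liked_repos. The username is
# just an example; only publicly visible likes are listed.
from huggingface_hub import HfApi

likes = HfApi().list_liked_repos("julien-c")
print(likes.total, len(likes.models), len(likes.datasets), len(likes.spaces))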
class Organization:
"""
Contains information about an organization on the Hub.
Attributes:
avatar_url (`str`):
URL of the organization's avatar.
name (`str`):
Name of the organization on the Hub (unique).
fullname (`str`):
Organization's full name.
"""
avatar_url: str
name: str
fullname: str
def __init__(self, **kwargs) -> None:
self.avatar_url = kwargs.pop("avatarUrl", "")
self.name = kwargs.pop("name", "")
self.fullname = kwargs.pop("fullname", "")
# forward compatibility
self.__dict__.update(**kwargs)
| class_definition | 48,182 | 48,827 | 0 | /Users/nielsrogge/Documents/python_projecten/huggingface_hub/src/huggingface_hub/hf_api.py | null | 71 |
class User:
"""
Contains information about a user on the Hub.
Attributes:
username (`str`):
Name of the user on the Hub (unique).
fullname (`str`):
User's full name.
avatar_url (`str`):
URL of the user's avatar.
details (`str`, *optional*):
User's details.
is_following (`bool`, *optional*):
Whether the authenticated user is following this user.
is_pro (`bool`, *optional*):
Whether the user is a pro user.
num_models (`int`, *optional*):
Number of models created by the user.
num_datasets (`int`, *optional*):
Number of datasets created by the user.
num_spaces (`int`, *optional*):
Number of spaces created by the user.
num_discussions (`int`, *optional*):
Number of discussions initiated by the user.
num_papers (`int`, *optional*):
Number of papers authored by the user.
num_upvotes (`int`, *optional*):
Number of upvotes received by the user.
num_likes (`int`, *optional*):
Number of likes given by the user.
num_following (`int`, *optional*):
Number of users this user is following.
num_followers (`int`, *optional*):
Number of users following this user.
orgs (list of [`Organization`]):
List of organizations the user is part of.
"""
# Metadata
username: str
fullname: str
avatar_url: str
details: Optional[str] = None
is_following: Optional[bool] = None
is_pro: Optional[bool] = None
num_models: Optional[int] = None
num_datasets: Optional[int] = None
num_spaces: Optional[int] = None
num_discussions: Optional[int] = None
num_papers: Optional[int] = None
num_upvotes: Optional[int] = None
num_likes: Optional[int] = None
num_following: Optional[int] = None
num_followers: Optional[int] = None
orgs: List[Organization] = field(default_factory=list)
def __init__(self, **kwargs) -> None:
self.username = kwargs.pop("user", "")
self.fullname = kwargs.pop("fullname", "")
self.avatar_url = kwargs.pop("avatarUrl", "")
self.is_following = kwargs.pop("isFollowing", None)
self.is_pro = kwargs.pop("isPro", None)
self.details = kwargs.pop("details", None)
self.num_models = kwargs.pop("numModels", None)
self.num_datasets = kwargs.pop("numDatasets", None)
self.num_spaces = kwargs.pop("numSpaces", None)
self.num_discussions = kwargs.pop("numDiscussions", None)
self.num_papers = kwargs.pop("numPapers", None)
self.num_upvotes = kwargs.pop("numUpvotes", None)
self.num_likes = kwargs.pop("numLikes", None)
self.num_following = kwargs.pop("numFollowing", None)
self.num_followers = kwargs.pop("numFollowers", None)
self.user_type = kwargs.pop("type", None)
self.orgs = [Organization(**org) for org in kwargs.pop("orgs", [])]
# forward compatibility
self.__dict__.update(**kwargs)
| class_definition | 48,841 | 51,985 | 0 | /Users/nielsrogge/Documents/python_projecten/huggingface_hub/src/huggingface_hub/hf_api.py | null | 72 |
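# Hedged sketch: User objects (with Organization entries in .orgs) come from
# user-related endpoints such as HfApi.get_user_overview; availability of that
# method depends on your huggingface_hub version.
from huggingface_hub import HfApi

user = HfApi().get_user_overview("julien-c")
print(user.username, user.fullname, [org.name for org in user.orgs])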