|
To do so, call `push_to_hub` on the tool variable:

```python
tool.push_to_hub("hf-model-downloads")
```
|
You now have your code on the Hub! Let's take a look at the final step, which is to have the agent use it. |
|
### Having the agent use the tool
|
We now have our tool that lives on the Hub, which can be instantiated as follows (replace the user name with yours):
|
```python
from transformers import load_tool

tool = load_tool("lysandre/hf-model-downloads")
```
|
To use it in the agent, simply pass it in the `additional_tools` parameter of the agent initialization method:
|
```python
from transformers import HfAgent

agent = HfAgent(
    "https://api-inference.huggingface.co/models/bigcode/starcoder",
    additional_tools=[tool],
)

agent.run(
    "Can you read out loud the name of the model that has the most downloads in the 'text-to-video' task on the Hugging Face Hub?"
)
```
|
which outputs the following:

```text
==Code generated by the agent==
model = model_download_counter(task="text-to-video")
print(f"The model with the most downloads is {model}.")
audio_model = text_reader(model)

==Result==
The model with the most downloads is damo-vilab/text-to-video-ms-1.7b.
```
|
and generates the following audio. |
|
| Audio |
|-------|
| *(audio sample not reproducible in text)* |
|
|
|
Note that some LLMs are quite brittle and require very precise prompts in order to work well.
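As an aside, the `additional_tools` mechanism can be pictured as a simple merge of your custom tools into the agent's default toolbox, keyed by tool name. The following is a minimal, framework-free sketch of that idea; the default tool names and dict-based tool representation here are illustrative placeholders, not transformers internals:

```python
# Illustrative sketch (not transformers internals): tools passed via
# additional_tools are conceptually merged into the agent's default
# toolbox, keyed by tool name. The default entries below are placeholders.
DEFAULT_TOOLBOX = {
    "text_reader": "<built-in text-to-speech tool>",
    "image_generator": "<built-in image-generation tool>",
}

def build_toolbox(additional_tools):
    """Return the default toolbox extended with the given custom tools."""
    toolbox = dict(DEFAULT_TOOLBOX)
    for tool in additional_tools:
        # A custom tool with the same name as a default one replaces it.
        toolbox[tool["name"]] = tool
    return toolbox

custom_tool = {
    "name": "model_download_counter",
    "description": "Returns the most downloaded model checkpoint for a task.",
}
toolbox = build_toolbox([custom_tool])
print(sorted(toolbox))
# → ['image_generator', 'model_download_counter', 'text_reader']
```

This also hints at why a good, unique tool name matters: a custom tool whose name collides with a default one silently overrides it.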