Hugging Face Science

Activity Feed

AI & ML interests

None defined yet.

Recent Activity


ariG23498 posted an update 1 day ago
🚨 Implement KV Cache from scratch in pure PyTorch. 🚨

We documented everything we learned while adding KV Cache to nanoVLM. Joint work with @kashif @lusxvr @andito @pcuenq

Blog: hf.co/blog/kv-cache
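
If you just want the core idea before diving into the blog, here is a minimal sketch in plain PyTorch (illustrative, not the nanoVLM code): during decoding, each step appends its new key/value projections to a cache, so attention over the prefix is never recomputed.

import torch

class KVCache:
    # Running cache of keys/values, shaped (batch, heads, seq_len, head_dim).
    def __init__(self):
        self.k, self.v = None, None

    def update(self, k_new, v_new):
        # Append this step's K/V instead of recomputing the whole prefix.
        if self.k is None:
            self.k, self.v = k_new, v_new
        else:
            self.k = torch.cat([self.k, k_new], dim=2)
            self.v = torch.cat([self.v, v_new], dim=2)
        return self.k, self.v

def decode_step(q, k_new, v_new, cache):
    # q, k_new, v_new: (batch, heads, 1, head_dim) for the single new token.
    k, v = cache.update(k_new, v_new)
    scores = (q @ k.transpose(-2, -1)) / (q.shape[-1] ** 0.5)
    return torch.softmax(scores, dim=-1) @ v  # attends over all cached tokens
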
m-ric posted an update 1 day ago
If you haven't yet, you should read the technical report for SmolVLA, published yesterday by the Hugging Face robotics team!
➡️ Among other ideas, it introduces "async inference" to boost their robots' actions.

Robots have a problem: performing actions takes time (unlike agents, where action executions are near-instant!)
Most often, robots wait until they've finished performing their actions before starting to think about the next steps. This is a huge latency cost!

So the team decided to have the PolicyServer (aka the "thinking" part) restart early: instead of waiting for all n actions they just sent to be completed, they gather observations after k < n steps and start preparing the next actions based on those while the remaining steps run to n, so the next chunk is ready to send immediately (rough sketch after the report link below).

➡️ This boosted robot throughput by ~30%! (nearly 2× tasks per time window)

gg @cadene and team! 👏

Report here: SmolVLA: A Vision-Language-Action Model for Affordable and Efficient Robotics (2506.01844)
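
Here's a rough, hypothetical sketch of that overlap in Python (interface names are illustrative, not the SmolVLA/lerobot API): execution of the current action chunk and inference for the next one run concurrently once k steps have completed.

import threading

def run_episode(policy, robot, n=10, k=5):
    # Assumed interfaces: policy.predict(obs) -> list of n actions,
    # robot.observe(), robot.execute(action), robot.done().
    actions = policy.predict(robot.observe())
    while not robot.done():
        next_chunk, worker = None, None
        for i, action in enumerate(actions):
            robot.execute(action)
            if i + 1 == k:  # after k < n steps, start "thinking" early
                obs = robot.observe()
                def think(o=obs):
                    nonlocal next_chunk
                    next_chunk = policy.predict(o)
                worker = threading.Thread(target=think)
                worker.start()
        worker.join()  # usually done already: inference overlapped execution
        actions = next_chunk
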
fdaudens posted an update 2 days ago
This is the story of how open source AI created a $3M business for a news company:

On the GAIN blog, Clare Spencer tells how a Danish software engineer found OpenAI's Whisper model and turned it into Good Tape. It's now generating $3M ARR for the news service Zetland.

Great playbook on how to build a good product:
- The idea came from a software engineer, Jakob Steinn, who not only spotted a new model but also listened to feedback from his colleagues in the newsroom (he thought they would use it for translation, but they were more interested in transcription in Danish)
- They built iteratively: they went from running the model in the terminal to a notebook to a full-fledged web interface
- They didn't just wrap the API. They rebuilt the transcription engine from scratch, moved it to TPUs so an hour-long audio file processes in 45 seconds, and added EU-based data sovereignty

Now Good Tape has 2.5M users worldwide, only 30-35% of whom are journalists.
Small languages (Danish, Finnish, Croatian, Hebrew) were underserved by existing tools - put them together and suddenly there's a "very very big market."

This shows how open source AI can solve real workflow problems and create sustainable businesses. Sometimes the best opportunities emerge from solving your own daily problems.

Worth a read: https://generative-ai-newsroom.com/how-a-danish-news-service-made-a-profit-with-its-transcription-tool-285bc05b7cf9
fdaudens posted an update 8 days ago
🎵 Dream come true for content creators! TIGER AI can extract voice, effects & music from ANY audio file 🤯
This lightweight model uses frequency band-split technology to separate speech like magic. Kudos to @fffiloni for the amazing demo! fffiloni/TIGER-audio-extraction
fdaudens posted an update 10 days ago
Just completed the AI Agents course and wow, that capstone project really makes you understand how to build agents that can handle real-world complexity!

The final project uses the GAIA dataset - your agent has to solve tasks like analyzing Excel files, processing audio recordings, answering questions about YouTube videos, and diving into research papers. These aren't toy examples; it's the messy, multimodal stuff agents need to handle in practice.

Whether you're just getting started with agents or want to go deeper with tools like LangChain, LlamaIndex, and SmolAgents, this course has tons of useful stuff. A few key insights:
- Code agents are incredibly versatile once you get the architecture right
- The sweet spot is finding the right balance of guidance vs autonomy for each use case
- Once the logic clicks, the possibilities really are endless - it's like letting LLMs break free from the chatbox

The course is free and the certification deadline is July 1st, 2025.

The Hugging Face team built something special here. If you're tired of AI that impresses in demos but fails in practice, this is your path to building agents that actually deliver. https://huggingface.co/learn/agents-course/unit0/introduction

Best part? There's the MCP course next!
m-ric posted an update 10 days ago
A new research paper from KAIST builds on smolagents to push the boundaries of distillation 🥳
➡️ "Distilling LLM Agent into Small Models with Retrieval and Code Tools" teaches that, when distilling reasoning capability from a strong LLM (the "teacher") into a smaller one (the "student"), it's much better to use agent traces than CoT traces (rough sketch of the data-collection side below).

The advantages are:
1. Improved generalization
Intuitively, this is because your agent can encounter more "surprising" results by interacting with its environment: for example, a web search called by the LLM teacher in agent mode can return results that the LLM teacher would not have generated in CoT.

2. Reduced hallucinations
The trace won't hallucinate tool call outputs, because they come from actually running the tools!

Thank you @akseljoonas for mentioning this paper!
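
For flavor, a hedged sketch of what the data-collection side can look like with smolagents (class names follow the smolagents docs; the memory attribute may vary by version, and this is not the paper's code):

from smolagents import CodeAgent, DuckDuckGoSearchTool, HfApiModel

teacher = CodeAgent(
    tools=[DuckDuckGoSearchTool()],  # real tool calls ground the trace
    model=HfApiModel(),              # assumed teacher endpoint
)

train_questions = ["Who won the 2022 Fields Medal?"]  # toy task list

traces = []
for question in train_questions:
    answer = teacher.run(question)
    # The full step-by-step memory (thoughts, code, tool observations) is the
    # distillation signal; unlike CoT, observations come from the environment.
    traces.append({"prompt": question, "completion": str(teacher.memory.steps)})

# `traces` then becomes the SFT dataset for the small student model.
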
fdaudens posted an update 12 days ago
Two lines in your terminal and you have an AI agent running whatever model and tools you want 🤯

Just tried the new Tiny Agents in Python. Asked it which team won the Italian Serie A soccer league and to export the final table to CSV. Coolest thing is you can interact with the agent, guide it, and correct its mistakes.

The agent connected to web browsing tools, searched for Serie A standings, identified the champion, and generated a CSV export.

The setup:
pip install "huggingface_hub[mcp]>=0.32.0"
tiny-agents run


That's it. The MCP protocol handles all the tool integrations automatically - no custom APIs to write, no complex setups. Want file system access? It's already there. Need web browsing? Built in.

You can swap models, change inference providers, run local models, or add new tools just by editing a simple JSON config. You can also use Gradio Spaces as MCP servers! The entire agent is ~70 lines of Python - essentially a while loop that streams responses and executes tools. Everything is open-source. ❤️ Hugging Face
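
For intuition, here's a hedged sketch of the shape of that loop (not the actual huggingface_hub MCP client code; TOOLS and call_mcp_tool stand in for the MCP plumbing):

from huggingface_hub import InferenceClient

client = InferenceClient()  # assumes HF_TOKEN is configured
messages = [{"role": "user", "content": "Which team won Serie A? Export the final table to CSV."}]

while True:
    # TOOLS would hold the tool schemas advertised by the connected MCP servers.
    response = client.chat_completion(messages=messages, tools=TOOLS)
    message = response.choices[0].message
    if not message.tool_calls:
        print(message.content)  # no more tool calls: this is the final answer
        break
    messages.append({"role": "assistant", "tool_calls": message.tool_calls})
    for call in message.tool_calls:
        result = call_mcp_tool(call)  # hypothetical dispatch to an MCP server
        messages.append({"role": "tool", "tool_call_id": call.id, "content": result})
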

Blog post: https://huggingface.co/blog/python-tiny-agents
fdaudens posted an update 13 days ago
Here's what happens when a national institution builds its own digital intelligence: France's Ministry of Culture just released 17K+ real conversations from users testing 30+ chatbots in French. Raw, diverse, and a goldmine for studying LLMs in the wild.

ministere-culture/comparia-conversations
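
If you want to poke at it, loading is standard datasets usage (the split name here is an assumption; check the dataset card):

from datasets import load_dataset

ds = load_dataset("ministere-culture/comparia-conversations", split="train")
print(ds)     # features and row count
print(ds[0])  # one raw conversation
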
clefourrier posted an update 18 days ago
Always surprised that so few people actually read the FineTasks blog on
✨ how to select training evals with the highest signal ✨

If you're serious about training models without wasting compute on shitty runs, you absolutely should read it!!

A high-signal eval actually tells you precisely, during training, how well & what your model is learning, allowing you to discard the bad runs/bad samplings/...!

The blog covers prompt choice, metrics, and datasets in depth, across languages/capabilities, and my fave section is "which properties should evals have" 👌
(so you know how to select the best evals for your own use case)

Blog: HuggingFaceFW/blogpost-fine-tasks
loubnabnl posted an update 20 days ago
fdaudens posted an update 22 days ago
Tried something new: an AI-generated podcast that breaks down the top research paper each day. Fully automated, now live on Spotify.

I built this prototype to help keep up with the rapid pace of AI developments and, hopefully, make cutting-edge research more accessible. I don't know about you, but just listening to a conversation about a paper really helps the content sink in for me.

This build taught me a lot about full automation. If you're into the technical weeds: Qwen3 runs on Inference to handle the script, Kokoro does the voice, and the whole thing gets published automatically thanks to the Hugging Face Jobs API and Gradio deployment.
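
For the curious, a very rough sketch of how such a pipeline can be wired together (the helper functions are hypothetical placeholders, not the actual fdaudens/podcast-jobs code):

from huggingface_hub import InferenceClient

def daily_episode(paper_title: str, paper_abstract: str) -> None:
    # 1) An LLM drafts a two-host dialogue about the paper (Qwen3 in the post;
    #    the exact checkpoint name below is an assumption).
    client = InferenceClient()
    script = client.chat_completion(
        model="Qwen/Qwen3-32B",
        messages=[{
            "role": "user",
            "content": f"Write a short podcast dialogue about '{paper_title}':\n{paper_abstract}",
        }],
    ).choices[0].message.content

    # 2) A TTS model reads the script (Kokoro in the post).
    audio = synthesize_speech(script)  # hypothetical TTS wrapper

    # 3) Publishing runs automatically (HF Jobs API + Gradio in the post).
    publish_episode(audio)             # hypothetical upload/publish step
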

It's not perfect yet: I'll be monitoring for hallucinations and incoherence. The voice model still needs polish, but it's a promising start. Would love to build this with the community: submit a PR or send feedback. It's just a beta of an experimental idea!

Big kudos to @m-ric, whose Open NotebookLM this is based on, and to @nielsr for his terrific work making research papers more accessible.

- Podcast on Spotify: https://open.spotify.com/show/3PTucIW1w1GIkqTYm32ka7?si=c7a851f83e6d4331 (Apple Podcasts soon)
- Code: fdaudens/podcast-jobs
- Open NotebookLM: m-ric/open-notebooklm
- Also super helpful, @qgallouedec 's tutorial on HF Jobs API: qgallouedec/run-hello-world