If you haven't yet, you should read the technical report for SmolVLA, published yesterday by the Hugging Face robotics team! ⚡️ Among other ideas, it introduces "async inference" to boost their robot actions.
Robots have a problem: performing actions takes time (unlike software agents, whose actions execute near-instantly!). Most often, robots wait until they've finished performing their current actions before they start thinking about the next steps. This is a huge latency cost!
So the team decided to have the PolicyServer (aka the "thinking" part) restart early: instead of waiting for all n actions it just sent to finish executing, it gathers a fresh observation after k < n steps and starts preparing the next action chunk while the remaining steps run, so the next actions are ready to send immediately.
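A minimal sketch of that overlap, assuming hypothetical robot.get_observation(), robot.execute(), and server.predict_chunk() helpers (this is not the actual lerobot/SmolVLA API):

```python
import asyncio
from collections import deque

N = 10  # actions per chunk returned by the policy server
K = 7   # after k < n executed steps, send a fresh observation

async def control_loop(robot, server):
    actions = deque(await server.predict_chunk(robot.get_observation()))
    next_chunk = None
    executed = 0
    while True:
        # Start computing the next chunk early, while actions still run.
        if executed == K and next_chunk is None:
            next_chunk = asyncio.create_task(
                server.predict_chunk(robot.get_observation())
            )
        if actions:
            await robot.execute(actions.popleft())
            executed += 1
        elif next_chunk is not None:
            # Old chunk exhausted: the next one is already (nearly) done,
            # so the robot barely idles between chunks.
            actions.extend(await next_chunk)
            next_chunk, executed = None, 0
```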
⚡️ This boosted robot throughput by ~30% (nearly 2× the tasks completed per time window)!
This is the story of how open source AI created a $3M business for a news company:
On the GAIN blog, Clare Spencer tells how a Danish software engineer found OpenAI's Whisper model and turned it into Good Tape. It's now generating $3M ARR for news service Zetland.
Great playbook on how to build a good product:
- The idea came from a software engineer, Jakob Steinn, who not only spotted a new model, but also listened to feedback from his colleagues in the newsroom (he thought they would use it for translation, but they were more interested in transcription in Danish)
- They built iteratively: from running the model in the terminal, to a notebook, to a full-fledged web interface
- They didn't just wrap the API: they rebuilt the transcription engine from scratch, moved it to TPUs for 45-second processing of hour-long audio, and added EU-based data sovereignty
Now Good Tape has 2.5M users worldwide, with only 30-35% being journalists. Small languages (Danish, Finnish, Croatian, Hebrew) were underserved by existing tools - suddenly there's a "very very big market" when you put them together.
This shows how open source AI can solve real workflow problems and create sustainable businesses. Sometimes the best opportunities emerge from solving your own daily problems.
🎵 Dream come true for content creators! TIGER AI can extract voice, effects & music from ANY audio file 🤯 This lightweight model uses frequency band-split technology to separate speech like magic. Kudos to @fffiloni for the amazing demo! fffiloni/TIGER-audio-extraction
Just completed the AI Agents course and wow, that capstone project really makes you understand how to build agents that can handle real-world complexity!
The final project uses the GAIA dataset: your agent has to solve tasks like analyzing Excel files, processing audio recordings, answering questions about YouTube videos, and diving into research papers. These aren't toy examples; it's the messy, multimodal stuff agents need to handle in practice.
Whether you're just getting started with agents or want to go deeper with tools like LangChain, LlamaIndex, and SmolAgents, this course has tons of useful stuff. A few key insights:
- Code agents are incredibly versatile once you get the architecture right
- The sweet spot is finding the right balance of guidance vs autonomy for each use case
- Once the logic clicks, the possibilities really are endless - it's like letting LLMs break free from the chatbox
The course is free and the certification deadline is July 1st, 2025.
A new research paper from KAIST builds on smolagents to push the boundaries of distillation 🥳 ➡️ "Distilling LLM Agent into Small Models with Retrieval and Code Tools" shows that, when distilling reasoning capability from a strong LLM ("teacher") into a smaller one ("student"), it's much better to use agent traces than CoT traces.
Advantages are:
1. Improved generalization. Intuitively, this is because the agent can encounter more "surprising" results by interacting with its environment: for example, a web search invoked by the teacher LLM in agent mode can return results that the teacher would never have generated on its own in CoT.
2. Reduced hallucinations. The trace won't hallucinate tool call outputs, since those come from the real environment!
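To make the contrast concrete, here is a toy sketch of the two kinds of distillation targets (the schema and the web_search tool are made up for this example, not the paper's actual data format):

```python
question = "Who won the 2022 Nobel Prize in Physics?"

# CoT trace: every token is generated by the teacher, including "facts",
# so hallucinations can leak straight into the student's training data.
cot_target = (
    "Recalling from memory... the 2022 laureates were Aspect, Clauser "
    "and Zeilinger, for experiments with entangled photons."
)

# Agent trace: the tool output is real environment feedback, not
# generated text, so the student learns to ground answers in tools.
agent_target = [
    {"thought": "I shouldn't rely on memory; let me search."},
    {"tool_call": "web_search('2022 Nobel Prize in Physics winners')"},
    {"tool_output": "Alain Aspect, John F. Clauser and Anton Zeilinger..."},
    {"answer": "Aspect, Clauser and Zeilinger."},
]
```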
Two lines in your terminal and you have an AI agent running whatever model and tools you want 🤯
Just tried the new Tiny Agents in Python. Asked it which team won the Italian Serie A soccer league and to export the final table to CSV. Coolest thing is you can interact with the agent, guide it, and correct its mistakes.
The agent connected to web browsing tools, searched for Serie A standings, identified the champion, and generated a CSV export.
The setup:
pip install "huggingface_hub[mcp]>=0.32.0"
tiny-agents run
That's it. The MCP protocol handles all the tool integrations automatically - no custom APIs to write, no complex setups. Want file system access? It's already there. Need web browsing? Built in.
You can swap models, change inference providers, run local models, or add new tools just by editing a simple JSON config. You can also use Gradio Spaces as MCP servers! The entire agent is ~70 lines of Python - essentially a while loop that streams responses and executes tools. Everything is open-source. ❤️ Hugging Face
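For intuition, here's the gist of that loop; llm_stream() and call_tool() below are hypothetical stand-ins for the real inference client and MCP plumbing, not the actual tiny-agents source:

```python
def agent_loop(user_prompt: str, tools: dict) -> str:
    messages = [{"role": "user", "content": user_prompt}]
    while True:
        reply = llm_stream(messages, tools=tools)  # stream one model turn
        messages.append(reply)
        if not reply.get("tool_calls"):
            return reply["content"]  # no tool calls left: final answer
        for call in reply["tool_calls"]:
            result = call_tool(tools, call)  # MCP server executes the tool
            messages.append({"role": "tool", "content": result})
```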
Here's what happens when a national institution builds its own digital intelligence: France's Ministry of Culture just released a dataset from 17K+ real users testing 30+ chatbots in French. Raw, diverse, and a goldmine for studying LLMs in the wild.
- It's still free!
- Video 1 walks you through onboarding to the course
- The first live session is next week!
- You can now get a certificate via the exam app
- We improved the written material with interactive quizzes
If you're studying MCP and want a live, interactive, visual, certified course, then join us on the hub!
Yesterday, we dropped a new conversational viewer for datasets on the hub! 💬
Actually being able to view and inspect your data is extremely important. This is a big step in making data more accessible and actionable for everyone.
hey hey @mradermacher - VB from Hugging Face here, we'd love to onboard you over to our optimised xet backend! 🔥
as you know we're in the process of upgrading our storage backend to xet (which helps us scale and offer blazingly fast upload/download speeds too): https://huggingface.co/blog/xet-on-the-hub. now that we are certain the backend can scale with even big models like Llama 4/Qwen 3, we're moving to the next phase of inviting impactful orgs and users on the hub over. as you are a big part of the open source ML community, we would love to onboard you next and create some excitement about it in the community too!
in terms of actual steps - it should be as simple as one of the org admins joining hf.co/join/xet - we'll take care of the rest.
Always surprised that so few people actually read the FineTasks blog, on ✨how to select training evals with the highest signal✨
If you're serious about training models without wasting compute on shitty runs, you absolutely should read it!!
A high-signal eval tells you precisely, during training, how well your model is learning and what it is learning, allowing you to discard the bad runs/bad samplings/...!
The blog covers prompt choice, metrics, and datasets in depth, across languages and capabilities, and my fave section is "which properties should evals have" (to know how to select the best evals for your use case)
We're thrilled to announce the launch of our comprehensive Model Context Protocol (MCP) Course! This free program is designed to take learners from foundational understanding to practical application of MCP in AI.
In this course, you will:
📖 Study Model Context Protocol in theory, design, and practice.
🧑‍💻 Learn to use established MCP SDKs and frameworks.
💾 Share your projects and explore applications created by the community.
🏆 Participate in challenges and evaluate your MCP implementations.
🎓 Earn a certificate of completion.
At the end of this course, you'll understand how MCP works and how to build your own AI applications that leverage external data and tools using the latest MCP standards.
Tried something new: an AI-generated podcast that breaks down the top research paper each day. Fully automated, now live on Spotify.
I built this prototype to help keep up with the rapid pace of AI developments and, hopefully, make cutting-edge research more accessible. I don't know about you, but just listening to a conversation about a paper really helps the content sink in for me.
This build taught me a lot about full automation. If you're into the technical weeds: Qwen3 runs on Inference to handle the script, Kokoro does the voice, and the whole thing gets published automatically thanks to the Hugging Face Jobs API and Gradio deployment.
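For the curious, a hedged sketch of those two model calls (the model ID, voice, and prompt are illustrative choices, not necessarily the exact ones in the pipeline):

```python
import soundfile as sf
from huggingface_hub import InferenceClient
from kokoro import KPipeline

abstract = "..."  # the day's top paper abstract, fetched elsewhere

# 1) Qwen3 on Inference drafts the podcast script.
client = InferenceClient()
script = client.chat_completion(
    model="Qwen/Qwen3-32B",  # illustrative model ID
    messages=[{
        "role": "user",
        "content": f"Write a two-host podcast dialogue about this paper: {abstract}",
    }],
).choices[0].message.content

# 2) Kokoro turns the script into audio segments (24 kHz).
tts = KPipeline(lang_code="a")  # 'a' = American English
for i, (_, _, audio) in enumerate(tts(script, voice="af_heart")):
    sf.write(f"segment_{i}.wav", audio, 24000)
```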
It's not perfect yet: I'll be monitoring for hallucinations and incoherence. The voice model still needs polish, but it's a promising start. Would love to build this with the community: submit a PR or send feedback. It's just a beta of an experimental idea!
Big kudos to @m-ric, whose Open NotebookLM this is based on, and to @nielsr for his terrific work making research papers more accessible.
Recent RL paradigms often rely on a set of questions and answers that needs to be manually curated. Researchers from Tsinghua University went "why though?"
🤔 Indeed, why learn from questions designed by a human teacher, when the model can start from its base knowledge and learn by experimenting in a code environment, proposing coding tasks itself and trying to solve them?
Thus they created "Absolute Zero Reasoning" (AZR), an approach that removes any need for human-curated data.
🎭 Dual roles:
‣ Proposer: Generates challenging but solvable coding tasks
‣ Solver: Attempts to solve those self-proposed tasks
🧪 Three task types: all are defined as triplets of program, input, and output
‣ Deduction: Give the model an input and a program; it must deduce the output
‣ Abduction: Give the model a program and an output; it must find an input that yields that output
‣ Induction: Synthesize a program from input/output pairs
Btw this reminded me of my long-forgotten philosophy classes: Aristotle was more on the induction side, learning from real-world analogies, while Plato was more on the deduction side, trying to progress quite far with just one input and his reasoning.
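A toy sketch of those triplets (the program here is mine, not from the paper's code):

```python
def f(x):
    return 2 * x + 1

triplet = (f, 3, 7)  # (program, input, output)

# Deduction: given the program and input, predict the output.
assert f(3) == 7

# Abduction: given the program and output, find an input producing it.
assert [x for x in range(10) if f(x) == 7] == [3]

# Induction: given input/output pairs, synthesize the program itself.
pairs = [(0, 1), (1, 3), (2, 5)]  # the model must infer f(x) = 2*x + 1
```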
📊 Results:
‣ AZR post-training creates a nice improvement on known models like Qwen2.5-7B
‣ Shows strong cross-domain transfer: coding ↔️ math reasoning
🧠 Other findings:
‣ Better base performance (general or code-specific) amplifies the gains from Absolute Zero Reasoning
‣ The researchers warn about "uh-oh moments" (winking at the "aha moments" of DeepSeek), where the model generates concerning goals like "make an extremely convoluted code to outsmart all these humans": so supervision is still needed!
Hey! I built an AI Agent to query the FOIA API for a workshop at the Hacks/Hackers Summit in Baltimore and you can do it too!
It's a quick proof of concept to demo what agents can do, how to design workflows, and how to approach the coding side.
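If you want the flavor of it in code, here's a minimal smolagents-style sketch; the FOIA endpoint, parameters, and prompt are placeholders, not the exact ones from the workshop:

```python
import requests
from smolagents import CodeAgent, InferenceClientModel, tool

@tool
def search_foia(query: str) -> str:
    """Search FOIA.gov data for a free-text query.

    Args:
        query: Search terms, e.g. an agency name.
    """
    resp = requests.get(
        "https://api.foia.gov/api/annual-report",  # placeholder endpoint
        params={"q": query},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.text[:2000]  # truncate to keep the context small

agent = CodeAgent(tools=[search_foia], model=InferenceClientModel())
print(agent.run("Which agency received the most FOIA requests last year?"))
```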