smolagents v1.19.0 is live! This release brings major improvements to agent flexibility, UI usability, streaming architecture, and developer experience, making it easier than ever to build smart, interactive AI agents. Here's what's new:
Agent Upgrades:
- Support for managed agents in ToolCallingAgent
- Context manager support for cleaner agent lifecycle handling
- Output formatting now uses XML tags for consistency
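A minimal sketch of how these two additions can be combined. Names follow the current smolagents API (InferenceClientModel stands in for whichever model wrapper you use), and the assumption that the context manager wraps the agent instance itself is inferred from the release note, so treat this as a sketch rather than the canonical usage:

```python
from smolagents import InferenceClientModel, ToolCallingAgent, WebSearchTool

model = InferenceClientModel()  # any smolagents model wrapper works here

# A sub-agent the manager can delegate to; name/description tell the manager when to call it.
web_agent = ToolCallingAgent(
    tools=[WebSearchTool()],
    model=model,
    name="web_search_agent",
    description="Searches the web and returns short summaries of the results.",
)

# ToolCallingAgent can now manage other agents, and the context manager handles
# setup/teardown of the agent's resources when the block exits.
with ToolCallingAgent(tools=[], model=model, managed_agents=[web_agent]) as manager:
    manager.run("What changed in the latest smolagents release?")
```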
UI Enhancements:
- GradioUI now supports reset_agent_memory: perfect for fresh starts in dev & demos.
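A quick sketch of the UI flag; it is shown here as a constructor argument, which is an assumption, so check the GradioUI docstring for where reset_agent_memory actually lives in your version:

```python
from smolagents import CodeAgent, GradioUI, InferenceClientModel

agent = CodeAgent(tools=[], model=InferenceClientModel())

# reset_agent_memory clears the agent's memory between runs: handy for demos
# where every question should start from a clean slate.
GradioUI(agent, reset_agent_memory=True).launch()
```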
Streaming Refactor:
- Streaming event aggregation moved off the Model class → better architecture & maintainability
Output Tracking:
- CodeAgent outputs are now stored in ActionStep → more visibility and structure for agent decisions
Bug Fixes:
- Smarter planning logic
- Cleaner Docker logs
- Better prompt formatting for additional_args
- Safer internal functions and final-answer matching
Docs Improvements:
- Added quickstart examples with tool usage
- One-click Colab launch buttons
- Expanded reference docs (AgentMemory, GradioUI docstrings)
- Fixed broken links and migrated to .md format
Always surprised that so few people actually read the FineTasks blog on how to select training evals with the highest signal.
If you're serious about training models without wasting compute on shitty runs, you absolutely should read it!!
A high-signal eval actually tells you precisely, during training, how well and what your model is learning, allowing you to discard the bad runs/bad samplings/etc.!
The blog covers prompt choice, metrics, and datasets in depth, across languages and capabilities, and my fave section is "which properties should evals have" (to figure out, for your use case, how to select the best evals for you).
New in smolagents v1.16.0:
- Bing support in WebSearchTool
- Custom functions & executor_kwargs in LocalPythonExecutor
- Streaming GradioUI fixes
- Local web agents via api_base & api_key
- Better docs
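A hedged sketch of how these pieces might fit together. The keyword names engine and executor_kwargs are assumptions based on the release notes above, and OpenAIServerModel is just one model wrapper that accepts api_base/api_key:

```python
from smolagents import CodeAgent, OpenAIServerModel, WebSearchTool

# Point the agent at a local OpenAI-compatible server (e.g. vLLM or llama.cpp).
model = OpenAIServerModel(
    model_id="local-model",
    api_base="http://localhost:8000/v1",
    api_key="not-needed-locally",
)

agent = CodeAgent(
    tools=[WebSearchTool(engine="bing")],  # assumed keyword for the Bing backend
    model=model,
    # Assumed to be forwarded to LocalPythonExecutor; check the docs for the exact hook.
    executor_kwargs={"max_print_outputs_length": 10_000},
)

agent.run("Search for the smolagents changelog and summarize v1.16.0.")
```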
smolagents v1.14.0 is out!
- MCPClient: a sleek new client for connecting to remote MCP servers, making integrations more flexible and scalable.
- Amazon Bedrock: native support for Bedrock-hosted models.
smolagents is now more powerful, flexible, and enterprise-ready.
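A sketch of the MCPClient flow using current import names and assuming an SSE-style server reachable by URL; the exact server-parameter schema may differ in your version, so see the MCPClient docs:

```python
from smolagents import CodeAgent, InferenceClientModel, MCPClient

# Connect to a remote MCP server and expose its tools to the agent.
with MCPClient({"url": "http://localhost:8000/sse"}) as tools:
    agent = CodeAgent(tools=tools, model=InferenceClientModel())
    agent.run("Use the remote tools to fetch today's top story and summarize it.")
```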
If you've followed the progress of robotics over the past 18 months, you've likely noticed that it is increasingly becoming the next frontier that AI will unlock.
At Hugging Face, in robotics and across all AI fields, we believe in a future where AI and robots are open-source, transparent, and affordable; community-built and safe; hackable and fun. We've had so much mutual understanding and passion working with the Pollen Robotics team over the past year that we decided to join forces!
You can already find our open-source humanoid robot platform, Reachy 2, on the Pollen website, and the Pollen team and community here on the Hub at pollen-robotics.
We're so excited to build and share more open-source robots with the world in the coming months!
The new DeepSite space is really insane for vibe-coders: enzostvs/deepsite
With the wave of vibe-coding-optimized LLMs like the latest open-source DeepSeek model (version V3-0324), you can basically prompt out of the box and create any app or game in one shot.
It feels so powerful to me: no more complex frameworks or under-the-hood prompt engineering needed to get a working text-to-app tool.
AI is eating the world and *open-source* AI is eating AI itself!
PS: even more meta, the DeepSite app and the DeepSeek model are both fully open-source code => time to start recursively improving?
PPS: you still need some inference hosting unless you're running the 600B param model at home, so check the very nice list of HF Inference Providers for this model: deepseek-ai/DeepSeek-V3-0324
It's beating Claude 3.7 on competitive programming (a domain where Anthropic has historically been really strong) and it's getting close to o1-mini/R1 on olympiad-level coding, with just 7B parameters!
The Gemma 3 family is out! I've been reading the tech report, and this section was really interesting to me from a methods/scientific-fairness point of view.
Instead of making over-hyped comparisons, they clearly state that **results are reported in a setup that is advantageous to their models**. (Which everybody does, but usually doesn't say.)
For a tech report, it makes a lot of sense to report model performance when the model is used optimally! On leaderboards, on the other hand, comparisons are apples to apples, but potentially in a suboptimal setup for a given model family (just as some users interact with models sub-optimally).
It also contains a cool section (6) on training-data memorization rates! It's important to check whether your model will output the training data it has seen verbatim: always an issue for privacy/copyright, but very much an issue for evaluation too!
Because if your model knows its evals by heart, you're not testing for generalization.
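Here's an illustrative (and deliberately crude) way to check for that kind of verbatim contamination. This is not the Gemma 3 methodology, just a word-level n-gram overlap sketch:

```python
def ngrams(text: str, n: int = 8) -> set[tuple[str, ...]]:
    """All word-level n-grams of a text."""
    toks = text.split()
    return {tuple(toks[i:i + n]) for i in range(len(toks) - n + 1)}

def is_contaminated(eval_sample: str, train_docs: list[str], n: int = 8) -> bool:
    """Flag an eval sample if any of its n-grams appears verbatim in the training docs."""
    sample_grams = ngrams(eval_sample, n)
    return any(sample_grams & ngrams(doc, n) for doc in train_docs)

train_docs = ["the quick brown fox jumps over the lazy dog near the old river bank"]
print(is_contaminated("quick brown fox jumps over the lazy dog near the old river", train_docs))  # True
```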
New smolagents update: Safer Local Python Execution!
With the latest release, we've added security checks to the local Python interpreter: every evaluation is now analyzed for dangerous builtins, modules, and functions.
Here's why this matters & what you need to know!
1. Why is local execution risky? AI agents that run arbitrary Python code can unintentionally (or maliciously) access system files, run unsafe commands, or exfiltrate data.
2. New Safety Layer in smolagents. We now inspect every return value during execution (a minimal sketch follows this thread):
- Allowed: safe built-in types (e.g., numbers, strings, lists)
- Blocked: dangerous functions/modules (e.g., os.system, subprocess, exec, shutil)
4. Security Disclaimer. Despite these improvements, local Python execution is NEVER 100% safe. If you need true isolation, use a remote sandboxed executor like Docker or E2B.
5. The Best Practice: Use Sandboxed Execution. For production-grade AI agents, we strongly recommend running code in a Docker or E2B sandbox to ensure complete isolation.
6. Upgrade Now & Stay Safe! Check out the latest smolagents release and start building safer AI agents today.
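As a concrete, hedged illustration of the local execution model: by default the local interpreter only allows a small whitelist of imports, and you opt in to anything extra. InferenceClientModel is used here as a stand-in for whatever model wrapper you prefer:

```python
from smolagents import CodeAgent, InferenceClientModel

agent = CodeAgent(
    tools=[],
    model=InferenceClientModel(),
    # Imports outside the default safe list must be whitelisted explicitly;
    # modules like os, subprocess, or shutil stay blocked.
    additional_authorized_imports=["math", "statistics"],
)

# Generated code that stays within the allowed builtins/modules runs fine;
# anything touching blocked modules or dangerous builtins is rejected at runtime.
agent.run("Compute the standard deviation of [1, 2, 3, 4, 5].")
```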
Big news for AI agents! With the latest release of smolagents, you can now securely execute Python code in sandboxed Docker or E2B environments.
Here's why this is a game-changer for agent-based systems:
1. Security First. Running AI agents in unrestricted Python environments is risky! With sandboxing, your agents are isolated, preventing unintended file access, network abuse, or system modifications.
2. Deterministic & Reproducible Runs. By running agents in containerized environments, you ensure that every execution happens in a controlled and predictable setting: no more environment mismatches or dependency issues!
3. Resource Control & Limits. Docker and E2B allow you to enforce CPU, memory, and execution time limits, so rogue or inefficient agents don't spiral out of control.
4. Safer Code Execution in Production. Deploy AI agents confidently, knowing that any generated code runs in an ephemeral, isolated environment, protecting your host machine and infrastructure.
5. Easy to Integrate. With smolagents, you can simply configure your agent to use Docker or E2B as its execution backend: no need for complex security setups! (See the sketch after this thread.)
6. Perfect for Autonomous AI Agents. If your AI agents generate and execute code dynamically, this is a must-have to avoid security pitfalls while enabling advanced automation.
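A minimal sketch of switching the execution backend. The executor_type argument and its accepted values are assumptions based on recent smolagents releases, so double-check the docs for your installed version:

```python
from smolagents import CodeAgent, InferenceClientModel

agent = CodeAgent(
    tools=[],
    model=InferenceClientModel(),
    executor_type="docker",  # assumed value; "e2b" would target an E2B cloud sandbox
)

# All generated code now runs inside an ephemeral container rather than on the host.
agent.run("Create a CSV with the squares of 1..10 and report its size in bytes.")
```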
Introducing @huggingface Open Deep-Research!
In just 24 hours, we built an open-source agent that can:
- Autonomously browse the web
- Search, scroll & extract info
- Download & manipulate files
- Run calculations on data
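For context, here is a rough sketch of what such a web-research agent looks like when assembled from smolagents building blocks. The real Open Deep-Research implementation lives in the smolagents examples, and the tool and import choices below are assumptions:

```python
from smolagents import CodeAgent, InferenceClientModel, VisitWebpageTool, WebSearchTool

agent = CodeAgent(
    tools=[WebSearchTool(), VisitWebpageTool()],  # web search + page reading
    model=InferenceClientModel(),
    additional_authorized_imports=["pandas"],     # for calculations on downloaded data
)

agent.run("Find the three most recent smolagents releases and summarize what changed in each.")
```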