Nymbo (PRO)
AI & ML interests
I like unrestricted, free, utilitarian stuff. I tend to archive good spaces because I don't trust y'all to keep them good :) Most spaces with runtime errors just need a restart. Paused spaces should work but require a GPU; duplicate them to use.
Recent Activity
upvoted a changelog about 7 hours ago: New Inference Providers Dashboard
upvoted a collection about 9 hours ago: Qwen3-Embedding
updated a Space about 23 hours ago: Nymbo/conversational-webgpu
Nymbo's activity

reacted to clem's post with 🔥🤗🚀 · 7 days ago

reacted to clem's post with 🤗 · 10 days ago
It's just become easier to share your apps on the biggest AI app store (aka HF Spaces) for unlimited storage, more visibility and community interactions.

Just pick a React, Svelte, or Vue template when you create your Space, or add app_build_command: npm run build and app_file: build/index.html in your README's YAML block.

Or follow this link: https://huggingface.co/new-space?sdk=static

Let's build!
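
For reference, a static Space's README front matter using the two keys from the post might look like the sketch below; the title and build directory are placeholders, and only sdk, app_build_command, and app_file come from the post:

```yaml
---
title: my-static-app              # placeholder Space name
sdk: static                       # the static Spaces SDK linked above
app_build_command: npm run build
app_file: build/index.html        # built entry point served by the Space
---
```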

reacted to abidlabs's post with ❤️ · 20 days ago
HOW TO ADD MCP SUPPORT TO ANY 🤗 SPACE
Gradio now supports MCP! If you want to convert an existing Space, like this one hexgrad/Kokoro-TTS, so that you can use it with Claude Desktop / Cursor / Cline / TinyAgents / or any LLM that supports MCP, here's all you need to do:

1. Duplicate the Space (in the Settings tab)
2. Upgrade the Gradio sdk_version to 5.28 (in the README.md)
3. Set mcp_server=True in launch()
4. (Optionally) add docstrings to the function so that the LLM knows how to use it, like this:
def generate(text, speed=1):
    """
    Convert text to speech audio.

    Parameters:
        text (str): The input text to be converted to speech.
        speed (float, optional): Playback speed of the generated speech.
    """
That's it! Now your LLM will be able to talk to you 🤯
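
Put together, a minimal Space script following these steps could look like the sketch below. The TTS body is stubbed out and the Interface wiring is an assumption; only the docstring pattern and mcp_server=True come from the post:

```python
import gradio as gr

def generate(text, speed=1):
    """
    Convert text to speech audio.

    Parameters:
        text (str): The input text to be converted to speech.
        speed (float, optional): Playback speed of the generated speech.
    """
    ...  # call your actual TTS model here (stub)

# Wire the function into a simple UI (assumed layout) and expose it as an MCP server.
demo = gr.Interface(fn=generate, inputs=["text", "number"], outputs="audio")
demo.launch(mcp_server=True)  # requires Gradio sdk_version 5.28 or later
```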

reacted to albertvillanova's post with 😎🔥🤗 · 20 days ago
New in smolagents v1.16.0:
🔍 Bing support in WebSearchTool
🐍 Custom functions & executor_kwargs in LocalPythonExecutor
🔧 Streaming GradioUI fixes
🌐 Local web agents via api_base & api_key
📚 Better docs
👉 https://github.com/huggingface/smolagents/releases/tag/v1.16.0
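
As a sketch of the new search option, WebSearchTool can presumably be pointed at Bing via an engine argument; the keyword name below is an assumption based on the release note, so verify it against the v1.16.0 docs:

```python
from smolagents import WebSearchTool

# Assumed keyword: the release notes announce Bing support in WebSearchTool,
# but check the v1.16.0 API reference for the exact parameter name.
search = WebSearchTool(engine="bing")
print(search("Hugging Face smolagents v1.16.0"))
```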

reacted to cbensimon's post with 🔥 · 20 days ago
🚀 ZeroGPU medium size is now available as a power-user feature

Nothing too fancy for now: ZeroGPU Spaces still default to large (70GB VRAM), but this paves the way for:
- 💰 size-based quotas / pricing (medium will offer significantly more usage than large)
- 🦣 the upcoming xlarge size (141GB VRAM)

You can as of now control GPU size via a Space variable. Accepted values:
- auto (future default)
- medium
- large (current default)

The auto mode checks total CUDA tensor size during startup:
- More than 30GB → large
- Otherwise → medium
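
If you manage Spaces programmatically, the variable can also be set with huggingface_hub, roughly as in the sketch below. The key name is hypothetical, since the post only says "a Space variable"; confirm the actual key in the ZeroGPU docs or the Space settings UI:

```python
from huggingface_hub import HfApi

api = HfApi()
# Set the GPU-size variable on a ZeroGPU Space.
# NOTE: "ZEROGPU_SIZE" is a hypothetical key name; the post does not name the variable.
api.add_space_variable(
    repo_id="your-username/your-zerogpu-space",  # placeholder Space id
    key="ZEROGPU_SIZE",                          # hypothetical key
    value="medium",                              # accepted values per the post: auto | medium | large
)
```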

posted an update about 1 month ago
PSA for anyone using Nymbo/Nymbo_Theme or Nymbo/Nymbo_Theme_5 in a Gradio space ~

Both of these themes have been updated to fix some of the long-standing inconsistencies ever since the transition to Gradio v5. Textboxes are no longer bright green and in-line code is readable now! Both themes are now visually identical across versions.

If your space is already using one of these themes, you just need to restart your space to get the latest version. No code changes needed.
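
For context, a Space typically references these themes from the Hub by repo id, roughly as in this sketch (the components shown are placeholders):

```python
import gradio as gr

# Gradio resolves Hub-hosted themes passed as a "user/repo" string at startup,
# which is why restarting the Space is enough to pick up the updated theme.
with gr.Blocks(theme="Nymbo/Nymbo_Theme") as demo:
    gr.Textbox(label="Prompt")                  # placeholder component
    gr.Code(label="Output", language="python")  # in-line code styling comes from the theme

demo.launch()
```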

reacted to Xenova's post with 🔥 · about 1 month ago
Introducing the ONNX model explorer: Browse, search, and visualize neural networks directly in your browser. 🤯 A great tool for anyone studying Machine Learning! We're also releasing the entire dataset of graphs so you can use them in your own projects! 🤗
Check it out! 👇
Demo: onnx-community/model-explorer
Dataset: onnx-community/model-explorer
Source code: https://github.com/xenova/model-explorer
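
To pull the released graphs locally, the dataset can presumably be loaded with the datasets library; the split name below is an assumption, so check the dataset card for the actual layout:

```python
from datasets import load_dataset

# Load the ONNX graph dataset released alongside the explorer.
# The split is an assumption; consult the dataset card for the real configuration.
ds = load_dataset("onnx-community/model-explorer", split="train")
print(ds)
```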

reacted to fantos's post with 🔥 · about 1 month ago
🎨 BadgeCraft: Create Beautiful Badges with Ease! ✨
Hello there! Today I'm introducing BadgeCraft, a simple app that lets you create stunning badges for your websites, GitHub READMEs, and documentation.
🌟 Key Features
🖌️ 14 diverse color options including vibrant neon colors
🔤 Custom text input for label and message
🖼️ Support for 2000+ logos via Simple Icons
🔗 Clickable link integration
👁️ Real-time preview
💻 Ready-to-use HTML code generation
📝 How to Use
Label - Enter the text to display on the left side of the badge (e.g., "Discord", "Version", "Status")
Message - Enter the text to display on the right side of the badge
Logo - Type the name of a logo provided by Simple Icons (e.g., "discord", "github")
Style - Choose the shape of your badge (flat, plastic, for-the-badge, etc.)
Color Settings - Select background color, label background color, and logo color
Link - Enter the URL that the badge will link to when clicked
✅ Use Cases
Add social media links to your GitHub project README
Display version information or download links on your website
Include tech stack badges in blog posts
Show status indicators in documentation (e.g., "in development", "stable")
💡 Tips
Click on any of the prepared examples to automatically fill in all settings
Copy the generated HTML code and paste directly into your website or blog
HTML works in GitHub READMEs, but if you prefer Markdown, use the equivalent Markdown image-link syntax
👨‍💻 Tech Stack
This app was built using Gradio and leverages the shields.io API to generate badges. Its simple UI makes it accessible for everyone!
🔗 openfree/Badge
✨ Available under MIT License - feel free to use and modify.
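
As a rough sketch of the snippet the app generates, BadgeCraft wraps shields.io's static badge endpoint; every value below is a placeholder:

```python
# Build a shields.io static badge URL and the kind of HTML BadgeCraft emits.
# All values here are placeholders.
label, message, color = "Discord", "Join", "5865F2"
badge_url = (
    f"https://img.shields.io/badge/{label}-{message}-{color}"
    "?logo=discord&style=for-the-badge"
)
html = f'<a href="https://discord.gg/example"><img src="{badge_url}" alt="{label} badge"></a>'
print(html)
```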

reacted to S-Dreamer's post with 👍 · about 2 months ago
PiFlash
A simple web-based tool to flash Raspberry Pi OS images to your SD cards. No additional software required!
S-Dreamer/piflash

reacted to hesamation's post with ❤️ · about 2 months ago
Google published a 69-page whitepaper on Prompt Engineering and its best practices, a must-read if you are using LLMs in production:
> zero-shot, one-shot, few-shot
> system prompting
> chain-of-thought (CoT)
> ReAct
> code prompting
> best practices

LINK: https://www.kaggle.com/whitepaper-prompt-engineering

reacted to thomwolf's post with ❤️ · 2 months ago
The new DeepSite space is really insane for vibe-coders
enzostvs/deepsite
With the wave of vibe-coding-optimized LLMs like the latest open-source DeepSeek model (version V3-0324), you can basically prompt out-of-the-box and create any app and game in one-shot.
It feels so powerful to me, no more complex framework or under-the-hood prompt engineering to have a working text-to-app tool.
AI is eating the world and *open-source* AI is eating AI itself!
PS: and even more meta is that the DeepSite app and DeepSeek model are both fully open source => time to start recursively improving?
PPS: you still need some inference hosting unless you're running the 600B param model at home, so check the very nice list of HF Inference Providers for this model: deepseek-ai/DeepSeek-V3-0324
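
For that hosting note, here is a sketch of calling the model through HF Inference Providers with huggingface_hub's InferenceClient; provider routing and availability depend on your account settings:

```python
from huggingface_hub import InferenceClient

# Query DeepSeek-V3-0324 via an Inference Provider instead of self-hosting the ~600B-param model.
client = InferenceClient(model="deepseek-ai/DeepSeek-V3-0324")
resp = client.chat_completion(
    messages=[{"role": "user", "content": "Build a single-page snake game in plain HTML/JS."}],
    max_tokens=1024,
)
print(resp.choices[0].message.content)
```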

reacted to aiqtech's post with 🤝😔🤯👍 · 2 months ago
🤗 Hug Contributors
Hugging Face Contributor Dashboard 👨‍💻👩‍💻
aiqtech/Contributors-Leaderboard
📊 Key Features
Contributor Activity Tracking: Visualize yearly and monthly contributions through interactive calendars
Top 100 Rankings: Provide rankings based on models, spaces, and dataset contributions
Detailed Analysis: Analyze user-specific contribution patterns and influence
Visualization: Understand contribution activities at a glance through intuitive charts and graphs
🌟 Core Visualization Elements
Contribution Calendar: Track activity patterns with GitHub-style heatmaps
Radar Chart: Visualize balance between models, spaces, datasets, and activity levels
Monthly Activity Graph: Identify most active months and patterns
Distribution Pie Chart: Analyze proportion by contribution type
🏆 Ranking System
Rankings based on overall contributions, spaces, and models
Automatic badges for top 10, 30, and 100 contributors
Ranking visualization to understand your position in the community
💡 How to Use
Select a username from the sidebar or enter directly
Choose a year to view specific period activities
Select desired items from models, datasets, and spaces
View comprehensive contribution activities in the detailed dashboard
🚀 Expected Benefits
Provide transparency for Hugging Face community contributors' activities
Motivate contributions and energize the community
Recognize and reward active contributors
Visualize contributions to the open AI ecosystem