
Sorbonne Université
AI & ML interests
Welcome to the SCAI Hugging Face Space! 🎓🤖 Join us in our mission to advance interdisciplinary research and education in AI, fostering collaboration between researchers, students, and industry partners. Together, we're shaping the future of artificial intelligence! 🚀🔬🌟
🔍 Vision: Dive into image recognition and perception, driving advances in mathematics, computer science, and robotics.
🧠 Explainability: Enhance the transparency of complex systems, with a focus on health and medicine.
🌍 Ethics: Develop ethical AI solutions for climate, the environment, and the universe, ensuring responsible and sustainable practices.
📚 Digital Humanities: Discover how AI transforms our understanding of history, literature, and the social sciences.
#SorbonneAI #Innovation #EthicalAI #DigitalHumanities
SorbonneUniversity's activity
We've kept registration open until the end of this week, so join and let's build cool stuff together as a community: ysharma/gradio-hackathon-registration-2025
Gradio now supports MCP! If you want to convert an existing Space, like this one hexgrad/Kokoro-TTS, so that you can use it with Claude Desktop / Cursor / Cline / TinyAgents / or any LLM that supports MCP, here's all you need to do:
1. Duplicate the Space (in the Settings tab)
2. Upgrade the Gradio sdk_version to 5.28 (in the README.md)
3. Set mcp_server=True in launch()
4. (Optionally) add docstrings to the function so that the LLM knows how to use it, like this:
def generate(text, speed=1):
    """
    Convert text to speech audio.

    Parameters:
        text (str): The input text to be converted to speech.
        speed (float, optional): Playback speed of the generated speech.
    """
    ...  # TTS implementation omitted in this excerpt
That's it! Now your LLM will be able to talk to you 🤯
If you don't already know, Gradio is an open-source Python library used to build interfaces for machine learning models. Beyond just creating UIs, Gradio also exposes API capabilities, and now Gradio apps can be launched as Model Context Protocol (MCP) servers for LLMs.
If you already know how to use Gradio, there are only two additional things you need to do:
* Add standard docstrings to your function (these will be used to generate the descriptions for your tools for the LLM)
* Set mcp_server=True in launch()
Here's a complete example (make sure you already have the latest version of Gradio installed):
import gradio as gr

def letter_counter(word, letter):
    """Count the occurrences of a specific letter in a word.

    Args:
        word: The word or phrase to analyze
        letter: The letter to count occurrences of

    Returns:
        The number of times the letter appears in the word
    """
    return word.lower().count(letter.lower())

demo = gr.Interface(
    fn=letter_counter,
    inputs=["text", "text"],
    outputs="number",
    title="Letter Counter",
    description="Count how many times a letter appears in a word",
)

demo.launch(mcp_server=True)
This is a very simple example, but you can add the ability to generate Ghibli images or speak emotions to any LLM that supports MCP. Once you have an MCP server running locally, you can copy-paste the same app to host it on [Hugging Face Spaces](https://huggingface.co/spaces/) as well.
All free and open-source of course! Full tutorial: https://www.gradio.app/guides/building-mcp-server-with-gradio
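And since every Gradio app also exposes a plain HTTP API, you can sanity-check the same app programmatically with the gradio_client library; a minimal sketch, assuming the letter-counter app above is running locally on the default port:

from gradio_client import Client

# Connect to the locally running letter_counter app.
client = Client("http://127.0.0.1:7860")

# gr.Interface exposes its function under the default endpoint name "/predict".
result = client.predict("strawberry", "r", api_name="/predict")
print(result)  # expected output: 3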
5 years ago, we launched Gradio as a simple Python library to let researchers at Stanford easily demo computer vision models with a web interface.
Today, Gradio is used by >1 million developers each month to build and share AI web apps. This includes some of the most popular open-source projects of all time, like Automatic1111, Fooocus, Oobabooga’s Text WebUI, Dall-E Mini, and LLaMA-Factory.
How did we get here? How did Gradio keep growing in the very crowded field of open-source Python libraries? I get this question a lot from folks who are building their own open-source libraries. This post distills some of the lessons that I have learned over the past few years:
1. Invest in good primitives, not high-level abstractions
2. Embed virality directly into your library
3. Focus on a (growing) niche
4. Your only roadmap should be rapid iteration
5. Maximize ways users can consume your library's outputs
1. Invest in good primitives, not high-level abstractions
When we first launched Gradio, we offered only one high-level class (gr.Interface), which created a complete web app from a single Python function. We quickly realized that developers wanted to create other kinds of apps (e.g. multi-step workflows, chatbots, streaming applications), but as we started listing out the apps users wanted to build, we realized what we needed to do:
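For readers who know Gradio today, that answer turned out to be lower-level building blocks. Here's a minimal sketch of gr.Blocks, the primitives-based API that gr.Interface is now built on top of:

import gradio as gr

# gr.Blocks exposes layout and event primitives directly,
# instead of generating a whole app from a single function.
with gr.Blocks() as demo:
    name = gr.Textbox(label="Name")
    greeting = gr.Textbox(label="Greeting")
    greet_btn = gr.Button("Greet")
    greet_btn.click(fn=lambda n: f"Hello, {n}!", inputs=name, outputs=greeting)

demo.launch()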
Read the rest here: https://x.com/abidlabs/status/1907886

Contextualized Topic Coherence Metrics
Demographic User Modeling for Social Robotics with Multimodal Pre-trained Models
USER-VLM 360: Personalized Vision Language Models with User-aware Tuning for Social Human-Robot Interactions

I'm excited to share that Gradio 5 will launch in October with improvements across security, performance, SEO, design (see the screenshot for Gradio 4 vs. Gradio 5), and user experience, making Gradio a mature framework for web-based ML applications.
Gradio 5 is currently in beta, so if you'd like to try it out early, please refer to the instructions below:
---------- Installation -------------
Gradio 5 requires Python 3.10 or higher, so if you are running Gradio locally, please ensure that you have Python 3.10+, or download it here: https://www.python.org/downloads/
* Locally: If you are running Gradio locally, simply install the release candidate with pip install gradio --pre
* Spaces: If you would like to update an existing Gradio Space to use Gradio 5, simply update the sdk_version to 5.0.0b3 in the README.md file on Spaces.

In most cases, that's all you have to do to run Gradio 5.0. Start your Gradio application and you should see your app running with a fresh new UI.
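Either way, you can confirm which version is installed with a quick check:

import gradio as gr

# Should print a 5.x version (e.g. 5.0.0b3) after upgrading.
print(gr.__version__)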
-----------------------------
For more information, please see: https://github.com/gradio-app/gradio/issues/9463
We're working on making that a lot easier with Gradio and will unveil something new on June 6th: https://www.youtube.com/watch?v=44vi31hehw4&ab_channel=HuggingFace

-----------------------------------------------------------------------
If you're an ML researcher / scientist, you probably don't need much convincing to use open models instead of closed APIs -- open models give you reproducibility and let you deeply investigate the model's behavior.
But what if you are a software engineer building products on top of LLMs? I'd argue that open models are a much better option even if you are using them as APIs. For at least 3 reasons:
1) The most obvious reason is the reliability of your product. Relying on a closed API means that your product has a single point of failure. On the other hand, at least 7 different API providers already offer Llama 3 70B, and there are libraries that abstract over these providers, so a single request can be routed to whichever provider has the best availability or latency (see the failover sketch after this list).
2) Another benefit is consistency if you eventually go local. If your product takes off, it will be more economical and lower latency to run a dedicated inference endpoint in your VPC than to call external APIs. If you've started with an open-source model, you can always deploy the same model locally; you don't need to modify prompts or change any surrounding logic to get consistent behavior. Minimize your technical debt from the beginning.
3) Finally, open models give you much more flexibility. Even if you keep using APIs, you might want to trade off latency vs. cost, or use APIs that support batches of inputs, etc. Because different API providers have different infrastructure, you can use the API provider that makes the most sense for your product -- or you can even use multiple API providers for different users (free vs. paid) or different parts of your product (priority features vs. nice-to-haves).
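To make point 1 concrete, here is a minimal sketch of client-side failover across OpenAI-compatible providers. The provider URLs, keys, and model name are placeholders, not real endpoints:

from openai import OpenAI

# Hypothetical OpenAI-compatible endpoints, all serving the same open model.
PROVIDERS = [
    {"base_url": "https://provider-a.example/v1", "api_key": "KEY_A"},
    {"base_url": "https://provider-b.example/v1", "api_key": "KEY_B"},
]

def chat(prompt: str) -> str:
    """Try each provider in order; return the first successful completion."""
    last_error = None
    for provider in PROVIDERS:
        try:
            client = OpenAI(base_url=provider["base_url"], api_key=provider["api_key"])
            response = client.chat.completions.create(
                model="meta-llama/Meta-Llama-3-70B-Instruct",
                messages=[{"role": "user", "content": prompt}],
            )
            return response.choices[0].message.content
        except Exception as err:  # provider down or rate-limited: try the next one
            last_error = err
    raise RuntimeError(f"All providers failed: {last_error}")

Because the providers share one OpenAI-compatible interface to the same open weights, no prompts or parsing logic need to change when a request fails over.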

Sometimes we realize that we need a Gradio component to build a cool application and demo, so we just build it. For example, we just added a new gr.ParamViewer component because we needed it to display information about Python & JavaScript functions in our documentation.
Of course, our users should be able to do the same thing for their machine learning applications, so that's why Gradio lets you build custom components, and publish them to the world 🔥
For example, here's a demo using the modal custom component:

import gradio as gr
from gradio_modal import Modal

with gr.Blocks() as demo:
    gr.Markdown("### Main Page")
    gr.Textbox("lorem ipsum " * 1000, lines=10)

    with Modal(visible=True) as modal:
        gr.Markdown("# License Agreement")

demo.launch()