---
title: MCP Blockly
emoji: 🧩
colorFrom: blue
colorTo: purple
sdk: docker
pinned: true
tags:
- mcp-in-action-track-creative
- mcp-in-action-track-consumer
- OpenAI
- Blockly
- Education
- MCP
short_description: AI that makes MCP servers with block-code.
---
# MCP Blockly
MCP Blockly introduces a new kind of MCP development experience: a block-based Gradio 6 MCP server builder, powered by an autonomous AI agent that can understand your entire workspace, reason about the structure of your MCP tool, and act on it directly. It is one of the first publicly available AI agents that can perform multi-step editing using block-code. Give it a goal and it creates a plan, modifies or rebuilds your logic, tests the tool with real inputs, and checks the results. This goes far beyond suggestion-driven assistance. The agent can create blocks, repair or remove broken logic, construct complex nested structures, and even deploy the finished MCP server for you. While the interface feels familiar (intentionally resembling Scratch), the result is a visual editor that becomes a fully interactive, agent-powered development platform.
Most educational tools demonstrate concepts passively, but MCP Blockly supports learning through an active, hands-on environment. Studies consistently show that students develop deeper understanding and longer term retention when they learn by doing, and MCP Blockly applies this idea by letting users experiment with real MCP logic while having an AI partner that can step in when needed, show alternative structures, edit the workspace to illustrate concepts, or help the learner understand how a tool should be built.
Vibe coding makes it easy to lean entirely on an AI assistant without gaining any real understanding, but in the context of MCP servers this often drops beginners, especially those coming from Scratch, into an unfamiliar world where they rely on generated code that feels like magic rather than something they can reason about. MCP Blockly takes a different approach by letting the AI work with you in a transparent, structured environment that shows how each block fits into the overall logic. This makes the assistant a guide rather than a crutch and helps learners develop genuine intuition about MCP development instead of staying dependent on vibe coded projects they don't understand.
This project uses the OpenAI Responses API for easy MCP integration, along with their excellent proprietary models which help the agent make smarter decisions.
You can read the announcement post on LinkedIn, along with the article about the project!
YouTube Video (click on image)
## Setup
We ask for your Hugging Face API Key (optional, but recommended), which allows you to deploy your MCP server as a real, live Gradio 6 MCP server on Hugging Face Spaces. The system creates a new Space and uploads your tool automatically. Once deployed, the tool acts as a real MCP endpoint that other AI systems can call. Without this key, you can still build and test your tool in this Space, but you will not be able to deploy unless you manually upload the generated code.
Set this key through the welcome menu, or under File > API Keys, before using features that depend on it!
## What This Does
MCP Blockly lets you build Gradio 6 MCP servers using a block-based interface, perfect for students and newcomers stepping into AI development. The core building happens in the workspace: you define your MCP inputs, arrange your logic with blocks, and choose what your server will return. Every change you make is reflected in live Python code on the side. The generator handles function signatures and MCP boilerplate automatically, so you never have to worry about syntax or configuration. Everything stays valid and synchronized.
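As a rough illustration of what the live code panel produces, a small block arrangement might compile into a typed Python function like the one below. The function name, parameters, and block mapping here are hypothetical, not the project's actual output:

```python
def greet(name: str, excited: bool) -> str:
    """Greet someone. Exposed as an MCP tool by the generated server."""
    suffix = "!" if excited else "."   # from a conditional block
    return "Hello, " + name + suffix   # from a text-join block

# The generator would also emit the Gradio boilerplate around this, roughly:
#   import gradio as gr
#   demo = gr.Interface(fn=greet, inputs=["text", "checkbox"], outputs="text")
#   demo.launch(mcp_server=True)
```

Because the blocks always map to a valid function body, the generated file stays runnable after every edit.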
The interface has three main areas. The workspace on the right is where you build by dragging and connecting blocks. On the left are two tabs for working with your project: the Testing tab, and an AI Assistant tab.
Additionally, there are three dropdowns in the top bar to aid you in development. The File menu handles creating new projects, opening existing ones, and downloading your work. You can download just the generated Python code or the entire project as a JSON file. The Edit menu provides standard undo, redo, and a cleanup button to reorganize blocks. The Examples menu includes pre-built projects you can load to understand how to structure your own.
Once your blocks are in place, the Testing tab makes testing simple. When you refresh it, it automatically generates input fields based on your parameters, so you can run the MCP server logic instantly. You enter values, submit, and see the outputs appear. This kind of immediate feedback helps learners understand how data moves through their tool and builds intuition about how AI tools work.
The AI Assistant tab lets you build and refine your project through conversation. You can think of it as a conversational partner that helps you shape your MCP tool step by step. It's always there to explain concepts or code to you, help you develop your tool, and ensure your code runs without issues. The assistant can help you:
- Create or adjust your blocks.
- Introduce variables and expressions when you need them.
- Deploy your completed Gradio 6 MCP server to Hugging Face Spaces.
- Call the deployed tool to verify that everything works end to end.
- And much more!
The assistant is meant to collaborate with you, not to take over. It may occasionally misunderstand complex structures, and you can always correct or rearrange blocks manually. This keeps the experience grounded and aligned with learning and exploration.
## Why This Matters
The goal of MCP Blockly is to empower the next generation of AI developers. By providing an environment that is both powerful and educational, we can do more than just help people build tools; we can help them build a conceptual understanding of how those tools are built.
When a learner sees the AI assistant construct a program block by block, they are not just getting a solution: they are getting a live, narrated demonstration of the logical steps required to solve a problem. They can intervene at any time, modify the blocks themselves, and use the interactive testing tab to see how their changes affect the outcome.
This creates a powerful feedback loop that builds true, lasting intuition. It's a new way to learn, a new way to build, and a small step toward making the incredible power of AI accessible to everyone.
## How It Works
This project expands Blockly with brand new features designed to make MCP development smoother and more intuitive. These changes range from user-experience improvements to the low-level hooks that allow the AI agent to perform its actions.
### Backend
The core mechanism is a recursive Python code generator. When you connect blocks, the system walks through your structure and compiles it into Python code. Text blocks produce string literals, math operations produce arithmetic expressions, conditionals produce if/elif/else branches, and loops produce iteration logic. Your top-level MCP block becomes a function with typed parameters and return values.
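A minimal sketch of what such a recursive generator can look like is below. The block-type names and tree shape are assumptions for illustration, not the project's actual schema:

```python
# Minimal sketch of a recursive block-to-Python generator. Each block type
# compiles to a Python fragment; nested blocks compile bottom-up.
def gen(block) -> str:
    t = block["type"]
    if t == "text":
        return repr(block["value"])               # text block -> string literal
    if t == "number":
        return str(block["value"])                # number block -> numeric literal
    if t == "math":
        left, right = gen(block["left"]), gen(block["right"])
        return f"({left} {block['op']} {right})"  # math block -> arithmetic expr
    raise ValueError(f"unknown block type: {t}")

# A nested math expression compiles by walking the tree recursively:
tree = {"type": "math", "op": "+",
        "left": {"type": "number", "value": 1},
        "right": {"type": "math", "op": "*",
                  "left": {"type": "number", "value": 2},
                  "right": {"type": "number", "value": 3}}}
print(gen(tree))  # (1 + (2 * 3))
```

The real generator extends this pattern to statements as well, so conditionals, loops, and the top-level MCP block all emit their own Python constructs.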
When you test your tool, the generated code gets sent to a backend service that handles local testing and execution. It spins up a thread, parses your function signature to see what types your parameters are, and generates input fields accordingly. Finally, it displays your results in the Gradio UI.
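One way to derive input fields from a function signature, using the standard `inspect` module, is sketched below; the widget mapping and the sample function are illustrative assumptions, not the project's exact code:

```python
import inspect

# Assumed mapping from Python annotations to UI widget kinds.
WIDGETS = {str: "textbox", int: "number", float: "number", bool: "checkbox"}

def input_fields(fn):
    """Return (parameter name, widget kind) pairs for a function's signature."""
    fields = []
    for name, param in inspect.signature(fn).parameters.items():
        widget = WIDGETS.get(param.annotation, "textbox")  # default to text input
        fields.append((name, widget))
    return fields

def word_count(text: str, min_length: int) -> int:
    return sum(1 for w in text.split() if len(w) >= min_length)

print(input_fields(word_count))  # [('text', 'textbox'), ('min_length', 'number')]
```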
All of this runs through one unified web interface. The frontend communicates with both backends over HTTP for regular operations and Server-Sent Events for real-time AI updates. Your API keys are stored in localStorage and used to authenticate requests to OpenAI and Hugging Face during your session. They are never saved in the Python backend nor printed, and are immediately disposed of after being used.
### AI Assistant
The assistant works by reading your workspace in a custom domain-specific language (DSL) created for this project, allowing the AI to interact with a normally UI-based environment. Each block gets a unique ID marked with special delimiters, and its structure is described as nested function calls. For example, a text block might look like ↿ abc123 ↾ text(inputs(TEXT: "hello")), telling the AI what the block does and how it's configured. When you send a message, the AI receives your entire workspace in this format as context. It understands what operations are possible: it can construct new blocks described in the same nested syntax, request existing blocks be deleted by their ID, create variables, and more. These requests come back to your browser as instructions, which are executed immediately to update the visual workspace.
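The serialization step can be sketched like this. The delimiters follow the example above, but the field layout and recursion rules are assumptions for illustration:

```python
import json

def to_dsl(block) -> str:
    """Serialize a block (and its nested inputs) into the ID-delimited DSL."""
    inner = ", ".join(
        f"{name}: {to_dsl(v)}" if isinstance(v, dict)  # nested block -> recurse
        else f"{name}: {json.dumps(v)}"                # literal value -> quoted
        for name, v in block["inputs"].items()
    )
    return f"\u21bf {block['id']} \u21be {block['type']}(inputs({inner}))"

text_block = {"id": "abc123", "type": "text", "inputs": {"TEXT": "hello"}}
print(to_dsl(text_block))  # ↿ abc123 ↾ text(inputs(TEXT: "hello"))
```

Because every block carries its ID in the serialized form, the model can refer back to any block unambiguously when it requests an edit.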
The agent does not simply think of blocks individually: it understands the complete structure of the workspace, including multi-branch blocks such as conditionals and nested logic constructs. For example, when the DSL describes an if block, the agent knows that it contains a condition input, a do branch, and an else branch, each of which expects different kinds of sub-blocks. The agent can independently modify any of these branches, insert new blocks into the correct slot, or replace just one part of a larger structure while preserving the rest. This structural awareness lets the assistant work reliably with arbitrarily deep or complex logic, because it always understands which positions in the workspace are valid targets for a given operation.
The assistant follows a multi-step planning pipeline whenever it works on the project. Each request begins with a high-level plan, followed by pseudocode, then a concrete checklist of operations. The agent executes each operation one at a time, updating its understanding of the workspace after every change by rereading the DSL. This loop continues for several iterations until the goal is complete or no further progress can be made. Because the assistant evaluates the workspace after each modification, it can adapt to new block layouts, recover from earlier mistakes, and take long sequences of small steps that ultimately create or transform complex logic. This approach often allows the agent to reliably perform edits that would be impossible to express in a single instruction.
MCP Blockly includes an error-catching layer that lets the AI correct its own mistakes while editing your workspace. If the assistant tries to place a block where it can't go, tries to use a tool incorrectly, or writes incorrect commands, the system returns a structured error that the agent reads and adapts to in the next step. It can retry operations, adjust its plan, and repair the workspace without always requiring the user to intervene. This allows multi-step goals to complete even when the initial attempt wasn't perfect.
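The structured-error idea can be illustrated with a small sketch: an invalid placement returns an error object the agent can read and react to, rather than failing silently. The slot names and error shape here are illustrative assumptions:

```python
def try_place(workspace, block, slot):
    """Attempt to place a block; on failure, return a machine-readable error."""
    if slot not in workspace["slots"]:
        return {"ok": False,
                "error": f"no such slot: {slot}",
                "valid_slots": sorted(workspace["slots"])}  # hint for the retry
    workspace["slots"][slot] = block
    return {"ok": True}

ws = {"slots": {"DO": None, "ELSE": None}}
first = try_place(ws, "print_block", "THEN")  # wrong slot name
# The agent reads first["error"] and first["valid_slots"], then retries:
retry = try_place(ws, "print_block", "DO")
```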
When deployment happens, the latest generated Python code is packaged with its dependencies and uploaded to Hugging Face Spaces. The system waits for the space to build (typically 1-2 minutes), then registers it as a live MCP server. From that point, the AI can call your deployed Gradio 6 MCP server directly with real data, getting results from the production version rather than the local one.
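A minimal sketch of this deployment step using `huggingface_hub` is below. The file names, packaging, and the absence of build-polling are simplifications; the project's real logic may differ:

```python
def deploy(code: str, requirements: str, repo_id: str, token: str, api=None):
    """Upload generated code to a new Gradio Space and return its URL."""
    if api is None:
        from huggingface_hub import HfApi  # imported lazily for testability
        api = HfApi(token=token)
    # Create the Space (idempotent thanks to exist_ok), then upload the files.
    api.create_repo(repo_id, repo_type="space", space_sdk="gradio", exist_ok=True)
    for name, content in [("app.py", code), ("requirements.txt", requirements)]:
        api.upload_file(path_or_fileobj=content.encode(),
                        path_in_repo=name, repo_id=repo_id, repo_type="space")
    return f"https://huggingface.co/spaces/{repo_id}"
```

In practice the system would also wait for the Space build to finish (the 1-2 minutes mentioned above) before registering the endpoint as a live MCP server.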