Spaces: owenkaplinsky (Running)

Commit beb3ed2, committed by owenkaplinsky
Parent(s): 2d05ba7

Add HF deployment

Files changed:
- .gitignore +4 -1
- README.md +6 -6
- project/__pycache__/chat.cpython-311.pyc +0 -0
- project/__pycache__/test.cpython-311.pyc +0 -0
- project/chat.py +157 -8
- project/dist/bundle.js +0 -0
- project/dist/bundle.js.LICENSE.txt +0 -5
- project/dist/bundle.js.map +0 -0
- project/dist/index.html +0 -114
- project/src/generators/chat.js +1 -1
- project/src/index.html +15 -8
- project/src/index.js +23 -14
- project/test.py +30 -16
- requirements.txt +3 -0
.gitignore CHANGED

@@ -1,4 +1,7 @@
 .env
 node_modules
 .github
-.gitignore
+.gitignore
+__pycache__/
+*.pyc
+dist/
README.md CHANGED

@@ -6,11 +6,11 @@ MCP Blockly is a visual programming environment for building AI tools. Instead o
 
 MCP Blockly lets you build Model Context Protocol (MCP) servers using a block-based interface. You define what inputs your tool needs, add blocks that perform operations (calling APIs, parsing data, executing language models), and specify what your tool outputs. The system generates Python code from your block arrangement and provides a testing interface to verify your work.
 
-The interface has three main areas. The canvas on the left is where you build by dragging and connecting blocks. On the right are two tabs for working with your project: the
+The interface has three main areas. The canvas on the left is where you build by dragging and connecting blocks. On the right are two tabs for working with your project: the Testing tab, and an AI Assistant tab.
 
-In the
+In the Testing tab, you see your generated Python code alongside a test interface. The interface automatically creates input fields matching your tool's parameters. After you arrange your blocks, click Refresh to update the test interface, enter values, and click Submit to run your code. Results appear in the output fields.
 
-The AI
+The AI Assistant tab lets you build and refine your project through conversation. It understands your workspace and can turn natural instructions into real changes inside the editor.
 
 The assistant can:
 - Create new blocks from plain language. Describe what you want to add, and it builds the correct structure automatically.

@@ -51,7 +51,7 @@ After that, it will open a tab in your browser and you can start building!
 
 ## How It Works
 
-The system has three main components: the frontend Blockly editor, the backend Python services, and the AI
+The system has three main components: the frontend Blockly editor, the backend Python services, and the AI Assistant engine.
 
 When you arrange blocks in the editor, change listeners trigger code generation. The JavaScript generator traverses your block tree and outputs Python code that represents your workflow. Each block type has a corresponding generator function that knows how to output Python for that block. These functions compose recursively, building the complete function definition from your block arrangement. The generated code is sent to the backend via HTTP POST and stored in memory.
 
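The "stored in memory" handoff described above is just an HTTP POST whose body lands in a module-level variable (chat.py keeps its copy in `latest_blockly_chat_code`). A minimal sketch of the pattern; the route and field names here are illustrative, not the project's exact ones:

```python
from fastapi import FastAPI, Request

app = FastAPI()
latest_blockly_code = ""  # module-level cache of the most recently generated code

@app.post("/update_code")
async def update_code(request: Request):
    """The frontend POSTs freshly generated Python after every workspace change."""
    global latest_blockly_code
    data = await request.json()
    latest_blockly_code = data.get("code", "")
    return {"received": True}
```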
@@ -59,10 +59,10 @@ Blocks dynamically manage their input and output ports through Blockly's mutator
 
 Code execution happens in a sandboxed Python environment. User code is executed with restricted builtins and a clean state for each run. The system captures return values and displays them in the test interface.
 
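The sandbox itself is not part of this diff; a minimal sketch of the approach that paragraph describes (restricted builtins, a fresh namespace per run, captured return value), with the function name `generated_tool` assumed for illustration:

```python
def run_generated_code(code_str, user_inputs):
    # Expose only a small, explicit set of builtins to the generated code.
    safe_builtins = {"len": len, "str": str, "int": int, "float": float,
                     "range": range, "abs": abs, "print": print}
    namespace = {"__builtins__": safe_builtins}  # clean state for every run
    exec(code_str, namespace)                    # defines the generated tool function
    result = namespace["generated_tool"](**user_inputs)  # assumed entry-point name
    return result                                # captured and shown in the test UI
```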
-The AI
+The AI Assistant component is the sophisticated heart of the system. It continuously monitors the current workspace state and code. When you send a message, the system formats your entire block structure into a readable representation and includes it in the context sent to OpenAI. The model receives not just your question but a complete understanding of what you've built. The system includes a detailed system prompt that explains MCP concepts, the block syntax, and what actions the model can perform.
 
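The "readable representation" is the text DSL emitted by project/src/generators/chat.js. A sketch of how that context could be folded into the conversation before each OpenAI call; the identifiers here are illustrative, not the project's:

```python
def build_messages(system_prompt, workspace_dsl, history, user_message):
    """Assemble the chat transcript plus the current workspace for one model call."""
    messages = [{"role": "system", "content": system_prompt}]
    for user_turn, assistant_turn in history:
        messages.append({"role": "user", "content": user_turn})
        messages.append({"role": "assistant", "content": assistant_turn})
    # The block structure is re-sent on every request so the model always
    # sees exactly what is on the canvas right now.
    messages.append({"role": "user",
                     "content": f"Current workspace:\n{workspace_dsl}\n\n{user_message}"})
    return messages
```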
 Based on the model's response, the system recognizes three special commands: run to execute your MCP with sample inputs, delete to remove a block by ID, and create to add new blocks to your workspace. When the model issues these commands, they're executed immediately. For block modifications, the system uses Server-Sent Events to stream commands back to the frontend, which creates or deletes blocks in real time while you watch. This maintains real-time synchronization between the chat interface and the visual editor.
 
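The Server-Sent Events channel is likewise outside this diff; a sketch of the general shape using FastAPI's StreamingResponse, where the endpoint name and command payload are assumptions:

```python
import asyncio
import json
from fastapi import FastAPI
from fastapi.responses import StreamingResponse

app = FastAPI()
command_queue: asyncio.Queue = asyncio.Queue()  # filled by the chat backend

@app.get("/block_events")
async def block_events():
    async def stream():
        while True:
            cmd = await command_queue.get()        # e.g. {"action": "create", "block": {...}}
            yield f"data: {json.dumps(cmd)}\n\n"   # one SSE frame per block command
    return StreamingResponse(stream(), media_type="text/event-stream")
```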
-The AI
+The AI Assistant can execute multiple actions per conversation turn. If the model decides it needs to run your code to see the result before suggesting improvements, it does that automatically. If it needs to delete a broken block and create a replacement, it performs both operations and then reports back with what happened. This looping continues for up to five consecutive iterations per user message, allowing the AI to progressively refine your blocks without requiring you to send multiple messages.
 
 API keys are managed through environment variables set at runtime. The system uses Gradio to automatically generate user interfaces based on the function signatures in your generated code, creating input and output fields that match your tool's parameters.
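Deriving input fields from a function signature is straightforward with `inspect` plus Gradio; a sketch of the idea (the repo's own `build_interface` in project/test.py takes a different route, pre-creating a fixed pool of up to 10 fields and revealing them on Refresh):

```python
import inspect
import gradio as gr

def make_test_ui(tool_fn):
    # One textbox per parameter of the generated function, one for the result.
    params = inspect.signature(tool_fn).parameters
    inputs = [gr.Textbox(label=name) for name in params]
    return gr.Interface(fn=tool_fn, inputs=inputs, outputs=gr.Textbox(label="output"))

# Usage: make_test_ui(generated_tool).launch()
```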
project/__pycache__/chat.cpython-311.pyc DELETED
Binary file (40.1 kB)

project/__pycache__/test.cpython-311.pyc DELETED
Binary file (13.5 kB)
project/chat.py CHANGED

@@ -17,8 +17,9 @@ from colorama import Fore, Style
 # Initialize OpenAI client (will be updated when API key is set)
 client = None
 
-# Store API
+# Store API keys in memory for this process
 stored_api_key = ""
+stored_hf_key = ""
 
 # Global variable to store the latest chat context
 latest_blockly_chat_code = ""

@@ -70,16 +71,23 @@ async def update_chat(request: Request):
 
 @app.post("/set_api_key_chat")
 async def set_api_key_chat(request: Request):
-    """Receive API
-    global stored_api_key
+    """Receive API keys from frontend and store them"""
+    global stored_api_key, stored_hf_key
     data = await request.json()
     api_key = data.get("api_key", "").strip()
+    hf_key = data.get("hf_key", "").strip()
 
-    # Store in memory and set environment
-
-
-    print(f"[CHAT API KEY] Set OPENAI_API_KEY in chat.py environment")
+    # Store in memory and set environment variables for this process
+    if api_key:
+        stored_api_key = api_key
+        os.environ["OPENAI_API_KEY"] = api_key
+        print(f"[CHAT API KEY] Set OPENAI_API_KEY in chat.py environment")
+
+    if hf_key:
+        stored_hf_key = hf_key
+        os.environ["HUGGINGFACE_API_KEY"] = hf_key
+        print(f"[CHAT HF KEY] Set HUGGINGFACE_API_KEY in chat.py environment")
 
     return {"success": True}
 
 def execute_mcp(mcp_call):
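For reference, the endpoint above can be exercised outside the editor with any HTTP client; a quick check (the port and key values are placeholders):

```python
import requests

resp = requests.post(
    "http://127.0.0.1:8080/set_api_key_chat",   # port is a placeholder
    json={"api_key": "sk-...", "hf_key": "hf_..."},
)
print(resp.json())  # expected: {"success": True}
```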
@@ -555,6 +563,111 @@ async def variable_result(request: Request):
 
     return {"received": True}
 
+def deploy_to_huggingface(space_name):
+    """Deploy the generated MCP code to a Hugging Face Space"""
+    global stored_hf_key
+
+    if not stored_hf_key:
+        return "[DEPLOY ERROR] No Hugging Face API key configured. Please set it in File > Keys."
+
+    try:
+        from huggingface_hub import HfApi
+    except ImportError:
+        return "[DEPLOY ERROR] huggingface_hub not installed. Run: pip install huggingface_hub"
+
+    try:
+        api = HfApi(token=stored_hf_key)
+
+        # Get username from token
+        user_info = api.whoami()
+        username = user_info["name"]
+        repo_id = f"{username}/{space_name}"
+
+        print(f"[DEPLOY] Creating HF Space: {repo_id}")
+
+        # Create the Space
+        api.create_repo(
+            repo_id=repo_id,
+            repo_type="space",
+            space_sdk="gradio",
+            private=False,
+        )
+
+        print(f"[DEPLOY] Space created. Uploading files...")
+
+        # Get the actual generated Python code from test.py (not the Blockly DSL)
+        python_code = ""
+        try:
+            resp = requests.get(f"http://127.0.0.1:{os.getenv('PORT', 8080)}/get_latest_code")
+            if resp.ok:
+                python_code = resp.json().get("code", "")
+        except Exception as e:
+            print(f"[DEPLOY WARN] Could not fetch Python code from test.py: {e}")
+
+        if not python_code.strip():
+            return "[DEPLOY ERROR] No generated Python code available. Create and test your tool first."
+
+        # Upload app.py with actual Python code
+        api.upload_file(
+            path_or_fileobj=python_code.encode(),
+            path_in_repo="app.py",
+            repo_id=repo_id,
+            repo_type="space",
+        )
+
+        # Create requirements.txt
+        requirements_content = """gradio
+openai
+requests
+huggingface_hub
+"""
+
+        api.upload_file(
+            path_or_fileobj=requirements_content.encode(),
+            path_in_repo="requirements.txt",
+            repo_id=repo_id,
+            repo_type="space",
+        )
+
+        # Create README.md with proper YAML front matter
+        readme_content = f"""---
+title: {space_name.replace('-', ' ').title()}
+emoji: 🚀
+colorFrom: purple
+colorTo: blue
+sdk: gradio
+app_file: app.py
+pinned: false
+---
+
+# {space_name}
+
+This is an MCP (Model Context Protocol) tool created with [Blockly MCP Builder](https://github.com/owenkaplinsky/mcp-blockly).
+
+The tool has been automatically deployed to Hugging Face Spaces and is ready to use!
+
+## About
+Created using Blockly MCP Builder - a visual programming environment for building AI tools.
+"""
+
+        api.upload_file(
+            path_or_fileobj=readme_content.encode("utf-8"),
+            path_in_repo="README.md",
+            repo_id=repo_id,
+            repo_type="space",
+        )
+
+        space_url = f"https://huggingface.co/spaces/{repo_id}"
+        print(f"[DEPLOY SUCCESS] Space deployed: {space_url}")
+
+        return f"[TOOL] Successfully deployed to Hugging Face Space!\n\n**Space URL:** {space_url}"
+
+    except Exception as e:
+        print(f"[DEPLOY ERROR] {e}")
+        import traceback
+        traceback.print_exc()
+        return f"[DEPLOY ERROR] Failed to deploy: {str(e)}"
+
 def create_gradio_interface():
     # Hardcoded system prompt
     SYSTEM_PROMPT = f"""You are an AI assistant that helps users build **MCP servers** using Blockly blocks.
@@ -658,6 +771,19 @@ in one call.
 You will be given the current variables that are in the workspace. Like the blocks, you will see:
 
 `varId | varName`
+
+---
+
+### Deploying to Hugging Face Spaces
+
+Once the user has tested and is happy with their MCP tool, you can deploy it to a live Hugging Face Space using the `deploy_to_huggingface` tool.
+
+**To deploy:**
+1. Ask the user for a name for their Space (e.g., "my-tool")
+2. Call the `deploy_to_huggingface` tool with that name
+3. The tool will create a new Space, upload the code, and return a live URL
+
+The deployed Space will be public and shareable with others.
 """
 
     tools = [

@@ -729,6 +855,23 @@ You will be given the current variables that are in the workspace. Like the bloc
             },
         }
     },
+    {
+        "type": "function",
+        "function": {
+            "name": "deploy_to_huggingface",
+            "description": "Deploy the generated MCP tool to a Hugging Face Space. Requires a Hugging Face API key to be set.",
+            "parameters": {
+                "type": "object",
+                "properties": {
+                    "space_name": {
+                        "type": "string",
+                        "description": "The name of the Hugging Face Space to create (e.g., 'my-tool')",
+                    },
+                },
+                "required": ["space_name"],
+            },
+        }
+    },
     ]
 
     def chat_with_context(message, history):

@@ -856,6 +999,12 @@
                 tool_result = execute_mcp(mcp_call)
                 result_label = "MCP Execution Result"
 
+            elif function_name == "deploy_to_huggingface":
+                space_name = function_args.get("space_name", "")
+                print(Fore.YELLOW + f"Agent deploying to Hugging Face Space: `{space_name}`." + Style.RESET_ALL)
+                tool_result = deploy_to_huggingface(space_name)
+                result_label = "Deployment Result"
+
             if tool_result:
                 print(Fore.YELLOW + f"[TOOL RESULT] {tool_result}" + Style.RESET_ALL)

@@ -899,7 +1048,7 @@
     # Create the standard ChatInterface
     demo = gr.ChatInterface(
         fn=chat_with_context,
-        title="
+        title="AI Assistant",
     )
 
     return demo
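The README's "up to five consecutive iterations" maps onto a loop around the tool dispatch shown in the hunks above. A condensed sketch, with the model name and the helper `dispatch_other_tool` assumed for illustration:

```python
import json

MAX_TOOL_ITERATIONS = 5

def run_turn(client, messages, tools):
    for _ in range(MAX_TOOL_ITERATIONS):
        response = client.chat.completions.create(
            model="gpt-4o",            # model name is a placeholder
            messages=messages, tools=tools)
        msg = response.choices[0].message
        if not msg.tool_calls:
            return msg.content          # plain reply, the turn is finished
        messages.append(msg)            # keep the tool-calling turn in context
        for call in msg.tool_calls:
            args = json.loads(call.function.arguments)
            if call.function.name == "deploy_to_huggingface":
                result = deploy_to_huggingface(args.get("space_name", ""))
            else:
                result = dispatch_other_tool(call.function.name, args)  # run / delete / create
            messages.append({"role": "tool", "tool_call_id": call.id,
                             "content": str(result)})
    return "Stopped after the tool-iteration limit."
```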
project/dist/bundle.js DELETED
The diff for this file is too large to render. See raw diff.

project/dist/bundle.js.LICENSE.txt DELETED

@@ -1,5 +0,0 @@
-/**
- * @license
- * Copyright 2019 Google LLC
- * SPDX-License-Identifier: Apache-2.0
- */

project/dist/bundle.js.map DELETED
The diff for this file is too large to render. See raw diff.
project/dist/index.html DELETED

@@ -1,114 +0,0 @@
-<!doctype html><html><head><meta charset="utf-8"/><title>Blockly MCP Builder</title><script defer="defer" src="bundle.js"></script></head><body><div id="topBar"><div id="titleSection"><h1>Blockly MCP Builder</h1></div><div id="divider"></div><div id="menuSection"><div class="menuGroup"><button class="menuButton">File</button><div class="dropdown"><a href="#" id="newButton" class="dropdownItem" data-action="new">New</a> <a href="#" id="loadButton" class="dropdownItem" data-action="open">Open</a> <a href="#" id="saveButton" class="dropdownItem" data-action="download">Download Project</a> <a href="#" id="downloadCodeButton" class="dropdownItem" data-action="downloadCode">Download Code</a> <a href="#" id="settingsButton" class="dropdownItem" data-action="downloadCode">API Key</a></div></div><div class="menuGroup"><button class="menuButton">Edit</button><div class="dropdown"><a href="#" id="undoButton" class="dropdownItem" data-action="undo">Undo</a> <a href="#" id="redoButton" class="dropdownItem" data-action="redo">Redo</a> <a href="#" id="cleanWorkspace" class="dropdownItem" data-action="cleanup">Clean up</a></div></div><div class="menuGroup"><button class="menuButton">Examples</button><div class="dropdown"><a href="#" id="weatherButton" class="dropdownItem" data-action="undo">Weather API</a> <a href="#" id="factButton" class="dropdownItem" data-action="undo">Fact Checker</a></div></div></div><div id="githubLink"><a href="https://github.com/owenkaplinsky/mcp-blockly" target="_blank" rel="noopener noreferrer"><svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24" fill="currentColor"><path d="M12 0c-6.626 0-12 5.373-12 12 0 5.302 3.438 9.8 8.207 11.387.599.111.793-.261.793-.577v-2.234c-3.338.726-4.033-1.416-4.033-1.416-.546-1.387-1.333-1.756-1.333-1.756-1.089-.745.083-.729.083-.729 1.205.084 1.839 1.237 1.839 1.237 1.07 1.834 2.807 1.304 3.492.997.107-.775.418-1.305.762-1.604-2.665-.305-5.467-1.334-5.467-5.931 0-1.311.469-2.381 1.236-3.221-.124-.303-.535-1.524.117-3.176 0 0 1.008-.322 3.301 1.23.957-.266 1.983-.399 3.003-.404 1.02.005 2.047.138 3.006.404 2.291-1.552 3.297-1.23 3.297-1.23.653 1.653.242 2.874.118 3.176.77.84 1.235 1.911 1.235 3.221 0 4.609-2.807 5.624-5.479 5.921.43.372.823 1.102.823 2.222v 3.293c0 .319.192.694.801.576 4.765-1.589 8.199-6.086 8.199-11.386 0-6.627-5.373-12-12-12z"/></svg></a></div></div><div id="pageContainer"><div id="outputPane"><div id="tabBar"><div class="tab active" data-tab="development">Development</div><div class="tab" data-tab="aichat">AI Chat</div></div><div id="developmentTab" class="tabContent active"><div id="chatContainer"><iframe src="/gradio-test" style="width: 100%; height: 100%; border: none;"></iframe></div><div class="verticalResizer"></div><pre id="generatedCode"><code></code></pre></div><div id="aichatTab" class="tabContent"><div id="gradioContainer"><iframe src="/gradio-chat" style="width: 100%; height: 100%; border: none;"></iframe></div><pre id="aichatCode" style="position: absolute; left: -9999px; width: 1px; height: 1px;"><code></code></pre></div></div><div class="resizer"></div><div id="blocklyDiv"></div></div><script>// Tab switching functionality
-const tabs = document.querySelectorAll('.tab');
-const tabContents = document.querySelectorAll('.tabContent');
-
-tabs.forEach(tab => {
-  tab.addEventListener('click', () => {
-    const tabName = tab.getAttribute('data-tab');
-
-    // Remove active class from all tabs and contents
-    tabs.forEach(t => t.classList.remove('active'));
-    tabContents.forEach(content => content.classList.remove('active'));
-
-    // Add active class to clicked tab and corresponding content
-    tab.classList.add('active');
-    document.getElementById(tabName + 'Tab').classList.add('active');
-  });
-});
-
-// Horizontal resizer (output pane vs blockly)
-const resizer = document.querySelector('.resizer');
-const outputPane = document.getElementById('outputPane');
-const pageContainer = document.getElementById('pageContainer');
-
-let startX = 0;
-let startWidth = 0;
-
-function onPointerMove(e) {
-  const containerRect = pageContainer.getBoundingClientRect();
-  const containerWidth = containerRect.width;
-
-  const dx = e.clientX - startX;
-  let newWidthPx = startWidth + dx;
-
-  const minWidth = containerWidth * 0.2;
-  const maxWidth = containerWidth * 0.59;
-  newWidthPx = Math.max(minWidth, Math.min(maxWidth, newWidthPx));
-
-  const newPercent = (newWidthPx / containerWidth) * 100;
-  outputPane.style.flex = `0 0 ${newPercent}%`;
-}
-
-function onPointerUp() {
-  resizer.releasePointerCapture(activePointerId);
-  resizer.removeEventListener('pointermove', onPointerMove);
-  resizer.removeEventListener('pointerup', onPointerUp);
-  resizer.classList.remove('active');
-  document.body.style.cursor = '';
-  document.body.style.userSelect = '';
-}
-
-let activePointerId = null;
-
-resizer.addEventListener('pointerdown', (e) => {
-  const rect = outputPane.getBoundingClientRect();
-  startX = e.clientX;
-  startWidth = rect.width;
-  activePointerId = e.pointerId;
-
-  resizer.classList.add('active');
-  document.body.style.cursor = 'col-resize';
-  document.body.style.userSelect = 'none';
-
-  resizer.setPointerCapture(activePointerId);
-  resizer.addEventListener('pointermove', onPointerMove);
-  resizer.addEventListener('pointerup', onPointerUp);
-});
-
-// Vertical resizer (gradio vs code)
-const verticalResizer = document.querySelector('.verticalResizer');
-const chatContainer = document.getElementById('chatContainer');
-const generatedCode = document.getElementById('generatedCode');
-
-let startY = 0;
-let startHeight = 0;
-let activePointerId2 = null;
-
-function onVerticalPointerMove(e) {
-  const outputPaneRect = outputPane.getBoundingClientRect();
-  const outputPaneHeight = outputPaneRect.height;
-
-  const dy = e.clientY - startY;
-  let newHeightPx = startHeight + dy;
-
-  const minHeight = outputPaneHeight * 0.4;
-  const maxHeight = outputPaneHeight * 0.78;
-  newHeightPx = Math.max(minHeight, Math.min(maxHeight, newHeightPx));
-
-  const newPercent = (newHeightPx / outputPaneHeight) * 100;
-  chatContainer.style.flex = `0 0 ${newPercent}%`;
-}
-
-function onVerticalPointerUp() {
-  verticalResizer.releasePointerCapture(activePointerId2);
-  verticalResizer.removeEventListener('pointermove', onVerticalPointerMove);
-  verticalResizer.removeEventListener('pointerup', onVerticalPointerUp);
-  verticalResizer.classList.remove('active');
-  document.body.style.cursor = '';
-  document.body.style.userSelect = '';
-}
-
-verticalResizer.addEventListener('pointerdown', (e) => {
-  const rect = chatContainer.getBoundingClientRect();
-  startY = e.clientY;
-  startHeight = rect.height;
-  activePointerId2 = e.pointerId;
-
-  verticalResizer.classList.add('active');
-  document.body.style.cursor = 'row-resize';
-  document.body.style.userSelect = 'none';
-
-  verticalResizer.setPointerCapture(activePointerId2);
-  verticalResizer.addEventListener('pointermove', onVerticalPointerMove);
-  verticalResizer.addEventListener('pointerup', onVerticalPointerUp);
-});</script><div id="apiKeyModal" style="display: none; position: fixed; top: 0; left: 0; width: 100%; height: 100%; background: rgba(0,0,0,0.5); z-index: 9999; align-items: center; justify-content: center;"><div style="background: white; padding: 30px; border-radius: 10px; width: 90%; max-width: 500px; box-shadow: 0 10px 30px rgba(0,0,0,0.2);"><h2 style="margin-top: 0; margin-bottom: 20px; color: #333;">Settings</h2><label for="apiKeyInput" style="display: block; margin-bottom: 10px; color: #666; font-size: 14px;">OpenAI API Key:</label> <input type="password" id="apiKeyInput" style="width: 100%; padding: 10px; border: 1px solid #ddd; border-radius: 5px; font-size: 14px; box-sizing: border-box;" placeholder="sk-..."><p style="margin-top: 10px; color: #999; font-size: 12px;">Your API key will be stored securely for this session.</p><div style="margin-top: 20px; display: flex; justify-content: flex-end; gap: 10px;"><button id="cancelApiKey" style="padding: 10px 20px; background: #e5e7eb; border: none; border-radius: 5px; cursor: pointer; font-size: 14px;">Cancel</button> <button id="saveApiKey" style="padding: 10px 20px; background: #6366f1; color: white; border: none; border-radius: 5px; cursor: pointer; font-size: 14px;">Save</button></div></div></div></body></html>
project/src/generators/chat.js CHANGED

@@ -28,7 +28,7 @@ export const forBlock = Object.create(null);
 /*
 
 This file is for the secondary code generator.
-It is not meant to be shown to the user, but rather to communicate the state of the workspace to an AI
+It is not meant to be shown to the user, but rather to communicate the state of the workspace to an AI Assistant assistant in a simplistic text format.
 
 */
 
project/src/index.html CHANGED

@@ -20,7 +20,7 @@
                 <a href="#" id="loadButton" class="dropdownItem" data-action="open">Open</a>
                 <a href="#" id="saveButton" class="dropdownItem" data-action="download">Download Project</a>
                 <a href="#" id="downloadCodeButton" class="dropdownItem" data-action="downloadCode">Download Code</a>
-                <a href="#" id="settingsButton" class="dropdownItem" data-action="downloadCode">API
+                <a href="#" id="settingsButton" class="dropdownItem" data-action="downloadCode">API Keys</a>
             </div>
         </div>
 
@@ -54,8 +54,8 @@
     <div id="pageContainer">
         <div id="outputPane">
             <div id="tabBar">
-                <div class="tab active" data-tab="development">
-                <div class="tab" data-tab="aichat">AI
+                <div class="tab active" data-tab="development">Testing</div>
+                <div class="tab" data-tab="aichat">AI Assistant</div>
             </div>
             <div id="developmentTab" class="tabContent active">
                 <div id="chatContainer">
@@ -192,13 +192,20 @@
     });
     </script>
 
-    <!--
+    <!-- Keys Modal -->
     <div id="apiKeyModal" style="display: none; position: fixed; top: 0; left: 0; width: 100%; height: 100%; background: rgba(0,0,0,0.5); z-index: 9999; align-items: center; justify-content: center;">
         <div style="background: white; padding: 30px; border-radius: 10px; width: 90%; max-width: 500px; box-shadow: 0 10px 30px rgba(0,0,0,0.2);">
-            <h2 style="margin-top: 0; margin-bottom: 20px; color: #333;">
-
-            <
-            <
+            <h2 style="margin-top: 0; margin-bottom: 20px; color: #333;">API Keys</h2>
+
+            <label for="apiKeyInput" style="display: block; margin-bottom: 10px; color: #666; font-size: 14px; font-weight: 500;">OpenAI API Key:</label>
+            <input type="password" id="apiKeyInput" style="width: 100%; padding: 10px; border: 1px solid #ddd; border-radius: 5px; font-size: 14px; box-sizing: border-box; margin-bottom: 5px;" placeholder="sk-...">
+            <p style="margin: 5px 0 15px 0; color: #999; font-size: 12px;">For the AI assistant and blocks' model calls.</p>
+
+            <label for="hfKeyInput" style="display: block; margin-bottom: 10px; color: #666; font-size: 14px; font-weight: 500;">Hugging Face API Key:</label>
+            <input type="password" id="hfKeyInput" style="width: 100%; padding: 10px; border: 1px solid #ddd; border-radius: 5px; font-size: 14px; box-sizing: border-box; margin-bottom: 5px;" placeholder="hf_...">
+            <p style="margin: 5px 0 20px 0; color: #999; font-size: 12px;">For deploying your MCP server.</p>
+
+            <p style="color: #999; font-size: 12px;">Your API keys will be stored securely for this session.</p>
             <div style="margin-top: 20px; display: flex; justify-content: flex-end; gap: 10px;">
                 <button id="cancelApiKey" style="padding: 10px 20px; background: #e5e7eb; border: none; border-radius: 5px; cursor: pointer; font-size: 14px;">Cancel</button>
                 <button id="saveApiKey" style="padding: 10px 20px; background: #6366f1; color: white; border: none; border-radius: 5px; cursor: pointer; font-size: 14px;">Save</button>
project/src/index.js CHANGED

@@ -149,63 +149,72 @@ downloadCodeButton.addEventListener("click", () => {
   document.body.removeChild(element);
 });
 
-// Settings button and
+// Settings button and Keys Modal
 const settingsButton = document.querySelector('#settingsButton');
 const apiKeyModal = document.querySelector('#apiKeyModal');
 const apiKeyInput = document.querySelector('#apiKeyInput');
+const hfKeyInput = document.querySelector('#hfKeyInput');
 const saveApiKeyButton = document.querySelector('#saveApiKey');
 const cancelApiKeyButton = document.querySelector('#cancelApiKey');
 
 settingsButton.addEventListener("click", () => {
   apiKeyModal.style.display = 'flex';
 
-  // Load current API
+  // Load current API keys from backend
   fetch("/get_api_key", {
     method: "GET",
   })
     .then(response => response.json())
     .then(data => {
       apiKeyInput.value = data.api_key || '';
+      hfKeyInput.value = data.hf_key || '';
     })
     .catch(err => {
-      console.error("Error loading API
+      console.error("Error loading API keys:", err);
     });
 });
 
 saveApiKeyButton.addEventListener("click", () => {
   const apiKey = apiKeyInput.value.trim();
+  const hfKey = hfKeyInput.value.trim();
 
-  // Validate OpenAI key format
-  if (!apiKey.startsWith("sk-") || apiKey.length < 40) {
-    alert("Invalid API key format. Please enter a valid OpenAI API key.");
+  // Validate OpenAI key format if provided
+  if (apiKey && (!apiKey.startsWith("sk-") || apiKey.length < 40)) {
+    alert("Invalid OpenAI API key format. Please enter a valid OpenAI API key (starts with 'sk-').");
     return;
   }
 
-  //
+  // Validate Hugging Face key format if provided
+  if (hfKey && (!hfKey.startsWith("hf_") || hfKey.length < 20)) {
+    alert("Invalid Hugging Face API key format. Please enter a valid Hugging Face API key (starts with 'hf_').");
+    return;
+  }
+
+  // Save API keys to both backend servers (test.py and chat.py)
   Promise.all([
     fetch("/set_api_key", {
       method: "POST",
      headers: { "Content-Type": "application/json" },
-      body: JSON.stringify({ api_key: apiKey }),
+      body: JSON.stringify({ api_key: apiKey, hf_key: hfKey }),
     }),
     fetch("/set_api_key_chat", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
-      body: JSON.stringify({ api_key: apiKey }),
+      body: JSON.stringify({ api_key: apiKey, hf_key: hfKey }),
    })
  ])
    .then(async (responses) => {
      const results = await Promise.all(responses.map(r => r.json()));
      if (results.every(r => r.success)) {
-        alert('API
+        alert('API keys saved successfully');
        apiKeyModal.style.display = 'none';
      } else {
-        alert('Failed to save API
+        alert('Failed to save API keys to all services');
      }
    })
    .catch(err => {
-      console.error("Error saving API
-      alert('Failed to save API
+      console.error("Error saving API keys:", err);
+      alert('Failed to save API keys');
    });
 });
 

@@ -1023,7 +1032,7 @@ const sendChatUpdate = async (chatCode, retryCount = 0) => {
   }
 };
 
-// Update function for the Chat generator (AI
+// Update function for the Chat generator (AI Assistant tab)
 const updateChatCode = () => {
   globalChatCode = chatGenerator.workspaceToCode(ws);
   const codeEl = document.querySelector('#aichatCode code');
project/test.py CHANGED

@@ -16,7 +16,8 @@ app.add_middleware(
 )
 
 latest_blockly_code = ""
-stored_api_key = ""  # Store the API key in memory
+stored_api_key = ""  # Store the OpenAI API key in memory
+stored_hf_key = ""  # Store the Hugging Face API key in memory
 
 
 # Gets REAL Python code, not the LLM DSL

@@ -37,34 +38,47 @@ async def get_latest_code():
 
 @app.get("/get_api_key")
 async def get_api_key_endpoint():
-    """Get the current API
-    global stored_api_key
+    """Get the current API keys from memory"""
+    global stored_api_key, stored_hf_key
     api_key = stored_api_key or os.environ.get("OPENAI_API_KEY", "")
+    hf_key = stored_hf_key or os.environ.get("HUGGINGFACE_API_KEY", "")
 
-    # Mask the API
+    # Mask the API keys for security (show only first 7 and last 4 characters)
     if api_key and len(api_key) > 15:
-
+        masked_api_key = api_key[:7] + '...' + api_key[-4:]
     else:
-
+        masked_api_key = api_key if api_key else ""
 
-
+    if hf_key and len(hf_key) > 15:
+        masked_hf_key = hf_key[:7] + '...' + hf_key[-4:]
+    else:
+        masked_hf_key = hf_key if hf_key else ""
+
+    return {"api_key": masked_api_key, "hf_key": masked_hf_key}
 
 @app.post("/set_api_key")
 async def set_api_key_endpoint(request: Request):
-    """Save API
-    global stored_api_key
+    """Save API keys to environment variables"""
+    global stored_api_key, stored_hf_key
     data = await request.json()
     api_key = data.get("api_key", "").strip()
+    hf_key = data.get("hf_key", "").strip()
 
     try:
-        # Store in memory and set environment
-
-
+        # Store in memory and set environment variables
+        if api_key:
+            stored_api_key = api_key
+            os.environ["OPENAI_API_KEY"] = api_key
+            print(f"[API KEY] Set OPENAI_API_KEY in environment")
+
+        if hf_key:
+            stored_hf_key = hf_key
+            os.environ["HUGGINGFACE_API_KEY"] = hf_key
+            print(f"[HF KEY] Set HUGGINGFACE_API_KEY in environment")
 
-        print(f"[API KEY] Set OPENAI_API_KEY in environment")
         return {"success": True}
     except Exception as e:
-        print(f"Error setting API
+        print(f"Error setting API keys: {e}")
         return {"success": False, "error": str(e)}
 
 

@@ -173,7 +187,7 @@ def execute_blockly_logic(user_inputs):
 
 
 def build_interface():
-    with gr.Blocks() as demo:
+    with gr.Blocks(title="Test MCP Server") as demo:
         # Create a fixed number of potential input fields (max 10)
         input_fields = []
         input_labels = []

@@ -194,7 +208,7 @@ def build_interface():
         output_fields.append(out)
 
         with gr.Row():
-            submit_btn = gr.Button("
+            submit_btn = gr.Button("Test")
             refresh_btn = gr.Button("Refresh")
 
         def refresh_inputs():
requirements.txt CHANGED

@@ -3,3 +3,6 @@ dotenv
 openai
 requests
 colorama
+huggingface_hub
+gradio_client
+mcp