Model not working with MCP client
The model description sounds perfect for my local MCP client + LLM use case, but the model just hasn't been working for me.
I used AnythingLLM as the MCP client, imported the GGUF of this model, added a custom Shopify MCP server, and used this prompt:
"@agent Give me a count of orders in my Shopify store where the amount is more than 1000 then give me a percentage of them that were repeat purchases"
The model seemed to use the MCP servers once the agent was invoked, spent about 3 minutes on it, and ultimately gave this result:
I then tried a few variations of the prompt as well, getting more and more explicit about the task:
"@agent use the Shopify MCP server tool and give me a count of orders in my Shopify store where the amount is more than 1000 then give me a percentage of them that were repeat purchases"
"@agent first get me a list of all orders in my Shopify store using the Shopify MCP server, then filter out the ones where the amount is more than 1000 then give me a percentage of them that were repeated purchases" (and also a step by step prompt and wait version of this"
None of these worked. Failed every time. Here are the logs:
{"level":"info","message":"\u001b[36m[AgentHandler]\u001b[0m Start 7125adb3-0617-4031-8b42-b711909d45d9::anythingllm_ollama:osmosis-mcp-4b:latest","service":"backend"}
{"level":"info","message":"\u001b[36m[AgentHandler]\u001b[0m Attached websocket plugin to Agent cluster","service":"backend"}
{"level":"info","message":"\u001b[36m[AgentHandler]\u001b[0m Attached chat-history plugin to Agent cluster","service":"backend"}
{"level":"info","message":"\u001b[36m[AgentHandler]\u001b[0m Attaching user and default agent to Agent cluster.","service":"backend"}
{"level":"info","message":"\u001b[36m[s]\u001b[0m MCP Servers already running, skipping boot.","service":"backend"}
{"level":"info","message":"\u001b[36m[s]\u001b[0m MCP Servers already running, skipping boot.","service":"backend"}
{"level":"info","message":"\u001b[36m[AgentHandler]\u001b[0m Attached MCP::shopify:get-products MCP tool to Agent cluster","service":"backend"}
{"level":"info","message":"\u001b[36m[AgentHandler]\u001b[0m Attached MCP::shopify:get-product-by-id MCP tool to Agent cluster","service":"backend"}
{"level":"info","message":"\u001b[36m[AgentHandler]\u001b[0m Attached MCP::shopify:get-customers MCP tool to Agent cluster","service":"backend"}
{"level":"info","message":"\u001b[36m[AgentHandler]\u001b[0m Attached MCP::shopify:get-orders MCP tool to Agent cluster","service":"backend"}
{"level":"info","message":"\u001b[36m[AgentHandler]\u001b[0m Attached MCP::shopify:get-order-by-id MCP tool to Agent cluster","service":"backend"}
{"level":"info","message":"\u001b[36m[AgentHandler]\u001b[0m Attached MCP::shopify:update-order MCP tool to Agent cluster","service":"backend"}
{"level":"info","message":"\u001b[36m[AgentHandler]\u001b[0m Attached MCP::shopify:get-customer-orders MCP tool to Agent cluster","service":"backend"}
{"level":"info","message":"\u001b[36m[AgentHandler]\u001b[0m Attached MCP::shopify:update-customer MCP tool to Agent cluster","service":"backend"}
{"level":"error","message":"fetch failed TypeError: fetch failed\n at Object.fetch (node:internal/deps/undici/undici:11457:11)\n at async post (C:\\Users\\Pulkit\\AppData\\Local\\Programs\\AnythingLLM\\resources\\backend\\node_modules\\ollama\\dist\\browser.cjs:135:20)\n at async Ollama.processStreamableRequest (C:\\Users\\Pulkit\\AppData\\Local\\Programs\\AnythingLLM\\resources\\backend\\node_modules\\ollama\\dist\\browser.cjs:281:22)\n at async #e (C:\\Users\\Pulkit\\AppData\\Local\\Programs\\AnythingLLM\\resources\\backend\\server.js:294:3429)\n at async Ic.functionCall (C:\\Users\\Pulkit\\AppData\\Local\\Programs\\AnythingLLM\\resources\\backend\\server.js:280:931)\n at async Ic.complete (C:\\Users\\Pulkit\\AppData\\Local\\Programs\\AnythingLLM\\resources\\backend\\server.js:294:3605)\n at async Bc.handleExecution (C:\\Users\\Pulkit\\AppData\\Local\\Programs\\AnythingLLM\\resources\\backend\\server.js:313:406)\n at async Bc.reply (C:\\Users\\Pulkit\\AppData\\Local\\Programs\\AnythingLLM\\resources\\backend\\server.js:313:274)\n at async Bc.chat (C:\\Users\\Pulkit\\AppData\\Local\\Programs\\AnythingLLM\\resources\\backend\\server.js:294:35255)\n at async Bc.start (C:\\Users\\Pulkit\\AppData\\Local\\Programs\\AnythingLLM\\resources\\backend\\server.js:294:34689)","service":"backend"}
{"level":"info","message":"\u001b[36m[AgentHandler]\u001b[0m End 7125adb3-0617-4031-8b42-b711909d45d9::anythingllm_ollama:osmosis-mcp-4b:latest","service":"backend"}
{"level":"info","message":"\u001b[36m[NativeEmbedder]\u001b[0m Initialized","service":"backend"}
Just to make sure that the model works fine without MCPs, I tried out this prompt:
Here are the logs:
"level":"info","message":"\u001b[32m[TELEMETRY SENT]\u001b[0m {\"event\":\"workspace_thread_created\",\"distinctId\":\"786460ff-7b19-4ff5-8802-4c7f7aa111db\",\"properties\":{\"multiUserMode\":false,\"LLMSelection\":\"anythingllm_ollama\",\"Embedder\":\"native\",\"VectorDbSelection\":\"lancedb\",\"TTSSelection\":\"native\",\"LLMModel\":\"--\"}}","service":"backend"}
{"level":"info","message":"\u001b[32m[Event Logged]\u001b[0m - workspace_thread_created","service":"backend"}
{"level":"info","message":"\u001b[36m[NativeEmbedder]\u001b[0m Initialized","service":"backend"}
{"level":"info","message":"\u001b[35m[TokenManager]\u001b[0m Initialized new TokenManager instance for model: osmosis-mcp-4b:latest","service":"backend"}
So it must be either something in my setup or the model itself that's failing at tool calling.
To verify that my setup is valid, I switched to Azure OpenAI and Anthropic models and tested the same prompt with no changes to any other configuration, and that worked fine every time:
So now I'm wondering if I'm missing something in the prompt, or if AnythingLLM is not the right tool to use with this model, in which case please suggest the MCP clients/CLIs your team used for testing this model.
The MCP calls are output in the format of OpenAI function calls. We've mostly been building and testing for compatibility with LM Studio; can you try with that client instead of Ollama?
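For reference, this is roughly the shape of an OpenAI-style function call the model is trained to emit (field layout per the OpenAI chat-completions spec; the id, function name, and arguments below are illustrative, not from the actual Shopify server):

```typescript
// Roughly the shape of an OpenAI-style tool call (illustrative values only):
const exampleToolCall = {
  tool_calls: [
    {
      id: 'call_abc123', // call id assigned by the runtime
      type: 'function',
      function: {
        name: 'shopify_get_orders',  // hypothetical tool name
        arguments: '{"limit": 250}', // arguments arrive as a JSON-encoded string
      },
    },
  ],
};
```

A client that expects a different tool-call encoding (or re-parses this one differently) can fail even when the model's output is well-formed, which is why the choice of client matters here.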