Function Call Format Error in Multi-turn Dialogues
**Describe the bug**
When using Kimi-K2 for multi-turn conversations involving function calls (tool calls), the model sometimes returns incorrect or malformed JSON, or the output does not conform to the expected MCP (Model Context Protocol) format. This issue typically occurs when making multiple consecutive function calls within a single conversation context.
In most cases, the model API returns the `message["content"]` field as a plain string, such as:

```json
"content": "This is a normal response."
```
However, in some situations, especially when using function call/tool call, the API returns a serialized list string instead:

```json
"content": "[{'type': 'text', 'text': 'Actual response content'}]"
```
This looks like a Python-style list of message objects, rather than a standard string.
Is this because, in certain cases, the model wraps the response in a format similar to OpenAI's multi-modal message format (where each message part has a `type`, e.g., 'text', 'image_url', etc.)? If so, is this intended behavior, or is it a bug in the function call output handling?
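For comparison, here is a sketch of what the OpenAI-style multi-modal message format referenced above looks like; in that convention, `content` is a genuine list of typed parts, not a stringified list (the field values here are illustrative, not taken from an actual Kimi-K2 response):

```python
# Illustrative OpenAI-style multi-modal message: `content` is a list of
# typed parts rather than a plain string.
message = {
    "role": "assistant",
    "content": [
        {"type": "text", "text": "Actual response content"},
        # Other part types exist in that convention, e.g.
        # {"type": "image_url", "image_url": {"url": "..."}}
    ],
}

# Extracting the text requires iterating over the parts:
text = "".join(p["text"] for p in message["content"] if p["type"] == "text")
```

The bug described above differs from this format in that the list arrives as a single quoted string, which no standard JSON client will parse into parts automatically.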
I have already used the latest `tokenizer_config.json` and `tokenization_kimi.py` from the official repository (as of 7.18), but the issue still persists.
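Until the root cause is fixed, a possible client-side workaround is to normalize the `content` field before use. The sketch below (a hypothetical helper, not part of any official SDK) handles both shapes: a plain string and a stringified Python-style list of `{'type': 'text', ...}` parts. Note the payload uses single quotes, so it is not valid JSON; `ast.literal_eval` parses the Python-literal syntax safely.

```python
import ast

def normalize_content(content):
    """Return plain text whether `content` is a string, a list of
    typed parts, or a stringified Python list of typed parts."""
    if isinstance(content, list):
        parts = content
    elif isinstance(content, str) and content.lstrip().startswith("["):
        try:
            # Single-quoted payload is not valid JSON; literal_eval
            # handles Python-literal syntax without executing code.
            parts = ast.literal_eval(content)
        except (ValueError, SyntaxError):
            return content  # not actually a list literal; keep as-is
    else:
        return content
    return "".join(
        p.get("text", "")
        for p in parts
        if isinstance(p, dict) and p.get("type") == "text"
    )
```

This is only a mitigation for downstream consumers; the underlying serialization in the function call path still needs to be fixed server-side.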
@bigmoyan I am also seeing this problem after hosting the model locally. Running it on SWE-bench results in poor performance. This is on the newest version of vLLM (nightly build).