Some problems found during initial testing -- are these hallucinations, or issues caused by MoE?
I'm truly delighted to come across this model; it feels like a wonderful gift for International Workers' Day. I tested it through the OpenRouter API, starting with simple prompts such as "Introduce yourself" or "Who are you?" However, I noticed something unusual: the model responded as if it were a human, for example giving itself a girl's name and talking about her favorite things. When I retried several times (each time in a new chat, so no memory could influence the responses), it sometimes generated code instead: once it wrote a Python program that returned `f"Hello, {name}!"`, and another time it generated a neural network. It also frequently mentioned the names of other models, such as Carla or ones created by Moonshot.

This led me to wonder: is its ability to solve math problems (or its MoE structure) causing difficulties in ordinary text-based interaction, or are these simply hallucinations? I would greatly appreciate any insights or discussion on this topic.
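For reference, below is a minimal sketch of the kind of single-turn request I was sending, using OpenRouter's OpenAI-compatible endpoint. The model slug is a placeholder (substitute the actual slug for this model on OpenRouter), so the behavior should be easy to reproduce:

```python
# Minimal sketch: single-turn queries via OpenRouter's OpenAI-compatible API.
# The model slug below is a placeholder, not the real one.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key=os.environ["OPENROUTER_API_KEY"],
)

for prompt in ["Introduce yourself.", "Who are you?"]:
    # Each call is an independent, single-message conversation,
    # so no earlier context can influence the reply.
    response = client.chat.completions.create(
        model="vendor/model-slug-here",  # placeholder
        messages=[{"role": "user", "content": prompt}],
    )
    print(response.choices[0].message.content)
```

Since every request carries only one user message, the roleplaying and code-generating replies cannot be explained by leftover conversation history.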