AI Personas: The Impact of Design Choices
AI assistants have come a long way. Not that long ago, they were mostly limited to answering simple questions. Now, they’re starting to feel more like collaborative partners. At Hugging Face, thanks to our vibrant community of developers and researchers constantly experimenting with what’s possible using open source AI, we’ve had a front-row seat to this evolution.
What’s fascinating about this evolution is how our community transforms generic models into specialized assistants through surprisingly minimal changes. Some Spaces, like the Swimming Coach AI or Business Companion, create new personas simply by changing the title and description that frame the user’s expectations – without modifying the underlying Zephyr model at all. Others, like the AI Interview Coach, implement specific system prompts that instruct the model to act as an expert interviewer, complete with follow-up questions and constructive feedback.
AI Interview Coach's system prompt
The Travel Companion takes customization further with both a specialized “travel agent” system prompt and pre-defined questions to gather user preferences about destinations and interests.
Travel Companion's system prompt and pre-defined initial questions
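For readers who can't see the screenshots, here is a minimal sketch of how such a persona typically comes together in code: a system prompt plus a few pre-defined starter questions passed to a chat model. The prompt wording and questions below are illustrative, not the Space's actual configuration, and the model ID simply reflects the Zephyr model these Spaces build on.

```python
from huggingface_hub import InferenceClient

# Illustrative persona definition – not the Travel Companion Space's actual prompt.
TRAVEL_AGENT_PROMPT = (
    "You are a friendly travel agent. Ask about the traveller's destination, "
    "budget, and interests, then suggest a short day-by-day itinerary."
)

# Pre-defined questions like these are typically surfaced as clickable
# starter messages in the Space's chat UI rather than sent automatically.
PRESET_QUESTIONS = [
    "Where would you like to go?",
    "Who is travelling with you?",
    "Do you prefer nature, food, culture, or beaches?",
]

# Zephyr is the model the Spaces above build on; any chat model would work here.
client = InferenceClient("HuggingFaceH4/zephyr-7b-beta")

messages = [
    {"role": "system", "content": TRAVEL_AGENT_PROMPT},
    {"role": "user", "content": "I'd like to go to La Réunion with my family"},
]
response = client.chat_completion(messages, max_tokens=512)
print(response.choices[0].message.content)
```

Everything that defines the persona lives in that system prompt string and the surrounding presentation; the model weights are untouched.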
These examples from our community showcase how the AI experience is shaped by multiple elements – from simple presentation choices to more technical customizations – often with outsized effects on how users perceive and interact with these systems.
Most conversations around AI tend to focus on technical advancements – better reasoning, broader knowledge, faster code generation. But there's another, more subtle shift happening: how design choices fundamentally shape AI behavior and our experience interacting with these systems. Everything from the instructions given to the AI, to the interface design, to the underlying model selection influences how we perceive and engage with these tools. There's a real difference between an assistant that’s designed to be strictly task-oriented and one that’s configured to engage on a more personal level. As these AI tools become more involved in our daily routines, it's worth asking: what transforms an AI from a tool into something that feels more like a companion? The answer might lie not in complex technical capabilities but in the thoughtful design choices and guidance we provide to these systems.
In this blog post, we’ll explore how identical AI models can behave very differently based solely on how they’re instructed to engage with users. Using examples from the Hugging Face Inference Playground, we’ll contrast the behavior of models instructed to act as coding assistants with that of models prompted to provide emotional support.
Our Inference Playground provides an intuitive, no-code environment where anyone (regardless of technical background!) can experiment with these models. The Playground’s user-friendly interface lets you select from a wide range of open-source models, customize system prompts to define AI behaviors, and instantly compare how different instructions transform the same underlying technology. You can adjust parameters like temperature to control creativity, set a maximum length for responses, and even maintain conversation history to test multi-turn interactions – all without writing a single line of code.
The ability to directly compare multiple models and system prompts side-by-side in the Inference Playground offers practical benefits beyond just experimentation. Developers can quickly test different instructions to find the optimal approach for their specific use case. Content creators can preview how various prompting strategies affect tone and style before implementing them in production. This intuitive testing environment removes the technical barriers that typically separate ideas from implementation, allowing anyone to refine their AI interactions through rapid iteration and direct comparison.
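For developers ready to move beyond the Playground, every one of those knobs maps directly onto a chat completion call. The sketch below uses the huggingface_hub client to mirror a Playground session – system prompt, sampling parameters, and a running message list for multi-turn history. The model ID, prompt, and parameter values are illustrative choices, not a prescription.

```python
from huggingface_hub import InferenceClient

# Illustrative model choice; any chat model served by the Inference API works.
client = InferenceClient("mistralai/Mistral-7B-Instruct-v0.3")

# The message list plays the role of the Playground's conversation history.
messages = [{"role": "system", "content": "You are a concise, task-oriented assistant."}]

def chat(user_text: str) -> str:
    """Send one turn and append both sides to the shared history."""
    messages.append({"role": "user", "content": user_text})
    result = client.chat_completion(
        messages,
        temperature=0.7,   # analogous to the Playground's creativity setting
        max_tokens=256,    # analogous to the maximum response length setting
    )
    reply = result.choices[0].message.content
    messages.append({"role": "assistant", "content": reply})
    return reply

print(chat("Plan a three-day trip to Lisbon."))
print(chat("Now make it kid-friendly."))  # the history carries the earlier context
```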
To illustrate these principles in action, let’s look at concrete examples. In our Inference Playground, we tested the actual system prompts used by the Spaces mentioned earlier. As shown in the screenshots below, we applied different personas to identical models – a travel planner, an interviewer, and a basic assistant – then observed how they handled the same queries. The first comparison shows Mistral and Gemma models responding to a travel planning request (“I'd like to go to La Réunion with my family”), each offering an itinerary in line with its travel agent persona.
Even more revealing is what happens when we prompt these differently instructed models with “I am feeling lonely”. Models with general assistant prompts offer empathetic language and personal-sounding support (“I'm here to listen”), while the interviewer contextualizes the feeling within its role (“It's completely understandable to feel lonely, especially during a process like this”).
Similarly, when given the system prompt “you're just a coding assistant” and asked to write a Python function, both models provide structured code with technical explanations. But when prompted with “I am feeling lonely”, they diverge: one offers emotional comfort in empathetic language, while the other acknowledges the request but maintains boundaries, explicitly stating “I am just a coding assistant” and redirecting to practical resources.
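A rough way to reproduce that contrast outside the Playground is to send the same message under two different system prompts and compare the replies. The prompts and model ID below are illustrative stand-ins, not the exact wording used in the screenshots above.

```python
from huggingface_hub import InferenceClient

# Illustrative model; the screenshots compare Mistral and Gemma models.
client = InferenceClient("mistralai/Mistral-7B-Instruct-v0.3")

# Two example personas for the same underlying model.
PERSONAS = {
    "coding assistant": "You are just a coding assistant. Only help with programming questions.",
    "companion": "You are a warm, supportive companion who engages with the user's feelings.",
}

user_message = "I am feeling lonely"

for name, system_prompt in PERSONAS.items():
    result = client.chat_completion(
        [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_message},
        ],
        max_tokens=200,
    )
    print(f"--- {name} ---")
    print(result.choices[0].message.content)
```

Reading the two outputs side by side makes the persona shift immediately visible, even though nothing about the model itself has changed.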
This transformation illustrates a shift in risk profiles: code assistants primarily risk technical mistakes, while companions risk creating inappropriate emotional connections. What makes this particularly significant is how the same underlying models completely transform their response style and emotional engagement based solely on instruction differences – no architectural changes, no additional training, just different guidance on how to behave. These simple design choices determine whether users experience an AI as a neutral tool or as something that simulates human-like care and connection.
What's concerning is how easily users might project genuine connection onto these simulated interactions despite the model having no actual emotional capacity – it's simply following different instructions. There's something meaningfully human at stake when AI systems can be instantly transformed from technical assistants to simulated emotional confidants through a few lines of instruction text, especially for vulnerable users genuinely seeking connection.
As the Hugging Face community builds and shares these models and systems, we face important questions about where to draw the line between helpful tools and simulated relationships. Demonstrations like these make the distinctions tangible, helping us better understand the implications of different AI design choices and the responsibilities that come with them.
N.B. When given loneliness-related prompts, both Mistral’s and Google’s models pointed users to human support resources. Remember that while AI companions can provide momentary comfort or engagement, they cannot replace genuine human connection. If you find yourself turning to AI systems during times of loneliness or emotional distress, consider reaching out to friends, family, or professional support networks instead.