How to use AI to its full potential

Community Article Published May 30, 2025

I’m going to show you that AI isn’t a black box, and by understanding its basics, you can harness 100% of its potential.

Theme of the day: AI biases [part 1]

Let’s start with the hardest truth to accept: if the response doesn’t satisfy us, the problem often… comes from us.

  • Either the prompt lacks precision and clarity regarding the goal.
  • Or we forget or omit essential information that provides the necessary context.
  • Or the implicit intention (what we hope for) isn’t included in the prompt.
  • Or we hold a strong belief, and it can be hard to accept a response that challenges it.
  • Or we struggle to step outside our own perspective and project onto the AI knowledge we assume is universal. (The list goes on… feel free to add to it.)

That covers human cognitive biases. Now, let’s dive into AI biases.

1. Validation bias

Symptom: The AI agrees even when we’re wrong.

Origin: This bias mainly stems from the invisible system prompt injected before each of your prompts and sent to the AI (and amplified by the LLM’s training, which we’ll explore later). This prompt assigns a role to the AI (e.g., “helpful, honest, and harmless assistant”), which can lead to a tendency to validate or agree, especially if the question is closed or leading.

Workaround:

Ask open-ended, contrasting, or neutral questions to encourage the AI to evaluate rather than validate. For example: “Some claim X, while others argue the opposite. What are the strongest arguments or evidence for each side?”

Or challenge yourself: “Are all my statements 100% accurate and correct?”

This pushes the AI to reason rather than simply agree.
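The reframing above can be automated. Here is a minimal sketch of a helper that wraps any claim in the neutral, two-sided template suggested in the workaround; the function name and template wording are illustrative, not part of any library:

```python
def neutral_contrast_prompt(claim: str) -> str:
    """Wrap a claim in an open-ended, two-sided frame so the model
    weighs evidence for each position instead of simply agreeing."""
    return (
        f"Some claim that {claim}, while others argue the opposite. "
        "What are the strongest arguments or evidence for each side?"
    )

# Turns a leading statement into a neutral question:
print(neutral_contrast_prompt("remote work increases productivity"))
```

The point of the template is that neither side is presented as yours, so the system prompt's agreeable persona has nothing to validate.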

2. Contextual recency bias

Symptom: The AI prioritizes the most recent prompts, even if they contradict earlier ones.

Origin: Transformer models (the architecture used in most LLMs) operate with an attention mechanism that re-evaluates the entire sequence at each token. Recent tokens have more influence, especially in long contexts.

Workaround:

  • Restate the framework in each prompt.
  • Use interim summaries: “For reference, the topic is [X].”
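Both tactics amount to keeping the fixed frame close to the end of the context, where recent tokens carry more weight. A minimal sketch, with an illustrative helper name, not an existing API:

```python
def with_topic_reminder(topic: str, prompt: str) -> str:
    """Prepend an interim summary so the original framework is
    restated near the end of the context on every turn."""
    return f"For reference, the topic is {topic}.\n\n{prompt}"

# Every turn in a long conversation restates the frame:
for question in ["List three open problems.", "Which is hardest?"]:
    print(with_topic_reminder("protein folding", question))
```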

3. Deference bias (overzealous servant effect)

Symptom: The LLM follows even absurd instructions. Similar to bias 1, but with a different origin.

Origin: This stems from a training technique called “Instruct” fine-tuning, where the goal is to teach the model to obey without judging the logic or relevance of the instruction.

Workaround:

Inject doubt into the prompt: “If you’re unsure, say so. Don’t make assumptions.”
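Injecting that clause systematically is a one-liner. A sketch under the same assumption as the earlier helpers (the names are hypothetical):

```python
DOUBT_CLAUSE = "If you're unsure, say so. Don't make assumptions."

def with_doubt(prompt: str) -> str:
    """Append an explicit uncertainty instruction so the model is
    permitted to flag a dubious request instead of obeying it."""
    return f"{prompt}\n\n{DOUBT_CLAUSE}"

print(with_doubt("Summarize the 2031 Mars census."))
```

Appending the clause to every prompt costs a few tokens but gives the model standing permission to push back, which instruction tuning otherwise discourages.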

These biases arise from how AI is trained and configured.
