Valkyyrie-14b-v1

Valkyyrie-14b-v1 is a large language model fine-tuned from Microsoft's Phi-4 to improve its conversational capabilities.

Model Details 📊

Model Architecture 🏗️

  • Base model: phi-4
  • Parameter count: ~14 billion
  • Architecture specifics: Transformer-based language model
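A quick back-of-the-envelope check on the memory footprint implied by the details above, assuming the reported 14.7B parameters stored in BF16 (2 bytes per parameter); this covers the weights alone, not activations or the KV cache:

```python
# Rough memory-footprint estimate for the model weights alone.
params = 14.7e9          # reported parameter count
bytes_per_param = 2      # BF16 stores each parameter in 2 bytes

total_bytes = params * bytes_per_param
print(f"{total_bytes / 1e9:.1f} GB")    # decimal gigabytes -> 29.4 GB
print(f"{total_bytes / 2**30:.1f} GiB") # binary gibibytes  -> 27.4 GiB
```

In practice, inference needs additional headroom beyond this figure for activations and the KV cache.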

Open LLM Leaderboard Evaluation Results

Coming soon!

Training & Fine-tuning 🔄

Valkyyrie-14b-v1 was fine-tuned to achieve:

  1. Better conversational skills
  2. Greater creativity in writing and conversation
  3. Broader knowledge across various topics
  4. Improved performance on tasks such as writing, analysis, and problem-solving
  5. Better contextual understanding and response generation

Intended Use 🎯

Valkyyrie-14b-v1 is intended for use as a general-purpose assistant or as a role-specific chatbot.
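A minimal usage sketch with Hugging Face `transformers`, assuming the model ships a chat template inherited from its Phi-4 base; the system prompt and generation settings here are illustrative, not prescribed by this model card:

```python
MODEL_ID = "aixonlab/Valkyyrie-14b-v1"

def build_messages(system_prompt: str, user_prompt: str) -> list[dict]:
    """Assemble a conversation in the standard chat-messages format."""
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_prompt},
    ]

def chat(user_prompt: str,
         system_prompt: str = "You are a helpful assistant.") -> str:
    # Heavy imports kept local so build_messages() works without torch installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, torch_dtype="bfloat16", device_map="auto"
    )
    inputs = tokenizer.apply_chat_template(
        build_messages(system_prompt, user_prompt),
        add_generation_prompt=True,
        return_tensors="pt",
    ).to(model.device)
    outputs = model.generate(inputs, max_new_tokens=256)
    # Decode only the newly generated tokens, skipping the prompt.
    return tokenizer.decode(outputs[0][inputs.shape[-1]:],
                            skip_special_tokens=True)

# Example (downloads the full weights on first run):
# print(chat("Write a two-line poem about the northern lights."))
```

For role-specific use, swap in a persona via `system_prompt` rather than baking it into each user message.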

Ethical Considerations 🤔

As a fine-tuned model based on phi-4, this model may inherit biases and limitations from its parent model and the fine-tuning dataset. Users should be aware of potential biases in generated content and use the model responsibly.

Model size: 14.7B params (Safetensors, BF16)

Model tree for aixonlab/Valkyyrie-14b-v1

  • Base model: microsoft/phi-4
  • Quantizations: 2 models