---
base_model: mistralai/Mistral-Small-3.1-24B-Instruct-2503
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
license: apache-2.0
language:
- en
---

![Eurydice 24b Banner](https://cdn-uploads.huggingface.co/production/uploads/652c2a63d78452c4742cd3d3/Hm_tg4s0D6yWmtrTHII32.png)

# Eurydice 24b v1c πŸ§™β€β™‚οΈ


Eurydice 24b v1c is built on an enhanced dataset compared to the previous version and is designed to be the perfect companion for multi-role conversations. It demonstrates exceptional contextual understanding and excels in creativity, natural conversation, and storytelling. Built on Mistral 3.1, this model has been trained on a custom dataset specifically crafted to enhance its capabilities.

## Model Details πŸ“Š

- **Developed by:** Aixon Lab
- **Model type:** Causal Language Model
- **Language(s):** English (primarily), may support other languages
- **License:** Apache 2.0
- **Repository:** https://huggingface.co/aixonlab/Eurydice-24b-v1c

## Quantization
- **GGUF:** https://huggingface.co/mradermacher/Eurydice-24b-v1c-GGUF

## Model Architecture πŸ—οΈ

- **Base model:** mistralai/Mistral-Small-3.1-24B-Instruct-2503
- **Parameter count:** ~24 billion
- **Architecture specifics:** Transformer-based language model

## Intended Use 🎯
Eurydice 24b v1c is intended for use as an advanced language model for various natural language processing tasks, including but not limited to text generation (it excels in chat), question answering, and analysis.
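For chat-style text generation, the model can be loaded with the Hugging Face `transformers` library. This is a minimal sketch, assuming a standard chat-format checkpoint at the repository id above; the generation settings shown are illustrative defaults, not tuned recommendations.

```python
# Hypothetical usage sketch for Eurydice 24b v1c with transformers.
# The model id comes from this card; sampling parameters are assumptions.

def build_chat(user_message: str) -> list[dict]:
    """Wrap a single user turn in the chat-message format the pipeline expects."""
    return [{"role": "user", "content": user_message}]

if __name__ == "__main__":
    # Imported here because loading a ~24B model requires substantial
    # GPU memory (roughly 48 GB in bf16) and the `accelerate` package.
    from transformers import pipeline

    generator = pipeline(
        "text-generation",
        model="aixonlab/Eurydice-24b-v1c",
        torch_dtype="auto",
        device_map="auto",
    )
    chat = build_chat("Tell me a short story about a lighthouse keeper.")
    out = generator(chat, max_new_tokens=256)
    print(out[0]["generated_text"][-1]["content"])
```

For lower-memory setups, the GGUF quantizations linked in the Quantization section can be used with llama.cpp-compatible runtimes instead.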

## Ethical Considerations πŸ€”
As a model derived from a base model and additional training data, Eurydice 24b v1c may inherit biases and limitations from those sources. Users should be aware of potential biases in generated content and use the model responsibly.

## Performance and Evaluation
Performance metrics and evaluation results for Eurydice 24b v1c are yet to be determined. Users are encouraged to contribute their findings and benchmarks.

## Limitations and Biases
The model may exhibit biases present in its training data and constituent models. It's crucial to critically evaluate the model's outputs and use them in conjunction with human judgment.

## Additional Information
For more details on the base model and constituent models, please refer to their respective model cards and documentation.