GRAG-NEMO-12B (German Retrieval Augmented Generation)
Here you can find all final checkpoints and datasets from training the Nemo-12B model by Mistral AI & NVIDIA on the GRAG datasets.
Models:

- ORPO checkpoint: trained with ORPO (Odds Ratio Preference Optimization) on 20.7 million tokens of synthetically generated or enhanced data. Please see the GRAG-ORPO dataset (https://huggingface.co/datasets/avemio/GRAG-ORPO-ShareGPT-HESSIAN-AI) for reference.
- avemio/GRAG-NEMO-12B-SFT-HESSIAN-AI: trained with SFT (Supervised Fine-Tuning) on 1.5 billion tokens of synthetically generated or enhanced data. Please see the GRAG-SFT dataset (https://huggingface.co/datasets/avemio/GRAG-SFT-ShareGPT-HESSIAN-AI) for reference.
- avemio/GRAG-NEMO-12B-CPT-HESSIAN-AI: trained with CPT (Continued Pre-Training) on 507.5 million tokens of synthetically generated or enhanced data. Please see the GRAG-CPT dataset (https://huggingface.co/datasets/avemio/GRAG-CPT-HESSIAN-AI) for reference.

Datasets:

- avemio/GRAG-ORPO-ShareGPT-HESSIAN-AI
- avemio/GRAG-SFT-ShareGPT-HESSIAN-AI
- avemio/GRAG-CPT-HESSIAN-AI

Quantized checkpoints (Q8_0 GGUF):

- avemio/GRAG-NEMO-12B-ORPO-HESSIAN-AI-Q8_0-GGUF
- avemio/GRAG-NEMO-12B-SFT-HESSIAN-AI-Q8_0-GGUF
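Since the checkpoints above follow the standard Hugging Face layout, they can be used directly with the `transformers` library. The sketch below only assembles a retrieval-augmented chat prompt; the German system wording and the helper name `build_rag_messages` are illustrative assumptions, not the official prompt template from the model cards, and the commented lines show where the actual model call would go.

```python
# Minimal sketch: assemble a RAG-style chat prompt for a GRAG checkpoint.
# The system-message wording is an illustrative assumption, not the
# official prompt template documented for these models.

def build_rag_messages(context_docs, question):
    """Pack retrieved documents and a user question into chat messages."""
    context = "\n\n".join(
        f"Dokument {i + 1}:\n{doc}" for i, doc in enumerate(context_docs)
    )
    return [
        {
            "role": "system",
            "content": "Beantworte die Frage nur anhand des folgenden Kontexts.\n\n"
            + context,
        },
        {"role": "user", "content": question},
    ]


messages = build_rag_messages(
    ["Die GRAG-Modelle wurden auf deutschen Daten trainiert."],
    "Worauf wurden die GRAG-Modelle trainiert?",
)

# To run the actual model (this downloads ~12B-parameter weights):
# from transformers import AutoModelForCausalLM, AutoTokenizer
# tok = AutoTokenizer.from_pretrained("avemio/GRAG-NEMO-12B-SFT-HESSIAN-AI")
# model = AutoModelForCausalLM.from_pretrained("avemio/GRAG-NEMO-12B-SFT-HESSIAN-AI")
# prompt = tok.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
```

The chat-message list is the format `tokenizer.apply_chat_template` expects, so the same helper works for the SFT and ORPO checkpoints alike.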