gpt4all-lora-epoch-3

This is an intermediate (epoch 3 / 4) checkpoint from nomic-ai/gpt4all-lora.

An autoregressive transformer fine-tuned on data curated using Atlas. This checkpoint was trained for three epochs, while the related gpt4all-lora model was trained for four. Replication instructions and data: https://github.com/nomic-ai/gpt4all
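As a minimal, hedged sketch (not an official loading recipe from the repository above), the LoRA adapter in this checkpoint could be applied to locally converted LLaMA base weights via Hugging Face `transformers` and `peft`. The helper below only defines the loading steps; `base_model_path` is a placeholder for weights you must obtain separately.

```python
def load_gpt4all_lora_epoch3(base_model_path):
    """Sketch: apply the epoch-3 LoRA adapter to local LLaMA weights.

    `base_model_path` is a placeholder for converted LLaMA-7B weights;
    obtaining and converting them is outside the scope of this card.
    """
    # Imports are deferred so the sketch can be read (and the function
    # defined) without the heavy dependencies installed.
    from transformers import LlamaForCausalLM, LlamaTokenizer
    from peft import PeftModel

    base = LlamaForCausalLM.from_pretrained(base_model_path)
    tokenizer = LlamaTokenizer.from_pretrained(base_model_path)
    # Attach the adapter weights published in this repository.
    model = PeftModel.from_pretrained(base, "nomic-ai/gpt4all-lora-epoch-3")
    return model, tokenizer
```

Generation then follows the usual `transformers` pattern, e.g. `model.generate(**tokenizer(prompt, return_tensors="pt"), max_new_tokens=64)`.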

Model Details

Model Description

Developed by: Nomic AI

Model Type: A fine-tuned, auto-regressive language model based on the transformer architecture.

Languages: English

License: GPL-3.0

Finetuned from: LLaMA

Model Sources

Repository: https://github.com/nomic-ai/gpt4all

Base Model Repository: https://github.com/facebookresearch/llama

Technical Report: GPT4All: Training an Assistant-style Chatbot with Large Scale Data Distillation from GPT-3.5-Turbo

