After model fusion, did you continue with finetuning?

#5
by postitive666 - opened

After performing TIES weight-fusion initialization, did you proceed with finetuning?

No, this version is just merged; there was no finetuning.
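
For context, a minimal per-tensor sketch of what a TIES merge does (trim, elect sign, disjoint merge), written in PyTorch. The `density` value and the weighting below are illustrative assumptions, not the exact settings used for this model:

```python
import torch

def ties_merge(base: torch.Tensor, finetuned: list[torch.Tensor],
               density: float = 0.5) -> torch.Tensor:
    """Merge one parameter tensor from several finetuned models into the base.

    Illustrative sketch only -- density and weighting are assumptions,
    not the settings used for this merge.
    """
    # 1. Task vectors: how far each finetuned model moved from the base.
    deltas = [ft - base for ft in finetuned]

    # 2. Trim: keep only the top-`density` fraction of entries by magnitude.
    trimmed = []
    for d in deltas:
        k = max(1, int(d.numel() * density))
        threshold = d.abs().flatten().topk(k).values[-1]
        trimmed.append(torch.where(d.abs() >= threshold, d, torch.zeros_like(d)))

    # 3. Elect sign: sign of the summed deltas, i.e. majority weighted by magnitude.
    stacked = torch.stack(trimmed)
    elected_sign = torch.sign(stacked.sum(dim=0))

    # 4. Disjoint merge: average only the entries that agree with the elected sign.
    agree = torch.sign(stacked) == elected_sign
    merged_delta = (stacked * agree).sum(dim=0) / agree.sum(dim=0).clamp(min=1)

    return base + merged_delta
```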

Thank you for your answer. If we continue training, do you have any suggestions regarding the content and proportion of the dataset? Fine-tuning on top of a merge like this often turns out to be counterproductive.

Honestly, I already have an extremely full and decent-quality dataset to train with. I'll link it below. However, some of the data is a little outdated. My suggestion would be one of two things:

  1. Take this dataset and feed it through a model like Claude 3.5 or GPT-4o with a complex system prompt to improve its quality and fix mistakes (see the first sketch below).

or

  2. Do what I did to create the dataset, but make a new one with updated data (gather data already curated on the web in places like GitHub and Hugging Face). Then convert the data to the same format, combine the datasets into one, dedupe, and uncensor them (see the second sketch below).

The reason I recommend doing this instead of generating your own data is that most of the "data" you need to train an LLM is already free and on the internet for you to download. It's just that the quality and format aren't always the best. Unless you are making a highly specialized dataset, I think it's much more economical to use existing data.

https://huggingface.co/datasets/rombodawg/Everything_Instruct

thanks !!

I have two more questions to ask you:

Is your approach to fine-tune the model yourself first and then merge it? Would that give better performance than merging first and then fine-tuning?

If I fine-tune a model for self-awareness (call it Model A), what will happen after merging it with the original model?

You can read about my method in detail here; it will save me having to explain it all again. But the nice thing about LoRA is that you can train once on the base model, save the adapter separately from the model, and merge it into whatever model you want. It's great for experimentation to see what works best.

https://docs.google.com/document/d/1OjbjU5AOz4Ftn9xHQrX3oFQGhQ6RDUuXQipnQ9gn6tU/edit?usp=sharing
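
For a concrete picture of that adapter reuse, a minimal sketch with the PEFT library; the model names, adapter path, and LoRA hyperparameters are placeholders:

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model, PeftModel

# 1. Train a LoRA adapter once, on the base model.
base = AutoModelForCausalLM.from_pretrained("base-model-name")  # placeholder
lora_config = LoraConfig(r=16, lora_alpha=32,
                         target_modules=["q_proj", "v_proj"])
model = get_peft_model(base, lora_config)
# ... run your training loop here ...
model.save_pretrained("my-lora-adapter")  # saves only the adapter weights

# 2. Later, attach the same adapter to a *different* (architecture-compatible)
#    model, e.g. a merged checkpoint, and bake it in.
target = AutoModelForCausalLM.from_pretrained("merged-model-name")  # placeholder
with_adapter = PeftModel.from_pretrained(target, "my-lora-adapter")
merged = with_adapter.merge_and_unload()  # folds the LoRA weights into the model
merged.save_pretrained("merged-plus-adapter")
```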
