No Baseline (yet?)

#2
by LordTwave

If you also wanted to prove that contamination is a "bad" thing for a model's customers, you should also be working on a model that uses a parallel, "equal" dataset (but not the same one!).

Actually, contamination itself doesn't improve a model's real capability; it only inflates the model's scores on the specific datasets that leaked into training, such as the benchmark datasets used by the Hugging Face leaderboard.
However, the model cannot maintain that performance on other tasks, which is what people usually want. (Nobody wants a model that only performs well on benchmark datasets.)
This is detrimental to a model's customers, because it can confuse or mislead them.
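As a side note on what "inflated scores on specific datasets" looks like in practice: contamination is usually detected by searching for long word n-gram overlap between the fine-tuning data and a benchmark's test split. Below is a minimal sketch of that check; the training dataset name (`my_org/my_sft_data`) and its `text` column are hypothetical placeholders, and `cais/mmlu` simply stands in for one of the leaderboard benchmarks.

```python
# Hypothetical sketch: flag training samples that share long word n-grams
# with a benchmark's test split. Dataset and column names are placeholders,
# not the actual data used for either model.
from datasets import load_dataset

def ngrams(text, n=13):
    """Return the set of word n-grams in a text."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

train = load_dataset("my_org/my_sft_data", split="train")  # hypothetical training set
bench = load_dataset("cais/mmlu", "all", split="test")     # one leaderboard benchmark

# Collect every n-gram that appears in the benchmark questions.
bench_grams = set()
for row in bench:
    bench_grams |= ngrams(row["question"])

# A training sample is suspect if it shares any long n-gram with the benchmark.
contaminated = [
    i for i, row in enumerate(train)
    if ngrams(row["text"]) & bench_grams
]
print(f"{len(contaminated)} of {len(train)} training samples overlap the benchmark")
```

A 13-word window is a commonly used size for this kind of check; shorter n-grams tend to flag benign overlaps.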

If you want to know the possible scores with an equal dataset, I recommend checking the [HuggingFaceH4/zephyr-7b-beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta) model.
That model was trained on UltraChat data, similar to my model, and also used DPO, which I did not.
It is not exactly the model you inquired about, but it is similar. However, the scores are significantly different.

The actual performance of my model is much lower than that of zephyr-7b-beta, even though its leaderboard scores are much higher.
