Evaluation results don't appear
Model: Contamination/contaminated_proof_7b_v1.0
Based on this page, it doesn't seem there was any failure: https://huggingface.co/datasets/open-llm-leaderboard/requests/blob/main/Contamination/contaminated_proof_7b_v1.0_eval_request_False_float16_Original.json
But there are no results for it on this page: https://huggingface.co/datasets/open-llm-leaderboard/results/tree/main
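For reference, this is roughly how I checked; a minimal sketch using huggingface_hub (assuming results are stored under an "<org>/<model>" path like the requests repo):

```python
from huggingface_hub import HfApi

# Minimal sketch: list files in the results dataset and look for this model.
api = HfApi()
files = api.list_repo_files("open-llm-leaderboard/results", repo_type="dataset")
print([f for f in files if "contaminated_proof_7b_v1.0" in f])  # empty when I checked
```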
Also, the parameter size in the request file seems different from the pretrained model "mistral", which has "params": 7.242.
It might be because I didn't upload the model in safetensors format, but I'm not sure.
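Here is a sketch of how to pull the request file and inspect the fields I'm referring to (repo id and filename are taken from the URL above):

```python
import json
from huggingface_hub import hf_hub_download

# Fetch the eval request JSON linked above and print the fields in question.
path = hf_hub_download(
    repo_id="open-llm-leaderboard/requests",
    repo_type="dataset",
    filename="Contamination/contaminated_proof_7b_v1.0_eval_request_False_float16_Original.json",
)
with open(path) as f:
    request = json.load(f)

# "status" and "params" are the fields quoted above; other keys may vary.
print(request.get("status"), request.get("params"))
```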
Please help with this.
Thank you.
Hi!
Thanks for your issue!
The precise parameter size is extracted from the safetensors weights, so if you did not upload your model in safetensors, the parameter size is parsed from the model name instead. Please try to upload your model in safetensors next time, as requested in the Submission steps.
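For illustration, here is a minimal sketch of both sides of this; the model id and paths are placeholders, not your exact setup, and reading safetensors metadata this way assumes a recent huggingface_hub:

```python
from huggingface_hub import get_safetensors_metadata
from transformers import AutoModelForCausalLM

# Exact parameter count read from the safetensors headers, no weights downloaded.
# "mistralai/Mistral-7B-v0.1" is just an example repo that ships safetensors.
meta = get_safetensors_metadata("mistralai/Mistral-7B-v0.1")
print(sum(meta.parameter_count.values()))  # ~7.24e9, i.e. "params": 7.242

# Re-uploading a model in safetensors: safe_serialization=True writes
# model.safetensors (sharded if large) instead of pytorch_model.bin.
model = AutoModelForCausalLM.from_pretrained("path/to/local_checkpoint")
model.push_to_hub("your-username/your-model", safe_serialization=True)
```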
For the results, it would seem we had a small issue with them being pushed to the Hub after the runs finished; it should be solved today. (cc @SaylorTwift )
Feel free to reopen this issue tomorrow if there is still any problem.