Failed Open LLM Leaderboard benchmark

#890
by legolasyiu - opened

Hello team,

Please see the link below; my submission failed. Please assist.

https://huggingface.co/datasets/open-llm-leaderboard/requests/blob/main/EpistemeAI/Fireball-Mistral-Nemo-Base-2407-v1-DPO2_eval_request_False_float16_Original.json

See below:
{
"model": "EpistemeAI/Fireball-Mistral-Nemo-Base-2407-v1-DPO2",
"base_model": "unsloth/Mistral-Nemo-Base-2407-bnb-4bit",
"revision": "60231a00973d623409e5c2dca4700ab1e4f05a4b",
"precision": "float16",
"params": 12.248,
"architectures": "MistralForCausalLM",
"weight_type": "Original",
"status": "FAILED", <-------------------------------------------------------- show error
"submitted_time": "2024-08-19T22:57:26Z",
"model_type": "\ud83d\udcac : \ud83d\udcac chat models (RLHF, DPO, IFT, ...)",
"job_id": -1,
"job_start_time": null,
"use_chat_template": true,
"sender": "legolasyiu"
}

Could you please help? When will your cluster be available?
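
As a quick local sanity check (a rough sketch, not the leaderboard's actual evaluation harness), you can try loading the model with the exact revision and precision listed in the request file above; if this fails locally, the leaderboard job will most likely fail too:

import torch
from transformers import AutoConfig, AutoTokenizer, AutoModelForCausalLM

model_id = "EpistemeAI/Fireball-Mistral-Nemo-Base-2407-v1-DPO2"
revision = "60231a00973d623409e5c2dca4700ab1e4f05a4b"

# Config and tokenizer load quickly and catch most metadata problems.
config = AutoConfig.from_pretrained(model_id, revision=revision)
print(config.architectures)  # expect ["MistralForCausalLM"]
tokenizer = AutoTokenizer.from_pretrained(model_id, revision=revision)

# Full weight load in float16, matching "precision": "float16" in the request.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    revision=revision,
    torch_dtype=torch.float16,
    device_map="auto",  # requires `accelerate`
)
print(f"{model.num_parameters() / 1e9:.1f}B parameters")  # expect roughly 12.2B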

Hmm, I have the same problem with my model, DZgas/GIGABATEMAN-7B ("MistralForCausalLM").
It just FAILED, with no explanation or error code. A model that works for me everywhere else suddenly doesn't work here. I don't understand it either.

Open LLM Leaderboard org

Hi @legolasyiu,
You already opened an issue for this less than a day ago, and @alozowski already solved it here!

So:

  1. Please do not duplicate issues! It creates unnecessary work for the team when we have to read and manage the same information several times, and it makes us slower to answer, for no good reason.
  2. Hugging Face staff are spread across the world, so we're likely not in the same time zone as you. Please be patient and expect a couple of days before we handle your issue (though we often try to be faster when we can).

@DZgas Please don't comment on other users' issues; open your own so we can track what remains to be fixed. We also need the link to the request file to investigate.
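
For reference, a request file can be located in the requests dataset with huggingface_hub (a minimal sketch; the "DZgas/" prefix below is only an assumption based on the file-naming pattern in the example above):

from huggingface_hub import HfApi

api = HfApi()
files = api.list_repo_files("open-llm-leaderboard/requests", repo_type="dataset")

# Request files are stored under the model's namespace, e.g. "DZgas/..."
for f in files:
    if f.startswith("DZgas/"):
        print(f"https://huggingface.co/datasets/open-llm-leaderboard/requests/blob/main/{f}")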

clefourrier changed discussion status to closed
