---
base_model: grimjim/zephyr-wizard-kuno-royale-BF16-merge-7B
library_name: transformers
quantized_by: grimjim
license: cc-by-nc-4.0
pipeline_tag: text-generation
model-index:
  - name: grimjim/zephyr-wizard-kuno-royale-BF16-merge-7B
    results:
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: AI2 Reasoning Challenge (25-Shot)
          type: ai2_arc
          config: ARC-Challenge
          split: test
          args:
            num_few_shot: 25
        metrics:
          - type: acc_norm
            value: 68.69
            name: normalized accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=grimjim/zephyr-wizard-kuno-royale-BF16-merge-7B
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: HellaSwag (10-Shot)
          type: hellaswag
          split: validation
          args:
            num_few_shot: 10
        metrics:
          - type: acc_norm
            value: 86.87
            name: normalized accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=grimjim/zephyr-wizard-kuno-royale-BF16-merge-7B
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: MMLU (5-Shot)
          type: cais/mmlu
          config: all
          split: test
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 64.87
            name: accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=grimjim/zephyr-wizard-kuno-royale-BF16-merge-7B
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: TruthfulQA (0-shot)
          type: truthful_qa
          config: multiple_choice
          split: validation
          args:
            num_few_shot: 0
        metrics:
          - type: mc2
            value: 65.47
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=grimjim/zephyr-wizard-kuno-royale-BF16-merge-7B
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: Winogrande (5-shot)
          type: winogrande
          config: winogrande_xl
          split: validation
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 80.03
            name: accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=grimjim/zephyr-wizard-kuno-royale-BF16-merge-7B
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: GSM8k (5-shot)
          type: gsm8k
          config: main
          split: test
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 63.31
            name: accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=grimjim/zephyr-wizard-kuno-royale-BF16-merge-7B
          name: Open LLM Leaderboard
---

# zephyr-wizard-kuno-royale-BF16-merge-7B-GGUF

This is an experimental merge of pre-trained language models created using mergekit. All source model weights are BF16, avoiding issues arising from mixed-precision merges.
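
A minimal sketch for loading the merged (non-quantized) model with transformers while keeping the weights in their native BF16; `device_map="auto"` assumes accelerate is installed, and is only one way to place the model:

```python
# Minimal sketch: load the merged model in its native bfloat16.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "grimjim/zephyr-wizard-kuno-royale-BF16-merge-7B"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.bfloat16,  # keep the native BF16 weights rather than upcasting
    device_map="auto",           # requires accelerate; alternatively use .to("cuda")
)
```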

Although the Zephyr beta and WizardLM 2 7B models are touted as SOTA and generate more varied prose than base Mistral v0.1, their relatively mediocre GSM-8K benchmark results suggest only average reasoning capability in one-shot narrative text completion. The kuno-royale-v2 model was selected for the merge because of its higher GSM-8K score.

The native prompt format is Alpaca, although at least one of the ancestor models was fine-tuned to the Vicuna format.
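
For reference, a sketch of the Alpaca prompt layout as commonly written; the exact preamble wording varies across implementations, so treat it as illustrative:

```python
# Common Alpaca-style prompt template; preamble wording varies slightly
# across implementations, so this is an illustration rather than canon.
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response:\n"
)
prompt = ALPACA_TEMPLATE.format(instruction="Summarize the scene in two sentences.")
```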

Tested lightly with ChatML instruct prompts, temperature 1.0, and min-P 0.02.
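
A minimal sketch of that sampling setup, reusing the model and tokenizer loaded above; `min_p` requires a reasonably recent transformers release, and the prompt content is a placeholder:

```python
# Sketch of the light-testing setup: ChatML-style prompt, temperature 1.0, min-P 0.02.
prompt = (
    "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\nWrite a short scene set in a rainy city.<|im_end|>\n"
    "<|im_start|>assistant\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    do_sample=True,
    temperature=1.0,
    min_p=0.02,
    max_new_tokens=256,
)
# Decode only the newly generated tokens.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```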

## Merge Details

### Merge Method

This model was merged using the SLERP merge method, with [grimjim/zephyr-beta-wizardLM-2-merge-7B](https://huggingface.co/grimjim/zephyr-beta-wizardLM-2-merge-7B) as the base model.
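
For intuition, SLERP (spherical linear interpolation) interpolates along the arc between two weight tensors on a hypersphere rather than along the straight chord between them. A toy sketch of the idea, not mergekit's exact per-tensor implementation, which handles normalization and edge cases more carefully:

```python
import torch

def slerp(t: float, a: torch.Tensor, b: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Toy spherical linear interpolation between two weight tensors."""
    a_flat, b_flat = a.flatten().float(), b.flatten().float()
    a_dir = a_flat / (a_flat.norm() + eps)
    b_dir = b_flat / (b_flat.norm() + eps)
    # Angle between the two tensors, treated as directions on a hypersphere.
    omega = torch.arccos(torch.clamp(a_dir @ b_dir, -1.0, 1.0))
    if omega.abs() < eps:
        # Near-parallel tensors: fall back to plain linear interpolation.
        return (1 - t) * a + t * b
    so = torch.sin(omega)
    out = (torch.sin((1 - t) * omega) / so) * a_flat + (torch.sin(t * omega) / so) * b_flat
    return out.reshape(a.shape).to(a.dtype)
```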

### Models Merged

The following models were included in the merge:

* [core-3/kuno-royale-v2-7b](https://huggingface.co/core-3/kuno-royale-v2-7b)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
slices:
  - sources:
    - model: grimjim/zephyr-beta-wizardLM-2-merge-7B
      layer_range: [0,32]
    - model: core-3/kuno-royale-v2-7b
      layer_range: [0,32]
merge_method: slerp
base_model: grimjim/zephyr-beta-wizardLM-2-merge-7B
parameters:
  t:
    - value: 0.5
dtype: bfloat16
```
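
With mergekit installed, a configuration like this is typically applied via its `mergekit-yaml` entry point, e.g. `mergekit-yaml config.yaml ./output-model`; the output directory name here is illustrative.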