
# SmaugDolphin-34B-slerp

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the SLERP (spherical linear interpolation) merge method, with abacusai/Smaug-34B-v0.1 as the base model.
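
SLERP interpolates between corresponding weight tensors along an arc on the hypersphere rather than along the straight chord, which preserves the direction-and-magnitude geometry of the parent weights better than plain linear averaging. Below is a minimal sketch of the underlying operation (illustrative only; mergekit's actual implementation handles batching and edge cases differently):

```python
import torch

def slerp(t: float, v0: torch.Tensor, v1: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between weight tensors v0 and v1.

    t = 0 returns v0, t = 1 returns v1; intermediate values follow the
    arc between the two directions rather than the straight line.
    """
    a, b = v0.flatten().float(), v1.flatten().float()
    # Angle between the two weight vectors, computed on normalized copies.
    dot = torch.clamp((a / (a.norm() + eps)) @ (b / (b.norm() + eps)), -1.0, 1.0)
    theta = torch.acos(dot)
    # Nearly colinear vectors: fall back to ordinary linear interpolation.
    if theta < 1e-4:
        out = (1 - t) * a + t * b
    else:
        sin_theta = torch.sin(theta)
        out = (torch.sin((1 - t) * theta) / sin_theta) * a \
            + (torch.sin(t * theta) / sin_theta) * b
    return out.reshape(v0.shape).to(v0.dtype)
```

In the configuration shown below, the interpolation factor `t` is scheduled per tensor type: self-attention and MLP weights follow different curves across the layer stack, and all remaining tensors use a constant t = 0.5.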

### Models Merged

The following models were included in the merge:

* cognitivecomputations/dolphin-2.2-yi-34b-200k
* abacusai/Smaug-34B-v0.1

### Configuration

The following YAML configuration was used to produce this model:

```yaml
slices:
  - sources:
      - model: cognitivecomputations/dolphin-2.2-yi-34b-200k
        layer_range: [0, 48]
      - model: abacusai/Smaug-34B-v0.1
        layer_range: [0, 48]
# or, the equivalent models: syntax:
# models:
#   - model: psmathur/orca_mini_v3_13b
#   - model: garage-bAInd/Platypus2-13B
merge_method: slerp
base_model: abacusai/Smaug-34B-v0.1
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: float16
```
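
With the configuration saved to `config.yml`, the merge can be reproduced with mergekit's `mergekit-yaml` command-line tool or its Python entry point. The sketch below uses the Python API; the output path is illustrative and the option names reflect recent mergekit versions, so check the mergekit README for your installed release:

```python
# Rough equivalent of: mergekit-yaml config.yml ./SmaugDolphin-34B-slerp
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("config.yml", "r", encoding="utf-8") as f:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(f))

run_merge(
    merge_config,
    out_path="./SmaugDolphin-34B-slerp",
    options=MergeOptions(
        cuda=False,           # set True to run the merge on GPU
        copy_tokenizer=True,  # copy the tokenizer into the output directory
    ),
)
```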
## Inference Example
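
A hypothetical loading and generation example using Hugging Face transformers (the prompt and generation settings are placeholders; the weights are float16, so a GPU with sufficient memory, or quantized loading, is needed in practice):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "macadeliccc/SmaugDolphin-34B-slerp"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",  # requires the accelerate package
)

prompt = "Explain spherical linear interpolation in one paragraph."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```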