import torch
from peft import AutoPeftModelForCausalLM

path_to_adapter = "macadeliccc/Samantha-Qwen-2-7B-lora"

# Load the base model with the LoRA adapter applied on top
model = AutoPeftModelForCausalLM.from_pretrained(
    path_to_adapter,  # adapter repo id or local output directory
    device_map="auto",
    trust_remote_code=True,
).eval()

# Load the extra (non-LoRA) weights for the vision module, resampler, and embedding tokens.
# Note: this expects path_to_adapter to be a local directory; if it is a Hub repo id,
# download vpm_resampler_embedtokens.pt first (e.g. with huggingface_hub.hf_hub_download).
vpm_resampler_embedtokens_weight = torch.load(f"{path_to_adapter}/vpm_resampler_embedtokens.pt")

# strict=False because only a subset of the model's parameters is being restored
msg = model.load_state_dict(vpm_resampler_embedtokens_weight, strict=False)
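
With the adapter and extra weights loaded, the model can be used for chat-style generation like any other Qwen2 causal LM. The following is a minimal sketch, assuming the tokenizer can be loaded from the adapter repo (otherwise fall back to the Qwen/Qwen2-7B base tokenizer) and that the model uses the standard Qwen2 ChatML chat template; the prompt content is only illustrative.

from transformers import AutoTokenizer

# assumption: the adapter repo ships a tokenizer; otherwise load it from Qwen/Qwen2-7B
tokenizer = AutoTokenizer.from_pretrained(path_to_adapter, trust_remote_code=True)

messages = [
    {"role": "system", "content": "You are Samantha, a helpful and empathetic assistant."},
    {"role": "user", "content": "What does friendship mean to you?"},
]

# apply_chat_template formats the conversation with the model's chat template (ChatML for Qwen2)
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

with torch.no_grad():
    output_ids = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.7)

print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))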

Model tree for macadeliccc/Samantha-Qwen2-7B-LoRa

Base model: Qwen/Qwen2-7B