
Zephyr-7B-Customer-Support-Finetuned6

Introduction

This repository hosts the zephyr-7b-customer-support-finetuned6 model, a variant of HuggingFaceH4/zephyr-7b-beta fine-tuned for customer support scenarios. It was trained as a lightweight PEFT adapter (via Hugging Face AutoTrain) to improve accuracy on customer queries.

Fine-Tuning Details

The model was fine-tuned using the autotrain llm command with the following specifications:

  • Base Model: HuggingFaceH4/zephyr-7b-beta
  • Learning Rate: 2e-4
  • Batch Size: 12
  • Training Epochs: 10
  • Strategy: Supervised Fine-Tuning (SFT)
  • Evaluation: Accuracy
  • Scheduler: Cosine
  • Target Modules: q_proj, v_proj

This adapter-based approach specializes the attention projections (q_proj, v_proj) for customer-support queries while leaving the base model weights unchanged.
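The specifications above correspond roughly to an autotrain llm invocation like the one below. This is an illustrative sketch, not the exact command used: flag names vary between autotrain-advanced versions, and the project name and data path shown here are placeholders.

```shell
autotrain llm --train \
  --model HuggingFaceH4/zephyr-7b-beta \
  --project-name zephyr-7b-customer-support-finetuned6 \
  --data-path ./data \
  --trainer sft \
  --lr 2e-4 \
  --batch-size 12 \
  --epochs 10 \
  --scheduler cosine \
  --target-modules q_proj,v_proj \
  --use-peft
```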

Installation and Setup

Install the necessary packages to use the model:

pip install transformers
pip install torch
pip install peft

Usage

To use the fine-tuned model, follow this simple Python script:

# Import libraries
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

# Load the base model and tokenizer
model = AutoModelForCausalLM.from_pretrained('HuggingFaceH4/zephyr-7b-beta')
tokenizer = AutoTokenizer.from_pretrained('HuggingFaceH4/zephyr-7b-beta')

# Load the adapter
model.load_adapter('erfanvaredi/zephyr-7b-customer-support-finetuned6')

# Load the pipeline
pipe_PEFT = pipeline(
    'text-generation',
    model = model,
    tokenizer=tokenizer
)

# Build the chat messages
messages = [
    {
        "role": "system",
        "content": "Act as a helpful customer support assistant, who follows user's inquiries and invoice-related problems.",
    },
    {"role": "user", "content": "tell me about canceling the newsletter subscription"},
]
prompt = pipe_PEFT.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# Example query
outputs = pipe_PEFT(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"].split('<|assistant|>')[1])

# Certainly! If you'd like to cancel your newsletter subscription, you can typically do so by following these steps:
# 
# 1. Look for an "Unsubscribe" or "Cancel Subscription" link at the bottom of the newsletter email you received. Click on this link to initiate the cancellation process.
# 
# 2. If you're having trouble finding the link, you can also log in to your account on the company's website or platform. Go to your account settings or preferences, and look for an option to manage or cancel your subscriptions.
#
# 3. Once you've found the cancellation link or option, follow the prompts to confirm that you want to unsubscribe. This may involve entering your email address or account information to verify your identity.
# 
# 4. After you've successfully canceled your subscription, you should stop receiving newsletters from the company. If you continue to receive emails, you may need to wait for a processing period or contact customer support for further assistance.
# 
# I hope that helps! Let me know if you have any other questions or concerns.
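The split('<|assistant|>') step in the script above relies on how Zephyr-style chat templates lay out the prompt: each turn sits behind a role marker, and the generation prompt ends with the assistant marker, so everything after it is the model's reply. The sketch below illustrates that layout with plain string operations; it is a simplified approximation of the template, not the exact Jinja template shipped with the tokenizer.

```python
# Approximate Zephyr-style chat formatting (illustrative only).
def format_zephyr_prompt(messages):
    parts = []
    for m in messages:
        # Each turn: role marker, newline, content, end-of-sequence token.
        parts.append(f"<|{m['role']}|>\n{m['content']}</s>\n")
    # add_generation_prompt=True appends the assistant marker last.
    parts.append("<|assistant|>\n")
    return "".join(parts)

messages = [
    {"role": "system", "content": "Act as a helpful customer support assistant."},
    {"role": "user", "content": "tell me about canceling the newsletter subscription"},
]
prompt = format_zephyr_prompt(messages)

# The pipeline returns the prompt plus the continuation; splitting on the
# assistant marker recovers only the model's reply.
generated = prompt + "You can unsubscribe from the account settings page."
reply = generated.split("<|assistant|>")[1].strip()
```

This is why taking index [1] of the split works: the marker occurs exactly once, at the boundary between the prompt and the generated reply.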

License

This project is licensed under the MIT License.

Contact

For inquiries or collaboration, please reach out on LinkedIn.

