---
language:
- en
pipeline_tag: text-generation
license: cc-by-nc-4.0
---

# AISquare-Instruct-llama2-koen-13b-v0.9.24

## Model Details

**Developed by** [Inswave Systems](https://www.inswave.com) UI Platform Team

**Method** Supervised fine-tuning (SFT) followed by Direct Preference Optimization (DPO)

**Hardware** Trained on a single node with 4x NVIDIA A100 GPUs

**Base Model** [beomi/llama-2-koen-13b](https://huggingface.co/beomi/llama-2-koen-13b)

# Implementation Code

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

repo = "inswave/AISquare-Instruct-llama2-koen-13b-v0.9.24"

# Load the model in half precision and let Accelerate place it across available devices
model = AutoModelForCausalLM.from_pretrained(
    repo,
    return_dict=True,
    torch_dtype=torch.float16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(repo)
```

---
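
Once loaded, the model can be used for text generation via the standard `generate` API. The snippet below is a minimal sketch, not part of the original card: the prompt, decoding parameters (`max_new_tokens`, `temperature`, `top_p`), and the absence of a specific prompt template are all assumptions for illustration.

```python
# Illustrative prompt; the card does not specify a required prompt template.
prompt = "Explain the difference between SFT and DPO in one sentence."

# Tokenize and move inputs to the same device as the model's first weights.
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Sampling parameters below are example values, not recommendations from the authors.
with torch.no_grad():
    output_ids = model.generate(
        **inputs,
        max_new_tokens=128,
        do_sample=True,
        temperature=0.7,
        top_p=0.9,
    )

print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```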