
Uploaded model

Inference

pip install transformers
# Load model directly
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("Mr-Vicky-01/Facebook-Bart-Qna")
model = AutoModelForSeq2SeqLM.from_pretrained("Mr-Vicky-01/Facebook-Bart-Qna")

def generate_answer(text):
    # Tokenize the input, truncating anything beyond the model's limit
    inputs = tokenizer([text], return_tensors='pt', truncation=True)
    # Generate the answer token ids
    output_ids = model.generate(inputs['input_ids'], max_length=512)
    # Decode back to a string, dropping special tokens
    answer = tokenizer.decode(output_ids[0], skip_special_tokens=True)
    return answer

question = """Please answer this question: What is Artificial Intelligence?"""
answer = generate_answer(question)
print(answer)
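The model expects inputs wrapped in the `"Please answer this question: ..."` prefix shown above. A small helper can keep that format consistent across calls; the function below is an illustrative convenience, not part of the transformers API or this repository:

```python
def build_prompt(question: str) -> str:
    """Wrap a raw question in the prompt format the model card uses.

    Illustrative helper (an assumption, not part of the model's API):
    reproduces the "Please answer this question: <q>" prefix.
    """
    return f"Please answer this question: {question.strip()}"

prompt = build_prompt("What is Artificial Intelligence?")
print(prompt)  # Please answer this question: What is Artificial Intelligence?
```

The wrapped string can then be passed directly to `generate_answer`.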
Model size: 406M params · Tensor type: F32 (Safetensors)

Dataset used to train Mr-Vicky-01/Facebook-Bart-Qna