Collections including paper arxiv:1810.04805

- Lost in the Middle: How Language Models Use Long Contexts
  Paper • 2307.03172 • Published • 35
- BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
  Paper • 1810.04805 • Published • 14
- Attention Is All You Need
  Paper • 1706.03762 • Published • 41
- Llama 2: Open Foundation and Fine-Tuned Chat Models
  Paper • 2307.09288 • Published • 239

- Nemotron-4 15B Technical Report
  Paper • 2402.16819 • Published • 42
- Griffin: Mixing Gated Linear Recurrences with Local Attention for Efficient Language Models
  Paper • 2402.19427 • Published • 52
- RWKV: Reinventing RNNs for the Transformer Era
  Paper • 2305.13048 • Published • 12
- Reformer: The Efficient Transformer
  Paper • 2001.04451 • Published

- Word Alignment by Fine-tuning Embeddings on Parallel Corpora
  Paper • 2101.08231 • Published • 1
- Not Low-Resource Anymore: Aligner Ensembling, Batch Filtering, and New Datasets for Bengali-English Machine Translation
  Paper • 2009.09359 • Published • 1
- Unsupervised Multilingual Alignment using Wasserstein Barycenter
  Paper • 2002.00743 • Published
- Sinhala-English Word Embedding Alignment: Introducing Datasets and Benchmark for a Low Resource Language
  Paper • 2311.10436 • Published

- Attention Is All You Need
  Paper • 1706.03762 • Published • 41
- BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
  Paper • 1810.04805 • Published • 14
- Universal Language Model Fine-tuning for Text Classification
  Paper • 1801.06146 • Published • 6
- Language Models are Few-Shot Learners
  Paper • 2005.14165 • Published • 11

- Attention Is All You Need
  Paper • 1706.03762 • Published • 41
- MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework
  Paper • 2308.00352 • Published • 2
- BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
  Paper • 1810.04805 • Published • 14
- XLNet: Generalized Autoregressive Pretraining for Language Understanding
  Paper • 1906.08237 • Published

- Universal Language Model Fine-tuning for Text Classification
  Paper • 1801.06146 • Published • 6
- BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
  Paper • 1810.04805 • Published • 14
- FlashAttention: Fast and Memory-Efficient Exact Attention with IO-Awareness
  Paper • 2205.14135 • Published • 9
- SentencePiece: A simple and language independent subword tokenizer and detokenizer for Neural Text Processing
  Paper • 1808.06226 • Published • 1

- Attention Is All You Need
  Paper • 1706.03762 • Published • 41
- BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
  Paper • 1810.04805 • Published • 14
- Improving Text Embeddings with Large Language Models
  Paper • 2401.00368 • Published • 79
- Gemini: A Family of Highly Capable Multimodal Models
  Paper • 2312.11805 • Published • 45

- SMOTE: Synthetic Minority Over-sampling Technique
  Paper • 1106.1813 • Published • 1
- Scikit-learn: Machine Learning in Python
  Paper • 1201.0490 • Published • 1
- Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation
  Paper • 1406.1078 • Published
- Distributed Representations of Sentences and Documents
  Paper • 1405.4053 • Published

- Improving Text Embeddings with Large Language Models
  Paper • 2401.00368 • Published • 79
- BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
  Paper • 1810.04805 • Published • 14
- Metadata Might Make Language Models Better
  Paper • 2211.10086 • Published • 4
- DecoderLens: Layerwise Interpretation of Encoder-Decoder Transformers
  Paper • 2310.03686 • Published • 3

- Mistral 7B
  Paper • 2310.06825 • Published • 47
- BloombergGPT: A Large Language Model for Finance
  Paper • 2303.17564 • Published • 19
- BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
  Paper • 1810.04805 • Published • 14
- DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter
  Paper • 1910.01108 • Published • 14