Paper page - Fewer Truncations Improve Language Modeling

Published on Apr 16, 2024

Abstract

Best-fit Packing optimizes how documents are packed into training sequences for large language models, improving downstream performance and reducing hallucinations.

In large language model training, input documents are typically concatenated together and then split into sequences of equal length to avoid padding tokens. Despite its efficiency, the concatenation approach compromises data integrity -- it inevitably breaks many documents into incomplete pieces, leading to excessive truncations that hinder the model from learning to compose logically coherent and factually consistent content that is grounded in the complete context. To address the issue, we propose Best-fit Packing, a scalable and efficient method that packs documents into training sequences through length-aware combinatorial optimization. Our method completely eliminates unnecessary truncations while retaining the same training efficiency as concatenation. Empirical results from both text and code pre-training show that our method achieves superior performance (e.g., relatively +4.7% on reading comprehension; +16.8% in context following; and +9.2% on program synthesis), and reduces closed-domain hallucination effectively by up to 58.3%.
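The length-aware combinatorial optimization described above can be sketched as a Best-Fit-Decreasing bin-packing pass over document lengths: split only documents longer than the sequence length, then place each chunk into the fullest training sequence that can still hold it. This is a minimal illustrative sketch, not the paper's implementation (which the authors scale up with more efficient data structures); all names here are invented for the example.

```python
def best_fit_pack(doc_lens, max_len):
    """Pack document chunks into training sequences of capacity
    max_len using the Best-Fit-Decreasing heuristic.

    Documents longer than max_len are first split into chunks of at
    most max_len; these are the only truncations that ever occur.
    Returns the remaining slack in each packed sequence.
    """
    # Split oversized documents; everything else stays whole.
    chunks = []
    for n in doc_lens:
        while n > max_len:
            chunks.append(max_len)
            n -= max_len
        if n > 0:
            chunks.append(n)

    # Best-Fit-Decreasing: largest chunks first, each into the
    # fullest sequence (smallest remaining capacity) that fits.
    bins = []  # remaining capacity per training sequence
    for c in sorted(chunks, reverse=True):
        best = None
        for i, cap in enumerate(bins):
            if cap >= c and (best is None or cap < bins[best]):
                best = i
        if best is None:
            bins.append(max_len - c)  # open a new sequence
        else:
            bins[best] -= c
    return bins
```

For example, documents of lengths `[2048, 1500, 600, 400, 100]` with `max_len=2048` pack into three sequences with slack `[0, 48, 1448]`: no document is broken, whereas plain concatenation would split two of them across sequence boundaries.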

arXiv: 2404.10830


Models citing this paper 13

sbintuitions/modernbert-ja-130m Fill-Mask • 0.1B • Updated May 1, 2025 • 12.3k • 48

DiscoResearch/Llama3-German-8B Text Generation • 8B • Updated Sep 10, 2024 • 825 • 39

sbintuitions/modernbert-ja-310m Fill-Mask • 0.3B • Updated May 1, 2025 • 7.71k • 23

DiscoResearch/Llama3-German-8B-32k Text Generation • 8B • Updated Sep 10, 2024 • 13 • 14


Datasets citing this paper 0

No datasets cite this paper.


Spaces citing this paper 10

Collections including this paper 1