25-04-2024, 01:37   #18
s12a
Senior Member
Joined: Jan 2008
Posts: 11136
https://arxiv.org/abs/2404.14469
Quote:
[Submitted on 22 Apr 2024]
SnapKV: LLM Knows What You are Looking for Before Generation

Large Language Models (LLMs) have made remarkable progress in processing extensive contexts, with the Key-Value (KV) cache playing a vital role in enhancing their performance. However, the growth of the KV cache in response to increasing input length poses challenges to memory and time efficiency. To address this problem, this paper introduces SnapKV, an innovative and fine-tuning-free approach that efficiently minimizes KV cache size while still delivering comparable performance in real-world applications.
We discover that each attention head in the model consistently focuses on specific prompt attention features during generation. Meanwhile, this robust pattern can be obtained from an 'observation' window located at the end of the prompts. Drawing on this insight, SnapKV automatically compresses KV caches by selecting clustered important KV positions for each attention head. Our approach significantly reduces the growing computational overhead and memory footprint when processing long input sequences. Specifically, SnapKV achieves a consistent decoding speed with a 3.6x increase in generation speed and an 8.2x enhancement in memory efficiency compared to baseline when processing inputs of 16K tokens. At the same time, it maintains comparable performance to baseline models across 16 long sequence datasets. Moreover, SnapKV can process up to 380K context tokens on a single A100-80GB GPU using HuggingFace implementation with minor changes, exhibiting only a negligible accuracy drop in the Needle-in-a-Haystack test. Further comprehensive studies suggest SnapKV's potential for practical applications.
https://twitter.com/arankomatsuzaki/...48162753425613

Several of the recently released LLMs support very large contexts, but to actually use them you need a lot of memory to store the KV cache (the model's "attention" table over the context being processed). The paper proposes a method to compress that cache (saving memory) while also speeding up text generation, at a minimal cost in accuracy; a rough sketch of the idea follows below.
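Reading the abstract, the recipe seems to be: look at how the queries of a small "observation" window at the end of the prompt attend to the rest of the prompt, pool those scores so that clusters of neighbouring positions are favoured, and keep only the top-scoring KV positions per attention head, plus the window itself. Below is a minimal hand-written PyTorch sketch of that idea, not the authors' implementation; the function name and the parameters (kv_budget, window_size, kernel_size) are illustrative.

Code:
import torch
import torch.nn.functional as F

def snapkv_select(keys, values, queries, kv_budget=1024, window_size=32, kernel_size=7):
    # keys, values, queries: [num_heads, seq_len, head_dim] for one layer of the prompt.
    num_heads, seq_len, head_dim = keys.shape
    if seq_len <= kv_budget + window_size:
        return keys, values  # short prompt: nothing to compress

    prefix_len = seq_len - window_size
    obs_q = queries[:, -window_size:, :]      # queries of the observation window
    prefix_k = keys[:, :prefix_len, :]        # keys of everything before the window

    # How strongly the observation window attends to each earlier position, per head.
    attn = torch.einsum("hwd,hpd->hwp", obs_q, prefix_k) / head_dim ** 0.5
    score = attn.softmax(dim=-1).sum(dim=1)   # [num_heads, prefix_len]

    # Average pooling favours clusters of neighbouring important positions
    # over isolated spikes ("clustered important KV positions" in the abstract).
    score = F.avg_pool1d(score.unsqueeze(1), kernel_size, stride=1,
                         padding=kernel_size // 2).squeeze(1)[:, :prefix_len]

    # Keep the kv_budget top-scoring prefix positions per head, plus the window itself.
    top_idx = score.topk(kv_budget, dim=-1).indices.sort(dim=-1).values
    gather = top_idx.unsqueeze(-1).expand(-1, -1, head_dim)
    new_keys = torch.cat([prefix_k.gather(1, gather), keys[:, -window_size:, :]], dim=1)
    new_values = torch.cat([values[:, :prefix_len, :].gather(1, gather),
                            values[:, -window_size:, :]], dim=1)
    return new_keys, new_values

In a real decoder something like this would run once per layer at the end of prefill, before the cache is handed to the decoding loop, so the cache stays around kv_budget + window_size entries per head instead of growing with the full prompt; that is where the speed and memory gains reported in the abstract come from.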
