

Poster

Efficient Inference of Vision Instruction-Following Models with Elastic Cache

Zuyan Liu · Benlin Liu · Jiahui Wang · Yuhao Dong · Guangyi Chen · Yongming Rao · Ranjay Krishna · Jiwen Lu

# 181
Strong Double Blind: This paper was not made available on public preprint services during the review process.
Thu 3 Oct 7:30 a.m. PDT — 9:30 a.m. PDT

Abstract:

In the field of instruction-following large language models (LLMs), especially those extended to multimodal inputs, efficient deployment is challenging, notably due to the high memory demands of their key-value (KV) caches. Conventional cache management strategies for LLMs focus on cache eviction, which often fails to address the specific needs of multimodal instruction-following models. Recognizing this gap, we introduce Elastic Cache, a novel approach that applies distinct acceleration methods to the instruction encoding and output generation stages. We investigate importance metrics at each stage and propose an ‘importance-driven cache merging’ strategy to prune redundant caches. Instead of discarding less important caches, our strategy identifies important key/value vectors as anchor points; surrounding less important caches are then merged into these anchors, preserving contextual information in the KV caches while yielding an arbitrary acceleration ratio. For instruction encoding, we use frequency to evaluate the importance of caches. For output generation, we prioritize tokens based on their ‘distance’ with an offset, so that both the initial and most recent tokens are retained. Our approach has been validated on a range of vision instruction-following models. Results demonstrate that Elastic Cache not only boosts efficiency but also notably outperforms existing pruning methods in language generation across various tasks and models. Code and model weights will be made public upon acceptance.
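The abstract describes the merging strategy only at a high level. The snippet below is a minimal, hypothetical PyTorch sketch of what an importance-driven cache merge could look like for a single attention head, assuming a per-token importance score (for example, accumulated attention weight) is already available; the function name, `keep_ratio`, and the nearest-anchor averaging rule are illustrative assumptions, not details taken from the paper.

```python
import torch


def merge_kv_cache(keys, values, importance, keep_ratio=0.5):
    """Hypothetical importance-driven KV-cache merging for one head.

    keys, values: [seq_len, head_dim] cached key/value vectors.
    importance:   [seq_len] per-token importance score (assumed given,
                  e.g. accumulated attention weight).
    keep_ratio:   fraction of cache entries kept as anchor points.
    """
    seq_len = keys.shape[0]
    num_anchors = max(1, int(seq_len * keep_ratio))

    # Keep the most important cache entries as anchor points.
    anchor_idx = torch.topk(importance, num_anchors).indices.sort().values

    # Assign every cache entry (anchors included) to its nearest anchor
    # by token position.
    all_idx = torch.arange(seq_len)
    nearest = torch.argmin((all_idx[:, None] - anchor_idx[None, :]).abs(), dim=1)

    # Merge each entry into its anchor slot by averaging keys and values.
    merged_k = keys.new_zeros(num_anchors, keys.shape[1])
    merged_v = values.new_zeros(num_anchors, values.shape[1])
    counts = keys.new_zeros(num_anchors, 1)
    merged_k.index_add_(0, nearest, keys)
    merged_v.index_add_(0, nearest, values)
    counts.index_add_(0, nearest, keys.new_ones(seq_len, 1))

    return merged_k / counts, merged_v / counts
```

In this sketch the cache shrinks from `seq_len` to `num_anchors` entries while every original key/value vector still contributes to some anchor, which is the contrast the abstract draws between merging and simple eviction; the paper's actual merging rule and importance metrics may differ.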
