Reuse KV cache of prefixes #5572
Closed
Changes from all commits (18 commits):

- 696e3b8: reuse prefix of kv cache
- 2e8ac1c: free block with no ref
- 619a363: reversed block id list when freeing
- 7307507: add option to enable prefix cache
- a28b706: fix use of prefix cache
- d8e9d28: fix allocation bug
- 9cffae1: use normal allocator if prefix sharing is disabled
- f10f6f4: match type hint
- 0bedc39: refactor allocator
- bc92d2b: simplify loop
- c2028ec: refactor
- 959204c: remove unnecessary reverse
- 9289ff7: fix attribute name
- 30b9d0b: Merge branch 'master' into tohtana/cache_prefix (tohtana)
- 0c8e0e6: update prefix cache at every iteration
- ea50fb5: skip looking up cache
- 1493ab7: fix prefix tokens to cache
- 07a4c44: add assertion
New file (@@ -0,0 +1,56 @@):

```python
# Copyright (c) Microsoft Corporation.
# SPDX-License-Identifier: Apache-2.0

# DeepSpeed Team

from typing import Dict, FrozenSet, List
import hashlib

import torch


def token_ids_to_hash(token_ids: torch.Tensor) -> str:
    # Convert the tensor to bytes
    tensor_bytes = token_ids.numpy().tobytes()
    hash_obj = hashlib.sha256()
    # Update the hash object with the bytes
    hash_obj.update(tensor_bytes)
    # Get the hexadecimal digest of the hash
    return hash_obj.hexdigest()


class PrefixBlockMap():

    def __init__(self, block_size: int):
        # Maps the hash of a token prefix to the KV-cache block ids that hold it.
        self.tokens_to_blocks: Dict[str, List[int]] = {}
        # Reverse map from a frozen set of block ids to the prefix hash.
        self.blocks_to_tokens: Dict[FrozenSet[int], str] = {}
        self.block_size: int = block_size

    def lookup(self, tokens: torch.Tensor) -> List[int]:
        # Find the longest prefix of `tokens` whose blocks are already cached,
        # hashing one additional block-sized chunk at a time.
        n_blocks = len(tokens) // self.block_size
        cached_blocks: List[int] = []
        for i in range(n_blocks):
            chunk = tokens[:(i + 1) * self.block_size]
            chunk_hash = token_ids_to_hash(chunk)
            if chunk_hash in self.tokens_to_blocks:
                cached_blocks = self.tokens_to_blocks[chunk_hash]
            else:
                break
        return cached_blocks

    def extend(self, tokens: torch.Tensor, new_block_ids: List[int], num_already_cached_blocks: int) -> None:
        # Register the blocks of `tokens` that are not cached yet.
        n_blocks = len(tokens) // self.block_size
        for i in range(num_already_cached_blocks, n_blocks):
            chunk = tokens[:(i + 1) * self.block_size]
            chunk_hash = token_ids_to_hash(chunk)
            if chunk_hash not in self.tokens_to_blocks:
                self.tokens_to_blocks[chunk_hash] = new_block_ids[:i + 1]
                self.blocks_to_tokens[frozenset(new_block_ids[:i + 1])] = chunk_hash

    def delete(self, block_ids: List[int]) -> None:
        blocks_set = frozenset(block_ids)
        # Iterate over a snapshot so entries can be deleted while looping.
        for used_blocks, prefix_hash in list(self.blocks_to_tokens.items()):
            # check intersection
            if blocks_set & used_blocks:
                del self.tokens_to_blocks[prefix_hash]
                del self.blocks_to_tokens[used_blocks]
```
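For context, here is a minimal usage sketch of `PrefixBlockMap`. The block size, token ids, and block ids (10, 11) are made up for illustration, and it assumes `lookup` returns a plain list when nothing is cached, as in the reconstruction above:

```python
import torch

# Hypothetical setup: 4 tokens per KV-cache block.
prefix_map = PrefixBlockMap(block_size=4)
tokens = torch.tensor([1, 2, 3, 4, 5, 6, 7, 8], dtype=torch.int32)

# Nothing cached yet: lookup finds no shared prefix.
assert prefix_map.lookup(tokens) == []

# Cache both blocks of the sequence (hypothetical block ids 10 and 11).
prefix_map.extend(tokens, new_block_ids=[10, 11], num_already_cached_blocks=0)

# A request sharing only the first block-sized chunk still reuses block 10.
other = torch.tensor([1, 2, 3, 4, 9, 9, 9, 9], dtype=torch.int32)
assert prefix_map.lookup(other) == [10]

# The full sequence reuses both blocks.
assert prefix_map.lookup(tokens) == [10, 11]

# Freeing block 11 evicts every cached prefix that uses it.
prefix_map.delete([11])
assert prefix_map.lookup(tokens) == [10]
```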
Do we still need this if statement if the for loop already starts from num_already_cached_blocks?
I'm not sure if it is necessary, but there might be a complicated case. Assume we are running two requests.
Request 1 is using cached blocks 0, 1, and 2, and has just generated the last token of the current block. It then saves the generated sequence and its hash.
Request 2 is also using cached blocks 0, 1, and 2, but its generation is a few steps behind Request 1, so the two requests are not sharing the last block. Request 2 may still generate exactly the same tokens for that block, in which case it would try to update the cache with the same hash.
In this case, I didn't want to overwrite the existing cache entry.
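A minimal sketch of that scenario (block size and block ids 10, 11, 20 are hypothetical), showing how the `if chunk_hash not in self.tokens_to_blocks` guard keeps the first-cached mapping instead of overwriting it:

```python
import torch

prefix_map = PrefixBlockMap(block_size=4)
tokens = torch.tensor([1, 2, 3, 4, 5, 6, 7, 8], dtype=torch.int32)

# Request 1 finishes its second block first and caches blocks [10, 11].
prefix_map.extend(tokens, new_block_ids=[10, 11], num_already_cached_blocks=0)

# Request 2 shares block 10 but wrote the same last tokens into its own
# block 20. Its hash of the full prefix matches Request 1's entry, so the
# guard skips the update rather than overwriting [10, 11] with [10, 20].
prefix_map.extend(tokens, new_block_ids=[10, 20], num_already_cached_blocks=1)
assert prefix_map.lookup(tokens) == [10, 11]
```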
Sounds good, so this is to ensure we keep sharing the blocks that were cached first.