feat: Add in-memory cache with TTL support #3831
base: main
Conversation
Thanks for your pull request! It looks like this may be your first contribution to a Google open source project. Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA). View this failed invocation of the CLA check for more information. For the most up to date status, view the checks section at the bottom of the pull request.
Summary of Changes: Hello @peaaceChoi, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed! This pull request introduces an in-memory caching mechanism with Time-To-Live (TTL) support to McpToolset.
Response from ADK Triaging Agent: Hello @peaaceChoi, thank you for creating this PR! It looks like you have not yet signed the Contributor License Agreement (CLA). Please visit https://cla.developers.google.com/ to sign the agreement. Once the CLA is signed, the review can proceed. Thanks!
Code Review
This pull request introduces an in-memory cache with TTL support for McpToolset, which is a great feature for improving performance and reducing server load. The implementation is mostly correct, and the accompanying tests cover the new functionality well.
My review includes a few suggestions:
- A fix for a race condition in the cache implementation to prevent cache stampedes under concurrent access.
- A suggestion to improve the structure of the cache validation logic for better readability and maintainability.
- An improvement for the new tests to make them more robust and faster by mocking time instead of using asyncio.sleep().
Overall, this is a good addition. Addressing these points will make the caching mechanism more robust and the tests more reliable.
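The stampede the review warns about happens when several concurrent callers all see an expired cache and all trigger a refresh at once. One common remedy is a double-checked `asyncio.Lock` around the refresh. The sketch below is a hypothetical, standalone `TTLCache` illustrating that pattern — it is not the actual McpToolset code, and all names are illustrative:

```python
import asyncio
import time


class TTLCache:
  """Minimal sketch: a single-value async cache guarded by a lock so that
  concurrent callers trigger at most one refresh (no cache stampede)."""

  def __init__(self, ttl_seconds=None):
    self._ttl = ttl_seconds
    self._value = None
    self._created_at = None
    self._lock = asyncio.Lock()

  def _is_valid(self):
    if self._value is None:
      return False
    if self._ttl is None:
      return True  # No TTL: cache never expires once populated.
    return (time.monotonic() - self._created_at) <= self._ttl

  async def get(self, fetch):
    # Fast path: return a valid cached value without taking the lock.
    if self._is_valid():
      return self._value
    async with self._lock:
      # Re-check after acquiring the lock: another task may have
      # refreshed the cache while we were waiting.
      if self._is_valid():
        return self._value
      self._value = await fetch()
      self._created_at = time.monotonic()
      return self._value
```

The re-check inside the lock is the key step: all waiters queue behind the first refresher, and everyone after it hits the freshly populated cache instead of fetching again.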
```python
def _is_cache_valid():
    if not self._cache or not self._cached_tool_response:
        return False

    if self._cache_ttl_seconds is None:
        return True  # No TTL set, consider cache always valid

    if self._cache_creation_time is None:
        return False  # This should not happen in a well-initialized system

    elapsed = time.monotonic() - self._cache_creation_time
    return elapsed <= self._cache_ttl_seconds
```
The _is_cache_valid function is defined inside get_tools. While this works, moving it to a private method of the McpToolset class (e.g., _is_cache_valid(self)) would improve code organization and readability. It makes the get_tools method shorter and more focused on its main logic. It also makes the cache validation logic reusable and easier to test in isolation if needed in the future.
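Hoisted into a private method, the same logic might look like the skeleton below. This is a hypothetical sketch, not the real McpToolset — only the attribute names visible in the diff above are taken from the PR:

```python
import time


class McpToolsetSketch:
  """Hypothetical skeleton showing the cache-validation closure moved to a
  private method; attribute names follow the diff in this review."""

  def __init__(self, cache=True, cache_ttl_seconds=None):
    self._cache = cache
    self._cached_tool_response = None
    self._cache_ttl_seconds = cache_ttl_seconds
    self._cache_creation_time = None

  def _is_cache_valid(self) -> bool:
    if not self._cache or not self._cached_tool_response:
      return False
    if self._cache_ttl_seconds is None:
      return True  # No TTL set: the cache never expires once populated.
    if self._cache_creation_time is None:
      return False  # Defensive: a populated cache should have a timestamp.
    elapsed = time.monotonic() - self._cache_creation_time
    return elapsed <= self._cache_ttl_seconds
```

As a method, the validation can be unit-tested directly by setting the cache attributes and asserting on the return value, without going through `get_tools` at all.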
```python
await toolset.get_tools()
mock_session.list_tools.assert_called_once()

await asyncio.sleep(1.1)
```
Using asyncio.sleep() to test time-dependent logic can make tests slow and potentially flaky. A more robust approach is to mock the time source, time.monotonic. This gives you precise control over time in your test, making it faster and more reliable.
You can use mocker.patch from pytest-mock to do this. Here's an example of how you could rewrite this test:
```python
@pytest.mark.asyncio
async def test_mcp_toolset_cache_with_ttl_expired(mocker):
    """Test that list_tools is called again after TTL expires."""
    mock_connection_params = MagicMock()
    mock_connection_params.timeout = None
    mock_session_manager, mock_session = _create_mock_session_manager()
    toolset = McpToolset(
        connection_params=mock_connection_params, cache=True, cache_ttl_seconds=1
    )
    toolset._mcp_session_manager = mock_session_manager

    # Patch time.monotonic
    mock_time = mocker.patch('time.monotonic')

    # First call, populates cache
    mock_time.return_value = 1000.0
    await toolset.get_tools()
    mock_session.list_tools.assert_called_once()

    # Second call, after TTL expired
    mock_time.return_value = 1001.1  # More than 1 second later
    await toolset.get_tools()
    assert mock_session.list_tools.call_count == 2
```
Link to Issue or Description of Change
1. Link to an existing issue (if applicable):
Testing Plan
Unit Tests:
I have added test cases related to caching to /tests/unittests/tools/test_mcp_toolset.py.
test_mcp_toolset_cache_disabled
test_mcp_toolset_cache_enabled
test_mcp_toolset_cache_with_ttl_not_expired
test_mcp_toolset_cache_with_ttl_expired
test_mcp_toolset_cache_concurrency
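The concurrency case in the list above presumably asserts that simultaneous `get_tools()` calls result in a single backend fetch. A self-contained sketch of that behavior, with a hypothetical `FakeCachingToolset` standing in for the real McpToolset and its mocks:

```python
import asyncio


class FakeCachingToolset:
  """Hypothetical stand-in for McpToolset: caches the first list_tools
  result and serializes concurrent refreshes behind a lock."""

  def __init__(self):
    self.list_tools_calls = 0
    self._cached = None
    self._lock = asyncio.Lock()

  async def get_tools(self):
    async with self._lock:
      if self._cached is None:
        self.list_tools_calls += 1
        await asyncio.sleep(0.01)  # stands in for the MCP round trip
        self._cached = ["tool_a", "tool_b"]
      return self._cached


async def check_concurrent_get_tools():
  toolset = FakeCachingToolset()
  # Twenty concurrent callers should produce exactly one backend fetch.
  results = await asyncio.gather(*(toolset.get_tools() for _ in range(20)))
  return toolset.list_tools_calls, results
```

The real test would make the same shape of assertion against the mocked session: many `asyncio.gather`-ed `get_tools()` calls, `list_tools` called once.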
I have added or updated unit tests for my change.
All unit tests pass locally.
Please include a summary of passed pytest results.
Checklist